<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "journalpublishing3.dtd">
<article xml:lang="en" article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
<?release-delay 0|0?>
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">OL</journal-id>
<journal-title-group>
<journal-title>Oncology Letters</journal-title>
</journal-title-group>
<issn pub-type="ppub">1792-1074</issn>
<issn pub-type="epub">1792-1082</issn>
<publisher>
<publisher-name>D.A. Spandidos</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3892/ol.2020.11576</article-id>
<article-id pub-id-type="publisher-id">OL-0-0-11576</article-id>
<article-categories>
<subj-group>
<subject>Articles</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Ensemble classification for predicting the malignancy level of pulmonary nodules on chest computed tomography images</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Xiao</surname><given-names>Ning</given-names></name>
<xref rid="af1-ol-0-0-11576" ref-type="aff">1</xref></contrib>
<contrib contrib-type="author"><name><surname>Qiang</surname><given-names>Yan</given-names></name>
<xref rid="af1-ol-0-0-11576" ref-type="aff">1</xref>
<xref rid="c1-ol-0-0-11576" ref-type="corresp"/></contrib>
<contrib contrib-type="author"><name><surname>Zia</surname><given-names>Muhammad Bilal</given-names></name>
<xref rid="af1-ol-0-0-11576" ref-type="aff">1</xref></contrib>
<contrib contrib-type="author"><name><surname>Wang</surname><given-names>Sanhu</given-names></name>
<xref rid="af2-ol-0-0-11576" ref-type="aff">2</xref></contrib>
<contrib contrib-type="author"><name><surname>Lian</surname><given-names>Jianhong</given-names></name>
<xref rid="af3-ol-0-0-11576" ref-type="aff">3</xref></contrib>
</contrib-group>
<aff id="af1-ol-0-0-11576"><label>1</label>College of Information and Computer, Taiyuan University of Technology, Taiyuan, Shanxi 030600, P.R. China</aff>
<aff id="af2-ol-0-0-11576"><label>2</label>Department of Computer Science and Technology, Lvliang University, Lvliang, Shanxi 033000, P.R. China</aff>
<aff id="af3-ol-0-0-11576"><label>3</label>Department of Thoracic Surgery, Shanxi Cancer Hospital, Taiyuan, Shanxi 030000, P.R. China</aff>
<author-notes>
<corresp id="c1-ol-0-0-11576"><italic>Correspondence to</italic>: Professor Yan Qiang, College of Information and Computer, Taiyuan University of Technology, 209 Daxue Street, Taiyuan, Shanxi 030600, P.R. China, E-mail: <email>qiangyan@tyut.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="ppub">
<month>07</month>
<year>2020</year></pub-date>
<pub-date pub-type="epub">
<day>27</day>
<month>04</month>
<year>2020</year></pub-date>
<volume>20</volume>
<issue>1</issue>
<fpage>401</fpage>
<lpage>408</lpage>
<history>
<date date-type="received"><day>30</day><month>05</month><year>2019</year></date>
<date date-type="accepted"><day>13</day><month>03</month><year>2020</year></date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2020, Spandidos Publications</copyright-statement>
<copyright-year>2020</copyright-year>
</permissions>
<abstract>
<p>Early identification and classification of pulmonary nodules are essential for improving the survival rates of individuals with lung cancer and are considered to be key requirements for computer-assisted diagnosis. To address this topic, the present study proposed a method for predicting the malignant phenotype of pulmonary nodules based on weighted voting rules. This method used the pulmonary nodule regions of interest as the input data and extracted the features of the pulmonary nodules using the Denoising Auto Encoder and ResNet-18. Moreover, the method also used texture and shape features to assess the malignant phenotype of the pulmonary nodules. Based on their classification accuracy (Acc), the different classifiers were assigned different weights. Finally, an integrated classifier was obtained to score the malignant phenotype of the pulmonary nodules. The present study included training and testing experiments conducted by extracting the corresponding lung nodule image data from the Lung Image Database Consortium-Image Database Resource Initiative. The results of the present study indicated a final classification Acc of 93.10&#x00B1;2.4&#x0025;, demonstrating the feasibility and effectiveness of the proposed method. This method combines the powerful feature extraction ability of deep learning with the representational ability of traditional image features.</p>
</abstract>
<kwd-group>
<kwd>pulmonary nodules</kwd>
<kwd>lung cancer</kwd>
<kwd>malignancy level classification</kwd>
<kwd>deep learning</kwd>
<kwd>ensemble classification</kwd>
</kwd-group></article-meta>
</front>
<body>
<sec sec-type="intro">
<title>Introduction</title>
<p>Lung cancer is one of the most common types of malignant tumor globally. According to cancer statistics, 2,093,000 new lung cancer cases were reported in 2018 (<xref rid="b1-ol-0-0-11576" ref-type="bibr">1</xref>), accounting for 12.22&#x0025; of total cancer cases worldwide. A total of 1,761,000 patients died due to lung cancer in 2018, which represents 19.78&#x0025; of total mortality due to cancer in the same period (<xref rid="b1-ol-0-0-11576" ref-type="bibr">1</xref>). In general, &#x003E;80&#x0025; of cases of lung cancer are attributed to non-small cell lung cancer (<xref rid="b2-ol-0-0-11576" ref-type="bibr">2</xref>). Early-stage non-small cell lung cancer usually occurs in the form of pulmonary nodules, which are not readily detected, and are often asymptomatic prior to excessive proliferation. This leads to frequent misdiagnoses. Low-dose computed tomography (LDCT) is an effective screening method for pulmonary nodules due to its low radiation dose and cost. Automatic detection and diagnosis of pulmonary nodules from chest CT images usually includes segmentation of the pulmonary parenchyma from the CT images, detection of suspected nodules in the parenchyma of the lung, extraction of the characteristics of pulmonary nodules and classification of the pulmonary nodules, which is the key step in supplying supplementary suggestions for diagnosis.</p>
<p>When making a diagnosis using medical imaging, physicians rate the characteristics (texture, margin, lobulation and calcification) of pulmonary nodules using empirical and subjective methods to determine their malignant phenotype (<xref rid="b3-ol-0-0-11576" ref-type="bibr">3</xref>&#x2013;<xref rid="b7-ol-0-0-11576" ref-type="bibr">7</xref>). This method is subjective and highly dependent on the physician&#x0027;s experience. Concomitantly, physical examination and imaging of lung nodules are becoming increasingly onerous and present a major challenge for physicians, affecting the diagnostic classification accuracy (Acc) of lung nodules. The continuous development of machine learning has enabled the application of advanced learning techniques in the research and diagnosis of a number of diseases (<xref rid="b8-ol-0-0-11576" ref-type="bibr">8</xref>&#x2013;<xref rid="b17-ol-0-0-11576" ref-type="bibr">17</xref>). The information derived from lung nodule image data can be combined with machine learning in order to investigate the association between lung cancer incidence and clinicopathological features (<xref rid="b18-ol-0-0-11576" ref-type="bibr">18</xref>). Supervised machine learning uses the correspondence between data and labels to derive a mapping association between them, whereas unsupervised machine learning is used in cases where samples cannot be effectively classified, such as in the absence of sufficient prior labels. The automated rating of lung nodules using machine learning can improve the efficiency of inspections while reducing human error (<xref rid="b19-ol-0-0-11576" ref-type="bibr">19</xref>).</p>
<p>The present study proposed a method based on ensemble learning designed to classify the malignant levels of pulmonary nodules, using features such as morphological texture features (TF) and deep semantic features. These approaches are more suitable than previous computer-aided diagnosis (<xref rid="b9-ol-0-0-11576" ref-type="bibr">9</xref>&#x2013;<xref rid="b18-ol-0-0-11576" ref-type="bibr">18</xref>) for clinical practice and replace a single identification method that can only distinguish between benign and malignant nodule states. As the precision of a single classifier (<xref rid="b9-ol-0-0-11576" ref-type="bibr">9</xref>&#x2013;<xref rid="b13-ol-0-0-11576" ref-type="bibr">13</xref>) is not high and does not meet the clinical diagnosis requirements, the present study used the ensemble learning method to integrate three single classifiers according to certain strategies, namely comprehensive analysis of the characterization information of lung nodules and automatic assignment of the lung nodule malignancy, in order to improve the Acc. The training and testing protocols used in the present study included datasets from the Lung Image Database Consortium Image Database Resource Initiative (LIDC-IDRI). Radiologists assigned the corresponding features of pulmonary nodule lesions according to the image files of each study example (<xref rid="b20-ol-0-0-11576" ref-type="bibr">20</xref>,<xref rid="b21-ol-0-0-11576" ref-type="bibr">21</xref>).</p>
<p>The results of the present study demonstrated that the specific characteristics of CT images and pulmonary nodules can be used to quantitatively evaluate lung nodule features based on weighted voting, which differs from the previous classification of the benign and malignant pulmonary nodule algorithm. This process automatically scores the malignant phenotype levels of the lung nodules. In addition, the computer tomographic image features and the different semantic features are matched. The correspondence between different modalities can be used to design a more precise personalized treatment plan. Furthermore, a scheme is proposed for ensemble learning of different classifiers by training multiple classifiers, combining them according to the determined integration strategy and comprehensively assessing the final result. The feasibility of the proposed method was demonstrated in the LIDC-IDRI dataset and was compared with state-of-the-art methods for pulmonary nodule diagnosis and assessment.</p>
</sec>
<sec sec-type="materials|methods">
<title>Materials and methods</title>
<sec>
<title/>
<sec>
<title/>
<p>The materials and methods section describes the proposed method for classification of pulmonary nodules using ensemble learning. Characteristics of pulmonary nodules were extracted using techniques such as the Convolutional Neural Network (CNN) features of supervised machine learning methods, the Denoising Auto Encoder (DAE) features of unsupervised machine learning methods and the Texture Feature (TF) and Shape Feature (SF) of lung nodules. The CNN, autoencoder, and TF and SF techniques were combined with the weighted voting model to predict the semantic feature scores of the lung nodules. According to the classification error rate of each sub-classifier, the weights of the different classifiers were determined, and an integrated lung nodule classification model was obtained to determine the malignant phenotype level (<xref rid="f1-ol-0-0-11576" ref-type="fig">Fig. 1</xref>).</p>
</sec>
<sec>
<title>Lung nodule data</title>
<p>The lung CT datasets selected in the present study were obtained from LIDC-IDRI (<xref rid="b22-ol-0-0-11576" ref-type="bibr">22</xref>) (<uri xlink:href="https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI">https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI</uri>). The LIDC-IDRI dataset contained a total of 1,018 CT images of patients with relevant clinical information. These CT images were marked by four physicians to indicate the location of the lung nodules, the edge contour information, the degree of benign and malignant characteristics and the quantitation of different signs. Pulmonary nodules with different malignant phenotypes exhibited a number of morphological characteristics. In the LIDC-IDRI dataset, the malignant phenotype of the lung nodules was quantified by the physician using specific numbers and the quantification range was set to 1&#x2013;5, according to the definition of the dataset (<xref rid="b14-ol-0-0-11576" ref-type="bibr">14</xref>,<xref rid="b16-ol-0-0-11576" ref-type="bibr">16</xref>). The probability of malignancy was indicated as follows: i) Malignancy 1, high probability of being benign; ii) malignancy 2, moderate probability of being benign; iii) malignancy 3, indeterminate probability of being benign; iv) malignancy 4, moderate probability of being malignant; and v) malignancy 5, high probability of being malignant.</p>
<p>As nodules in the lung parenchyma are generally small in diameter, the remaining parts of the CT image may affect the classification results; therefore, the lung nodule images were extracted according to the required annotations. The annotation file records comprised the edge information used by the doctor to mark the position of the nodule. Based on the edge information, the center position of the nodule was determined and a 64&#x00D7;64 pixel<sup>2</sup> image located at the center of the lung nodule was obtained as experimental data. The computer-aided diagnostic system automatically extracts the characteristics of the pulmonary nodules and assesses the malignant phenotype of the nodules, which improves the prediction efficiency. In the present study, the LIDC-IDRI dataset was used for model training. Initially, the 64&#x00D7;64 pixel<sup>2</sup> regions of interest (ROI) containing the pulmonary nodules were extracted according to the annotation file. The extracted ROI images of the pulmonary nodules were used as the input of the three models, and the corresponding pulmonary malignant phenotype of the nodule was extracted.</p>
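<p>As a minimal illustration, the ROI extraction step described above might be sketched as follows (the function and variable names are hypothetical and not taken from the study's code):</p>

```python
import numpy as np

def extract_roi(ct_slice, contour, size=64):
    """Crop a size x size patch centred on a nodule.

    ct_slice is a 2-D array (one CT slice); contour is an (N, 2)
    array of (row, col) edge points from the annotation file.
    """
    contour = np.asarray(contour)
    # Centre of the nodule, taken as the mean of the annotated edge points.
    cr, cc = contour.mean(axis=0).round().astype(int)
    half = size // 2
    # Clamp so the window stays fully inside the slice.
    r0 = min(max(cr - half, 0), ct_slice.shape[0] - size)
    c0 = min(max(cc - half, 0), ct_slice.shape[1] - size)
    return ct_slice[r0:r0 + size, c0:c0 + size]
```

<p>Each cropped patch is then used as the input to the three feature extractors in place of the full CT slice.</p>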
</sec>
<sec>
<title>Pulmonary nodule feature extraction using unsupervised learning</title>
<p>The autoencoder (<xref rid="b23-ol-0-0-11576" ref-type="bibr">23</xref>) is an unsupervised learning method that automatically maps input data into the hidden layers and reconstructs the output of the hidden layers to the same shape as the raw input data. It locates hidden features in specific inputs and extracts them to represent the original input. The process from the input to the hidden layers is known as encoding, whereas the process of reconstruction from the hidden layers is known as decoding. The difference between the raw input data and the reconstructed data is the reconstruction error. The autoencoder assumes that the distributed representation of the hidden layers can capture the main factors of variation within the data.</p>
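<p>The encode-decode cycle can be sketched as a single forward pass in numpy (a minimal illustration; the weights would normally be learned by back-propagation, and all names are hypothetical):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def dae_forward(x, W_enc, W_dec, noise_std=0.1):
    """One denoising-autoencoder pass: corrupt the input, encode it,
    decode it, and return the hidden code and reconstruction error."""
    x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)  # corruption
    h = np.tanh(W_enc @ x_noisy)                            # encoding
    x_rec = W_dec @ h                                       # decoding
    err = np.mean((x - x_rec) ** 2)  # error against the clean input
    return h, err
```

<p>Training minimizes the reconstruction error, so that the hidden code h becomes a compact feature representation of the nodule image.</p>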
<p>Following lung nodule extraction using the DAE, the Softmax function was used to classify the nodular malignancy, assuming the training sample {(<italic>x<sub>1</sub>,y<sub>1</sub></italic>), (<italic>x<sub>2</sub>,y<sub>2</sub></italic>), &#x2026;, (<italic>x<sub>n</sub>,y<sub>n</sub></italic>)}, where <italic>x<sub>i</sub></italic> is the lung nodule image data, and <italic>y<sub>i</sub></italic> is the corresponding malignancy score. The classification of a lung nodule by the Softmax function requires estimation of the probability corresponding to each malignancy score. The formula used to calculate the probability was:</p>
<disp-formula>
<label>(1)</label>
<alternatives>
<mml:math id="umml1" display="block"><mml:mrow><mml:mi mathvariant="normal">P</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="normal">y</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi mathvariant="normal">j</mml:mi><mml:mo stretchy="false">|</mml:mo><mml:msub><mml:mi mathvariant="normal">x</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo>;</mml:mo><mml:mi mathvariant="normal">j</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi mathvariant="normal">j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mn>5</mml:mn></mml:msubsup><mml:mrow><mml:msup><mml:mi mathvariant="normal">e</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x03C9;</mml:mi><mml:msub><mml:mi mathvariant="normal">x</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:mfrac><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msup><mml:mi mathvariant="normal">e</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x03C9;</mml:mi><mml:msub><mml:mi mathvariant="normal">x</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi mathvariant="normal">j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:math>
<graphic xlink:href="ol-20-01-0401-g00.tif"/>
</alternatives>
</disp-formula>
<p>where &#x03C9; is a parameter used in the model. The malignancy grade corresponding to the largest probability value was selected as the predicted malignancy of the lung nodule image x<sub>i</sub>. The process and feature extraction by the Denoising Autoencoder (<xref rid="SD1-ol-0-0-11576" ref-type="supplementary-material">Fig. S1</xref>) are further described in Data S1.</p>
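<p>As a minimal sketch of Eq. 1, the Softmax classification over the five malignancy grades can be written as follows (illustrative only; W stands for the &#x03C9; parameters):</p>

```python
import numpy as np

def softmax_scores(x, W):
    """P(y = j | x) for the five malignancy grades.

    x is the feature vector extracted from the nodule image and
    W is a (5, d) weight matrix.
    """
    logits = W @ x
    logits = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(logits)
    return e / e.sum()              # probabilities over grades 1-5

def predict_malignancy(x, W):
    # The grade with the largest probability (grades are 1-indexed).
    return int(np.argmax(softmax_scores(x, W))) + 1
```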
</sec>
<sec>
<title>Pulmonary nodule features extraction using supervised learning</title>
<p>CNN is a feedforward deep neural network based on convolution operations. CNNs calculate the matching level between images and labels by extracting feature representations of the images (<xref rid="b24-ol-0-0-11576" ref-type="bibr">24</xref>&#x2013;<xref rid="b27-ol-0-0-11576" ref-type="bibr">27</xref>). The deeper the network layers, the stronger the representational ability of the CNN; however, it has been shown that the network degrades as the CNN increases in depth, and this increase results in a decrease in Acc. The ResNet network is another typical CNN that adds a shortcut connection and an identity mapping to the network using residual learning (<xref rid="b27-ol-0-0-11576" ref-type="bibr">27</xref>). The feature extraction capability of the network is thereby enhanced, and the network performance gradually improves as the network deepens (<xref rid="b25-ol-0-0-11576" ref-type="bibr">25</xref>).</p>
<p>In the present study, the sum of the cross-entropy loss function and the regularization loss function were used as the loss function of the residual network:</p>
<disp-formula>
<label>(2)</label>
<alternatives>
<mml:math id="umml2" display="block"><mml:mrow><mml:msub><mml:mi mathvariant="normal">l</mml:mi><mml:mi mathvariant="normal">R</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi mathvariant="normal">i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi mathvariant="normal">n</mml:mi></mml:msubsup><mml:mrow><mml:msub><mml:mi mathvariant="normal">y</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo>log</mml:mo><mml:msub><mml:mover accent="true"><mml:mi mathvariant="normal">y</mml:mi><mml:mo>&#x02C6;</mml:mo></mml:mover><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msub><mml:mi mathvariant="normal">y</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>log</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msub><mml:mover accent="true"><mml:mi mathvariant="normal">y</mml:mi><mml:mo>&#x02C6;</mml:mo></mml:mover><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi mathvariant="normal">n</mml:mi></mml:mfrac><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="normal">p</mml:mi><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi mathvariant="normal">i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi mathvariant="normal">n</mml:mi></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mi mathvariant="normal">&#x03B8;</mml:mi><mml:mi mathvariant="normal">i</mml:mi></mml:msub><mml:mo>|</mml:mo><mml:mo>+</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi mathvariant="normal">p</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi mathvariant="normal">i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi mathvariant="normal">n</mml:mi></mml:msubsup><mml:mrow><mml:msubsup><mml:mi mathvariant="normal">&#x03B8;</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math>
<graphic xlink:href="ol-20-01-0401-g01.tif"/>
</alternatives>
</disp-formula>
<p>where y<sub>i</sub> is the true label of the pulmonary nodule, <italic>&#x0177;<sub>i</sub></italic> is the prediction label for the pulmonary nodule, p is the regularization factor and <italic>&#x03B8;</italic> represents the network model parameters. The features extracted using ResNet-18 were also classified using the Softmax function. The process of feature extraction by ResNet-18 (<xref rid="SD1-ol-0-0-11576" ref-type="supplementary-material">Fig. S2</xref>) and the hyper-parameters of ResNet-18 (<xref rid="SD1-ol-0-0-11576" ref-type="supplementary-material">Table SI</xref>) are further described in Data S2.</p>
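<p>A numpy sketch of the loss in Eq. 2 (illustrative only; in practice the cross-entropy term would be computed by the deep learning framework):</p>

```python
import numpy as np

def resnet_loss(y, y_hat, theta, p, eps=1e-12):
    """Cross-entropy plus a mixed L1/L2 penalty on the parameters.

    y and y_hat are the true and predicted labels, theta holds the
    network parameters and p is the regularization factor.
    """
    y, y_hat, theta = map(np.asarray, (y, y_hat, theta))
    n = len(y)
    # Binary cross-entropy term; eps guards against log(0).
    ce = -np.sum(y * np.log(y_hat + eps)
                 + (1 - y) * np.log(1 - y_hat + eps))
    # Mixed L1/L2 regularization, blended by p and scaled by 1/n.
    reg = (p * np.abs(theta).sum()
           + (1 - p) * np.square(theta).sum()) / n
    return ce + reg
```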
</sec>
<sec>
<title>Pulmonary nodule classification by handcrafted features</title>
<p>The handcrafted features of the pulmonary nodules were used to classify the lung nodules. Due to the particularity of the medical images, only SF and TF were used to classify the lung nodules (<xref rid="b28-ol-0-0-11576" ref-type="bibr">28</xref>,<xref rid="b29-ol-0-0-11576" ref-type="bibr">29</xref>). The geometric parameter method was used to determine the shape of the lung nodules, and the Gray Level Co-occurrence Matrix was used to determine the texture of the nodules.</p>
<p>Following extraction of TF and SF, the extracted features were concatenated into a set of feature vectors. A multi-class machine learning method was selected to classify the features of the lung nodules. In the present study, the K-Nearest Neighbor (KNN) method was selected to classify the extracted handcrafted features. KNN is a method for classifying targets based on training examples in the feature space, and consists of two components, learning and classification. In the present study, five grades were used for the classification of the malignant phenotype, and the corresponding five categories were employed as label vectors. All candidate feature vectors were classified by KNN and divided into five categories, representing the five malignancy grades. The process and some typical handcrafted features (<xref rid="SD1-ol-0-0-11576" ref-type="supplementary-material">Fig. S3</xref>) are further described in Data S3.</p>
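<p>The texture-plus-shape pipeline can be illustrated with a small numpy sketch: a grey-level co-occurrence matrix yields two texture features, and a plain KNN vote assigns the grade (hypothetical names; the study's actual feature set is considerably larger):</p>

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast and energy from a grey-level co-occurrence matrix
    built over horizontal neighbours at distance 1."""
    # Quantize the image into a small number of grey levels.
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()  # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training feature vectors."""
    d = np.linalg.norm(np.asarray(train_X) - x, axis=1)
    votes = np.asarray(train_y)[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```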
</sec>
<sec>
<title>Weighted voting method based on classification error rate</title>
<p>The three feature methods of unsupervised learning, supervised learning and handcrafted feature combination exhibited different classification abilities and different classification performances for the classification task. If a single classifier is used alone, its generalization ability may not be strong. The three classifiers were therefore combined according to certain rules, and the combined model could make full use of the features extracted by the three methods. This approach may improve the Acc and generalization ability of the model and could decrease the risk of the model converging to local minimum points in the learning task during the training process. Fusion of multiple classifiers can take cascade or parallel forms; in the parallel mode, the base classifiers act in parallel. Therefore, in the present study, the three classifiers were combined in parallel.</p>
<p>Using parallel fusion, weighted voting was based on the error rate (<xref rid="b30-ol-0-0-11576" ref-type="bibr">30</xref>&#x2013;<xref rid="b32-ol-0-0-11576" ref-type="bibr">32</xref>). The ensemble classification model can utilize the features of each classifier and further ensure flexibility between the different feature coefficients of each classifier (<xref rid="tI-ol-0-0-11576" ref-type="table">Table I</xref>).</p>
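<p>A sketch of the error-rate-based weighted vote (the exact weighting formula is not given in the text, so AdaBoost-style log-odds weights are assumed here for illustration):</p>

```python
import numpy as np

def ensemble_vote(predictions, error_rates):
    """Combine the grade predictions of the three classifiers; a
    classifier with a lower error rate receives a larger weight."""
    err = np.asarray(error_rates, dtype=float)
    w = np.log((1 - err) / err)  # assumed log-odds weighting
    scores = {}
    for pred, wi in zip(predictions, w):
        scores[pred] = scores.get(pred, 0.0) + wi
    # Grade with the largest total weighted vote.
    return max(scores, key=scores.get)
```

<p>With such weights, two moderately accurate classifiers that agree on a grade can outvote a single stronger classifier, which is the intended behaviour of the parallel fusion.</p>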
</sec>
<sec>
<title>Evaluation criteria</title>
<p>In the present study, models were assessed based on accuracy (Acc), precision (Pre) and sensitivity (Sen). Acc is the proportion of correctly classified samples in the total sample, indicating the classification ability of the models. Pre is the positive predictive value, representing the proportion of true positives among the samples predicted as positive. Sen is the true positive rate, which is the proportion of positive samples that are correctly predicted as positive. Larger values indicate better classification performance. The calculation formulas used were as follows:</p>
<disp-formula>
<label>(3)</label>
<alternatives>
<mml:math id="umml3" display="block"><mml:mrow><mml:mi>A</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mn>5</mml:mn></mml:msubsup><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>/</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:msub><mml:mi>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math>
<graphic xlink:href="ol-20-01-0401-g02.tif"/>
</alternatives>
</disp-formula>
<disp-formula>
<label>(4)</label>
<alternatives>
<mml:math id="umml4" display="block"><mml:mrow><mml:mi mathvariant="italic">Pre</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mn>5</mml:mn></mml:msubsup><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>/</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math>
<graphic xlink:href="ol-20-01-0401-g03.tif"/>
</alternatives>
</disp-formula>
<disp-formula>
<label>(5)</label>
<alternatives>
<mml:math id="umml5" display="block"><mml:mrow><mml:mi mathvariant="italic">Sen</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mn>5</mml:mn></mml:msubsup><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>/</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:msub><mml:mi>N</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math>
<graphic xlink:href="ol-20-01-0401-g04.tif"/>
</alternatives>
</disp-formula>
<p>The pulmonary nodule classification was defined by the following parameters: TP<sub>i</sub> (true positive) indicated the probability that the malignancy i was classified as i; FN<sub>i</sub> (false negative) indicated the probability that the malignancy i was not classified as i; FP<sub>i</sub> (false positive) indicated the probability that a malignancy that was not i was classified as i; and TN<sub>i</sub> (true negative) indicated the probability that a malignancy that was not i was not classified as i (i=1, 2, 3, 4 and 5). The detailed data description and some nodule samples (<xref rid="SD1-ol-0-0-11576" ref-type="supplementary-material">Fig. S4</xref>) are shown in Data S4. In order to determine sensitivity and accuracy trends, receiver operating characteristic (ROC) curves and area under the curve (AUC) values were generated (<xref rid="f2-ol-0-0-11576" ref-type="fig">Fig. 2</xref>). The ROC curves comprehensively demonstrate the association between sensitivity and specificity, while the AUC value is the area under the ROC curve. The larger the AUC value, the better the classifier performance (<xref rid="b32-ol-0-0-11576" ref-type="bibr">32</xref>).</p>
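<p>A numpy sketch of Eqs. 3&#x2013;5 (class frequencies are assumed as the weights w<sub>i</sub>, since the text does not state them explicitly):</p>

```python
import numpy as np

def weighted_metrics(y_true, y_pred, grades=(1, 2, 3, 4, 5)):
    """Weighted Acc, Pre and Sen over the five malignancy grades."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    acc = pre = sen = 0.0
    for g in grades:
        w = (y_true == g).mean()  # assumed class-frequency weight
        tp = np.sum(np.logical_and(y_true == g, y_pred == g))
        tn = np.sum(np.logical_and(y_true != g, y_pred != g))
        fp = np.sum(np.logical_and(y_true != g, y_pred == g))
        fn = np.sum(np.logical_and(y_true == g, y_pred != g))
        acc += w * (tp + tn) / n
        pre += w * tp / max(tp + fp, 1)  # guard empty denominators
        sen += w * tp / max(tp + fn, 1)
    return acc, pre, sen
```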
</sec>
</sec>
</sec>
<sec sec-type="results">
<title>Results</title>
<sec>
<title/>
<sec>
<title>Classification results of different classifiers</title>
<p>The proposed method exhibited an average Acc of 93.10&#x0025;, a Pre of 83.85&#x0025; and a Sen of 81.75&#x0025; for identification of the malignant phenotype of the lung nodules in the test set (<xref rid="tII-ol-0-0-11576" ref-type="table">Table II</xref>).</p>
<p>During the training phase, 10-fold cross-validation was used to obtain the Acc of the three classifiers. ResNet-18 exhibited high Acc, whereas DAE exhibited notably stable Acc (<xref rid="f3-ol-0-0-11576" ref-type="fig">Fig. 3</xref>). Although the Acc of the handcrafted features was relatively low, they could describe the specific morphological and texture features of the pulmonary nodules. In order to validate these findings, an ablation experiment was performed by removing the single methods (<xref rid="tIII-ol-0-0-11576" ref-type="table">Table III</xref>).</p>
</sec>
<sec>
<title>Comparative experiment</title>
<p>The present study further compared the performance of the proposed method based on the weighted voting classification method and similar classification methods (<xref rid="b13-ol-0-0-11576" ref-type="bibr">13</xref>,<xref rid="b33-ol-0-0-11576" ref-type="bibr">33</xref>&#x2013;<xref rid="b36-ol-0-0-11576" ref-type="bibr">36</xref>) used under the same conditions (<xref rid="tIV-ol-0-0-11576" ref-type="table">Table IV</xref>).</p>
<p>The results indicated that the Acc, Sen and AUC of the classification of the malignant phenotype of the pulmonary nodules were optimal with the method used in the present study. The proposed method exhibited a higher classification performance for the pulmonary nodules, which could be used for their accurate assessment, thereby supplying an auxiliary suggestion for the judgment of the medical practitioner. The Pre of the method used in the present study was lower compared with the Multi-Crop CNN (MC-CNN) method proposed by Shen <italic>et al</italic> (<xref rid="b34-ol-0-0-11576" ref-type="bibr">34</xref>), as MC-CNN captures more prominent features of the nodule via multiple cropping strategies. However, MC-CNN reduces Sen to ensure Pre and does not improve the classification Acc of pulmonary nodules. Moreover, complex convolutional networks may result in longer computation times.</p>
</sec>
<sec>
<title>Different CNN models</title>
<p>In the present study, a number of common supervised CNN models were selected for comparison, namely, GoogleNet, VGGNet and SENet (<xref rid="b24-ol-0-0-11576" ref-type="bibr">24</xref>&#x2013;<xref rid="b26-ol-0-0-11576" ref-type="bibr">26</xref>). The CNN models were compared using different pre-training processes with the ResNet-18 under the same conditions (<xref rid="tV-ol-0-0-11576" ref-type="table">Table V</xref>).</p>
</sec>
</sec>
</sec>
<sec sec-type="discussion">
<title>Discussion</title>
<p>Magnetic resonance imaging (MRI) uses a magnetic field to obtain electromagnetic signals from the body and reconstruct these signals into images. As lung tissue is rich in gas, the effectiveness of lung MRI is poor. Positron emission tomography-CT (PET-CT) uses the Compton effect to reconstruct images; however, its use of radiation increases the risk of lung cancer. CT uses precise, collimated X-ray beams to scan the body and conduct tomography, and is suitable for screening human respiratory diseases due to its high density resolution (<xref rid="b37-ol-0-0-11576" ref-type="bibr">37</xref>). The radiation dose of low-dose CT (LDCT) is only 26&#x0025; of that of conventional CT, and LDCT can therefore decrease the incidence of side effects compared with MRI and PET-CT (<xref rid="b38-ol-0-0-11576" ref-type="bibr">38</xref>). It is therefore suitable for screening patients with lung cancer, especially non-small cell lung cancer.</p>
<p>The appearance of pulmonary nodules differs according to each of their characteristics. Directly observing the location of pulmonary nodules, or analyzing their malignant phenotype from the image, is considered a difficult task. Key feature vectors were extracted to represent the nodules for classification; however, the characteristics of the nodules varied, making this task difficult. With regard to nodule images, supervised learning approaches can automatically extract different features of nodules according to the nodule labels (benign and malignant). The advantage is that different categories of nodules can be identified according to the given labels, without manual intervention. In addition, DAE can extract effective features of nodules through the back-propagation and gradient descent algorithms (<xref rid="b23-ol-0-0-11576" ref-type="bibr">23</xref>). The autoencoder can identify specific latent vectors in the sample sets and extract them for classification. DAEs are able to preserve the local and global structure of highly nonlinear networks, and can therefore be readily applied to nodule classification tasks. The handcrafted features, TF and SF, reflect information regarding the surface and appearance of the nodules, respectively; however, neither can completely reflect the essential attributes of the nodules when used alone for classification, thus these features need to be used in combination (<xref rid="b39-ol-0-0-11576" ref-type="bibr">39</xref>).</p>
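As one concrete example of a handcrafted TF, a grey-level co-occurrence matrix (GLCM) and its contrast statistic, in the spirit of Haralick <italic>et al</italic> (<xref rid="b28-ol-0-0-11576" ref-type="bibr">28</xref>), can be sketched as follows (an illustrative sketch for a single pixel offset; the study's actual texture feature set is broader):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix: count how often grey level i
    co-occurs with grey level j at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """Haralick contrast: mean squared grey-level difference of co-occurring
    pixel pairs (0 for a flat region, large for strong local variation)."""
    total = sum(sum(row) for row in m) or 1
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total
```

A flat nodule patch yields zero contrast, whereas a strongly textured one yields a high value; such statistics are what the KNN classifier consumes alongside the SF.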
<p>The present study proposed a method for classifying the malignant phenotype of pulmonary nodules on chest CT images. Initially, ResNet-18 and DAE were used to classify lung nodules, and KNN was used to classify the SF and TF of these nodules. To obtain improved classification results, the three single classifiers were combined into an ensemble model using weighted voting, with the weights determined according to the classification error rate of each classifier. A total of 4,578 lung nodule images were extracted from the LIDC-IDRI dataset to verify the validity of the method. Following data balancing and data augmentation, a total of 20,000 images were obtained. In the final model, Acc, Pre and Sen reached 93.10, 83.85 and 81.75&#x0025;, respectively. The overall performance was higher than that of state-of-the-art methods (<xref rid="b13-ol-0-0-11576" ref-type="bibr">13</xref>,<xref rid="b29-ol-0-0-11576" ref-type="bibr">29</xref>&#x2013;<xref rid="b32-ol-0-0-11576" ref-type="bibr">32</xref>). In the present study, the proposed method was also compared with different CNN models, and the classification performance of ResNet-18 was demonstrated to be higher than that of the other CNN models. Therefore, the proposed classification method for the malignant phenotype of pulmonary nodules decreases the time and cost of CT imaging, increases the Acc of assisted lung cancer diagnosis, offers auxiliary support during diagnosis and improves the efficiency of lung cancer screening in hospitals.</p>
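The error-rate-weighted voting step can be sketched as follows (the log-odds weighting shown is one common choice for error-rate-based weights, as in AdaBoost-style ensembles; the paper's exact weighting formula is an assumption here):

```python
import math

def vote_weights(error_rates):
    """One common error-rate-based weighting: w_i = ln((1 - e_i) / e_i),
    so more accurate classifiers receive larger voting weights."""
    return [math.log((1.0 - e) / e) for e in error_rates]

def weighted_vote(predictions, weights):
    """Ensemble prediction for one sample: sum the weights behind each
    candidate label and return the label with the highest total."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

With this scheme, a single low-error classifier (e.g. ResNet-18) can outvote two weaker ones when they disagree, which is the intended behavior of the ensemble.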
<p>Automatic lung cancer assessment is important but difficult, as it predominantly comprises detection, segmentation and evaluation (<xref rid="b40-ol-0-0-11576" ref-type="bibr">40</xref>). The present study successfully performed multi-class nodule classification, which is the first step of lung cancer assessment. Prospective studies will focus on lung tumor prediction and segmentation.</p>
</sec>
<sec sec-type="supplementary-material">
<title>Supplementary Material</title>
<supplementary-material id="SD1-ol-0-0-11576" content-type="local-data">
<caption>
<title>Supporting Data</title>
</caption>
<media mimetype="application" mime-subtype="pdf" xlink:href="Supplementary_Data.pdf"/>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>Not applicable.</p>
</ack>
<sec>
<title>Funding</title>
<p>The present study was supported in part by the National Natural Science Foundation of China (grant no. 61872261).</p>
</sec>
<sec>
<title>Availability of data and materials</title>
<p>The datasets generated and/or analyzed during the current study are available in the LIDC-IDRI repository (<uri xlink:href="https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI">https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI</uri>). The implementation code used during the current study is available at <uri xlink:href="https://github.com/XmaNm/nodules-classification">https://github.com/XmaNm/nodules-classification</uri>.</p>
</sec>
<sec>
<title>Authors&#x0027; contributions</title>
<p>NX and YQ conceived and designed the study. MBZ improved the algorithm for use in the present study. JHL collected and curated data. NX and JHL designed the experiment and analyzed the results. SHW coordinated the present study and collected background information. All authors read and approved the final manuscript.</p>
</sec>
<sec>
<title>Ethics approval and consent to participate</title>
<p>Not applicable.</p>
</sec>
<sec>
<title>Patient consent for publication</title>
<p>Not applicable.</p>
</sec>
<sec>
<title>Competing interests</title>
<p>The authors declare that they have no competing interests.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="b1-ol-0-0-11576"><label>1</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bray</surname><given-names>F</given-names></name><name><surname>Ferlay</surname><given-names>J</given-names></name><name><surname>Soerjomataram</surname><given-names>I</given-names></name><name><surname>Siegel</surname><given-names>RL</given-names></name><name><surname>Torre</surname><given-names>LA</given-names></name><name><surname>Jemal</surname><given-names>A</given-names></name></person-group><article-title>Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries</article-title><source>CA Cancer J Clin</source><volume>68</volume><fpage>394</fpage><lpage>424</lpage><year>2018</year><pub-id pub-id-type="doi">10.3322/caac.21492</pub-id><pub-id pub-id-type="pmid">30207593</pub-id></element-citation></ref>
<ref id="b2-ol-0-0-11576"><label>2</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rzechonek</surname><given-names>A</given-names></name><name><surname>Grzegrzolka</surname><given-names>J</given-names></name><name><surname>Blasiak</surname><given-names>P</given-names></name><name><surname>Ornat</surname><given-names>M</given-names></name><name><surname>Piotrowska</surname><given-names>A</given-names></name><name><surname>Nowak</surname><given-names>A</given-names></name><name><surname>Dziegiel</surname><given-names>P</given-names></name></person-group><article-title>Correlation of expression of tenascin C and blood vessel density in non-small cell lung cancers</article-title><source>Anticancer Res</source><volume>38</volume><fpage>1987</fpage><lpage>1991</lpage><year>2018</year><pub-id pub-id-type="pmid">29599314</pub-id></element-citation></ref>
<ref id="b3-ol-0-0-11576"><label>3</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>S</given-names></name><name><surname>Harmon</surname><given-names>S</given-names></name><name><surname>Perk</surname><given-names>T</given-names></name><name><surname>Li</surname><given-names>X</given-names></name><name><surname>Chen</surname><given-names>M</given-names></name><name><surname>Li</surname><given-names>Y</given-names></name><name><surname>Jeraj</surname><given-names>R</given-names></name></person-group><article-title>Diagnostic classification of solitary pulmonary nodules using dual time <sup>18</sup>F-FDG PET/CT image texture features in granuloma-endemic regions</article-title><source>Sci Rep</source><volume>7</volume><fpage>9370</fpage><year>2017</year><pub-id pub-id-type="doi">10.1038/s41598-017-08764-7</pub-id><pub-id pub-id-type="pmid">28839156</pub-id></element-citation></ref>
<ref id="b4-ol-0-0-11576"><label>4</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dai</surname><given-names>M</given-names></name><name><surname>Qi</surname><given-names>J</given-names></name><name><surname>Zhou</surname><given-names>Z</given-names></name><name><surname>Gao</surname><given-names>F</given-names></name></person-group><article-title>The classification of pulmonary nodules based on texture features over local jet transformation space</article-title><source>Chin J Biomed Eng</source><volume>36</volume><fpage>12</fpage><lpage>19</lpage><year>2017</year></element-citation></ref>
<ref id="b5-ol-0-0-11576"><label>5</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Felix</surname><given-names>A</given-names></name><name><surname>Oliveira</surname><given-names>M</given-names></name><name><surname>Machado</surname><given-names>A</given-names></name><name><surname>Raniery</surname><given-names>J</given-names></name></person-group><article-title>Using 3D texture and margin sharpness features on classification of small pulmonary nodules</article-title><publisher-name>In: Proceedings of 29th Conference on Graphics</publisher-name><publisher-loc>Patterns and Images (SIBGRAPI), Sao Paulo</publisher-loc><fpage>394</fpage><lpage>400</lpage><year>2016</year></element-citation></ref>
<ref id="b6-ol-0-0-11576"><label>6</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Song</surname><given-names>J</given-names></name><name><surname>Hui</surname><given-names>L</given-names></name><name><surname>Geng</surname><given-names>F</given-names></name><name><surname>Zhang</surname><given-names>C</given-names></name></person-group><article-title>Weakly-supervised classification of pulmonary nodules based on shape characters</article-title><publisher-name>In: Proceedings of 2016 IEEE 14th Intl Conf on Dependable, Autonomic and Secure Computing, 14th Intl Conf on Pervasive Intelligence and Computing, 2nd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech)</publisher-name><publisher-loc>Auckland</publisher-loc><fpage>228</fpage><lpage>232</lpage><year>2016</year></element-citation></ref>
<ref id="b7-ol-0-0-11576"><label>7</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Niehaus</surname><given-names>R</given-names></name><name><surname>Raicu</surname><given-names>DS</given-names></name><name><surname>Furst</surname><given-names>J</given-names></name><name><surname>Armato</surname><given-names>S</given-names><suffix>III</suffix></name></person-group><article-title>Toward understanding the size dependence of shape features for predicting spiculation in lung nodules for computer-aided diagnosis</article-title><source>J Digit Imaging</source><volume>28</volume><fpage>704</fpage><lpage>717</lpage><year>2015</year><pub-id pub-id-type="doi">10.1007/s10278-015-9774-8</pub-id><pub-id pub-id-type="pmid">25708891</pub-id></element-citation></ref>
<ref id="b8-ol-0-0-11576"><label>8</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dhara</surname><given-names>AK</given-names></name><name><surname>Mukhopadhyay</surname><given-names>S</given-names></name><name><surname>Dutta</surname><given-names>A</given-names></name><name><surname>Garg</surname><given-names>M</given-names></name><name><surname>Khandelwal</surname><given-names>N</given-names></name></person-group><article-title>A Combination of shape and texture features for classification of pulmonary nodules in lung CT images</article-title><source>J Digit Imaging</source><volume>29</volume><fpage>466</fpage><lpage>475</lpage><year>2016</year><pub-id pub-id-type="doi">10.1007/s10278-015-9857-6</pub-id><pub-id pub-id-type="pmid">26738871</pub-id></element-citation></ref>
<ref id="b9-ol-0-0-11576"><label>9</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>W</given-names></name><name><surname>Cao</surname><given-names>P</given-names></name><name><surname>Zhao</surname><given-names>D</given-names></name><name><surname>Wang</surname><given-names>J</given-names></name></person-group><article-title>Pulmonary nodule classification with deep convolutional neural networks on computed tomography images</article-title><source>Comput Math Methods Med</source><volume>2016</volume><fpage>6215085</fpage><year>2016</year><pub-id pub-id-type="doi">10.1155/2016/6215085</pub-id><pub-id pub-id-type="pmid">28070212</pub-id></element-citation></ref>
<ref id="b10-ol-0-0-11576"><label>10</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tartar</surname><given-names>A</given-names></name><name><surname>Akan</surname><given-names>A</given-names></name><name><surname>Kilic</surname><given-names>N</given-names></name></person-group><article-title>A novel approach to malignant-benign classification of pulmonary nodules by using ensemble learning classifiers</article-title><source>Conf Proc IEEE Eng Med Biol Soc</source><volume>2014</volume><fpage>4651</fpage><lpage>4654</lpage><year>2014</year><pub-id pub-id-type="pmid">25571029</pub-id></element-citation></ref>
<ref id="b11-ol-0-0-11576"><label>11</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nibali</surname><given-names>A</given-names></name><name><surname>Zhen</surname><given-names>H</given-names></name><name><surname>Wollersheim</surname><given-names>D</given-names></name></person-group><article-title>Pulmonary nodule classification with deep residual networks</article-title><source>Int J Comput Assist Radiol Surg</source><volume>12</volume><fpage>1799</fpage><lpage>1808</lpage><year>2017</year><pub-id pub-id-type="doi">10.1007/s11548-017-1605-6</pub-id><pub-id pub-id-type="pmid">28501942</pub-id></element-citation></ref>
<ref id="b12-ol-0-0-11576"><label>12</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname><given-names>W</given-names></name><name><surname>Zhou</surname><given-names>M</given-names></name><name><surname>Yang</surname><given-names>F</given-names></name><name><surname>Yang</surname><given-names>C</given-names></name><name><surname>Tian</surname><given-names>J</given-names></name></person-group><article-title>Multi-scale convolutional neural networks for lung nodule classification</article-title><source>Inf Process Med Imaging</source><volume>24</volume><fpage>588</fpage><lpage>599</lpage><year>2015</year><pub-id pub-id-type="pmid">26221705</pub-id></element-citation></ref>
<ref id="b13-ol-0-0-11576"><label>13</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname><given-names>D</given-names></name><name><surname>Wong</surname><given-names>A</given-names></name><name><surname>Clausi</surname><given-names>DA</given-names></name></person-group><article-title>Lung nodule classification using deep features in CT images</article-title><publisher-name>In: Proceedings of the 2015 12th Conference on Computer and Robot Vision</publisher-name><publisher-loc>Halifax, Canada. IEEE</publisher-loc><fpage>133</fpage><lpage>138</lpage><year>2015</year></element-citation></ref>
<ref id="b14-ol-0-0-11576"><label>14</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kaya</surname><given-names>A</given-names></name><name><surname>Can</surname><given-names>AB</given-names></name></person-group><article-title>A weighted rule based method for predicting malignancy of pulmonary nodules by nodule characteristics</article-title><source>J Biomed Inform</source><volume>56</volume><fpage>69</fpage><lpage>79</lpage><year>2015</year><pub-id pub-id-type="doi">10.1016/j.jbi.2015.05.011</pub-id><pub-id pub-id-type="pmid">26008877</pub-id></element-citation></ref>
<ref id="b15-ol-0-0-11576"><label>15</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>G</given-names></name><name><surname>Kim</surname><given-names>H</given-names></name><name><surname>Tan</surname><given-names>JK</given-names></name><name><surname>Ishikawa</surname><given-names>S</given-names></name><name><surname>Hirano</surname><given-names>Y</given-names></name><name><surname>Kido</surname><given-names>S</given-names></name><name><surname>Tachibana</surname><given-names>R</given-names></name></person-group><article-title>Semantic characteristics prediction of pulmonary nodule using artificial neural networks</article-title><source>Conf Proc IEEE Eng Med Biol Soc</source><volume>2013</volume><fpage>5465</fpage><lpage>5468</lpage><year>2013</year><pub-id pub-id-type="pmid">24110973</pub-id></element-citation></ref>
<ref id="b16-ol-0-0-11576"><label>16</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>S</given-names></name><name><surname>Ni</surname><given-names>D</given-names></name><name><surname>Qin</surname><given-names>J</given-names></name><name><surname>Lei</surname><given-names>B</given-names></name><name><surname>Wang</surname><given-names>T</given-names></name><name><surname>Cheng</surname><given-names>JZ</given-names></name></person-group><article-title>Bridging computational features toward multiple semantic features with multi-task regression: A study of ct pulmonary nodules. International Conference on Medical Image Computing and Computer-Assisted Intervention</article-title><publisher-name>Springer</publisher-name><publisher-loc>Cham</publisher-loc><fpage>53</fpage><lpage>60</lpage><year>2016</year></element-citation></ref>
<ref id="b17-ol-0-0-11576"><label>17</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shewaye</surname><given-names>TN</given-names></name><name><surname>Mekonnen</surname><given-names>AA</given-names></name></person-group><article-title>Benign-malignant lung nodule classification with geometric and appearance histogram features</article-title><publisher-name>arXiv: Computer Vision and Pattern Recognition</publisher-name><publisher-loc>arXiv:1605.08350v1 [cs.CV]</publisher-loc><year>2016</year></element-citation></ref>
<ref id="b18-ol-0-0-11576"><label>18</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Orozco</surname><given-names>HM</given-names></name><name><surname>Villegas</surname><given-names>OOV</given-names></name><name><surname>de Jes&#x00FA;s Ochoa Dom&#x00ED;nguez</surname><given-names>O</given-names></name><name><surname>S&#x00E1;nchez</surname><given-names>VGC</given-names></name></person-group><article-title>Lung nodule classification in CT thorax images using support vector machines</article-title><source>Mexican International Conference on Artificial Intelligence</source><publisher-name>IEEE</publisher-name><fpage>277</fpage><lpage>283</lpage><year>2014</year></element-citation></ref>
<ref id="b19-ol-0-0-11576"><label>19</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname><given-names>A</given-names></name><name><surname>Qi</surname><given-names>L</given-names></name><name><surname>Li</surname><given-names>J</given-names></name><name><surname>Dong</surname><given-names>J</given-names></name><name><surname>Yu</surname><given-names>H</given-names></name></person-group><article-title>LSTM for diagnosis of neurodegenerative diseases using gait data. In: Proceedings of the 9th International Conference on Graphics and Image Processing</article-title><publisher-name>SPIE Press</publisher-name><year>2018</year></element-citation></ref>
<ref id="b20-ol-0-0-11576"><label>20</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jacobs</surname><given-names>C</given-names></name><name><surname>van Rikxoort</surname><given-names>EM</given-names></name><name><surname>Twellmann</surname><given-names>T</given-names></name><name><surname>Scholten</surname><given-names>ET</given-names></name><name><surname>de Jong</surname><given-names>PA</given-names></name><name><surname>Kuhnigk</surname><given-names>JM</given-names></name><name><surname>Oudkerk</surname><given-names>M</given-names></name><name><surname>de Koning</surname><given-names>HJ</given-names></name><name><surname>Prokop</surname><given-names>M</given-names></name><name><surname>Schaefer-Prokop</surname><given-names>C</given-names></name><name><surname>van Ginneken</surname><given-names>B</given-names></name></person-group><article-title>Automatic detection of subsolid pulmonary nodules in thoracic computed tomography images</article-title><source>Med Image Anal</source><volume>18</volume><fpage>374</fpage><lpage>384</lpage><year>2014</year><pub-id pub-id-type="doi">10.1016/j.media.2013.12.001</pub-id><pub-id pub-id-type="pmid">24434166</pub-id></element-citation></ref>
<ref id="b21-ol-0-0-11576"><label>21</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname><given-names>J</given-names></name><name><surname>Wang</surname><given-names>Q</given-names></name><name><surname>Ren</surname><given-names>Y</given-names></name><name><surname>Hu</surname><given-names>H</given-names></name><name><surname>Zhao</surname><given-names>J</given-names></name></person-group><article-title>Automatic lung nodule classification with radiomics approach</article-title><source>Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations</source><volume>9789</volume><publisher-name>SPIE Proceedings</publisher-name><year>2016</year></element-citation></ref>
<ref id="b22-ol-0-0-11576"><label>22</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Armato</surname><given-names>SG</given-names><suffix>III</suffix></name><name><surname>McLennan</surname><given-names>G</given-names></name><name><surname>Bidaut</surname><given-names>L</given-names></name><name><surname>McNitt-Gray</surname><given-names>MF</given-names></name><name><surname>Meyer</surname><given-names>CR</given-names></name><name><surname>Reeves</surname><given-names>AP</given-names></name><name><surname>Zhao</surname><given-names>B</given-names></name><name><surname>Aberle</surname><given-names>DR</given-names></name><name><surname>Henschke</surname><given-names>CI</given-names></name><name><surname>Hoffman</surname><given-names>EA</given-names></name><etal/></person-group><article-title>The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans</article-title><source>Med Phys</source><volume>38</volume><fpage>915</fpage><lpage>931</lpage><year>2011</year><pub-id pub-id-type="doi">10.1118/1.3528204</pub-id><pub-id pub-id-type="pmid">21452728</pub-id></element-citation></ref>
<ref id="b23-ol-0-0-11576"><label>23</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>M</given-names></name><name><surname>Weinberger</surname><given-names>KQ</given-names></name><name><surname>Sha</surname><given-names>F</given-names></name><name><surname>Bengio</surname><given-names>YO</given-names></name></person-group><article-title>Marginalized denoising auto-encoders for nonlinear representations. Proceedings of the 31st International Conference on Machine Learning</article-title><source>PMLR</source><volume>32</volume><fpage>1476</fpage><lpage>1484</lpage><year>2014</year></element-citation></ref>
<ref id="b24-ol-0-0-11576"><label>24</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Szegedy</surname><given-names>C</given-names></name><name><surname>Liu</surname><given-names>W</given-names></name><name><surname>Jia</surname><given-names>Y</given-names></name><name><surname>Sermanet</surname><given-names>P</given-names></name><name><surname>Reed</surname><given-names>S</given-names></name><name><surname>Anguelov</surname><given-names>D</given-names></name><name><surname>Erhan</surname><given-names>D</given-names></name><name><surname>Vanhoucke</surname><given-names>V</given-names></name><name><surname>Rabinovich</surname><given-names>A</given-names></name></person-group><article-title>Going deeper with convolutions</article-title><publisher-name>arXiv: Computer Vision and Pattern Recognition</publisher-name><publisher-loc>arXiv:1409.4842v1 [cs.CV]</publisher-loc><year>2015</year><pub-id pub-id-type="doi">10.1109/CVPR.2015.7298594</pub-id></element-citation></ref>
<ref id="b25-ol-0-0-11576"><label>25</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Simonyan</surname><given-names>K</given-names></name><name><surname>Zisserman</surname><given-names>A</given-names></name></person-group><article-title>Very deep convolutional networks for large-scale image recognition</article-title><source>arXiv: Computer Vision and Pattern Recognition arXiv:1409.1556v6 [cs.CV]</source><year>2014</year></element-citation></ref>
<ref id="b26-ol-0-0-11576"><label>26</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname><given-names>J</given-names></name><name><surname>Shen</surname><given-names>L</given-names></name><name><surname>Albanie</surname><given-names>S</given-names></name><name><surname>Sun</surname><given-names>G</given-names></name><name><surname>Wu</surname><given-names>E</given-names></name></person-group><article-title>Squeeze-and-excitation networks</article-title><publisher-name>arXiv: Computer Vision and Pattern Recognition</publisher-name><publisher-loc>arXiv:1709.01507v4 [cs.CV]</publisher-loc><year>2017</year></element-citation></ref>
<ref id="b27-ol-0-0-11576"><label>27</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>He</surname><given-names>K</given-names></name><name><surname>Zhang</surname><given-names>X</given-names></name><name><surname>Ren</surname><given-names>S</given-names></name><name><surname>Sun</surname><given-names>J</given-names></name></person-group><article-title>Deep residual learning for image recognition</article-title><source>IEEE Conference on Computer Vision and Pattern Recognition</source><fpage>770</fpage><lpage>778</lpage><year>2016</year></element-citation></ref>
<ref id="b28-ol-0-0-11576"><label>28</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Haralick</surname><given-names>RM</given-names></name><name><surname>Shanmugam</surname><given-names>K</given-names></name><name><surname>Dinstein</surname><given-names>IH</given-names></name></person-group><article-title>Textural features for image classification</article-title><publisher-name>IEEE Transactions on Systems</publisher-name><publisher-loc>Man, and Cybernetics. Vol SMC-3. IEEE</publisher-loc><fpage>610</fpage><lpage>621</lpage><year>1973</year></element-citation></ref>
<ref id="b29-ol-0-0-11576"><label>29</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pan</surname><given-names>L</given-names></name><name><surname>Qiang</surname><given-names>Y</given-names></name><name><surname>Yuan</surname><given-names>J</given-names></name><name><surname>Wu</surname><given-names>L</given-names></name></person-group><article-title>Rapid retrieval of lung nodule CT images based on hashing and pruning methods</article-title><source>Biomed Res Int</source><volume>2016</volume><fpage>3162649</fpage><year>2016</year><pub-id pub-id-type="doi">10.1155/2016/3162649</pub-id><pub-id pub-id-type="pmid">27995140</pub-id></element-citation></ref>
<ref id="b30-ol-0-0-11576"><label>30</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>X</given-names></name><name><surname>Yang</surname><given-names>Y</given-names></name><name><surname>Xiong</surname><given-names>H</given-names></name><name><surname>Song</surname><given-names>S</given-names></name><name><surname>Jia</surname><given-names>H</given-names></name></person-group><article-title>Pulmonary nodules detection algorithm based on robust cascade classifier for CT images</article-title><source>Control and Decision Conference</source><publisher-name>IEEE</publisher-name><fpage>231</fpage><lpage>235</lpage><year>2017</year></element-citation></ref>
<ref id="b31-ol-0-0-11576"><label>31</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zinovev</surname><given-names>D</given-names></name><name><surname>Furst</surname><given-names>J</given-names></name><name><surname>Raicu</surname><given-names>D</given-names></name></person-group><article-title>Building an ensemble of probabilistic classifiers for lung nodule interpretation</article-title><source>Proceedings of the 10th International Conference on Machine Learning and Applications and Workshops</source><publisher-name>IEEE Computer Society</publisher-name><fpage>155</fpage><lpage>161</lpage><year>2011</year></element-citation></ref>
<ref id="b32-ol-0-0-11576"><label>32</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zou</surname><given-names>KH</given-names></name><name><surname>O&#x0027;Malley</surname><given-names>AJ</given-names></name><name><surname>Mauri</surname><given-names>L</given-names></name></person-group><article-title>Receiver-operating characteristic analysis for evaluating diagnostic tests and predictive models</article-title><source>Circulation</source><volume>115</volume><fpage>654</fpage><lpage>657</lpage><year>2007</year><pub-id pub-id-type="doi">10.1161/CIRCULATIONAHA.105.594929</pub-id><pub-id pub-id-type="pmid">17283280</pub-id></element-citation></ref>
<ref id="b33-ol-0-0-11576"><label>33</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zinovev</surname><given-names>D</given-names></name><name><surname>Feigenbaum</surname><given-names>J</given-names></name><name><surname>Furst</surname><given-names>J</given-names></name><name><surname>Raicu</surname><given-names>D</given-names></name></person-group><article-title>Probabilistic lung nodule classification with belief decision trees</article-title><source>Conf Proc IEEE Eng Med Biol Soc</source><volume>2011</volume><fpage>4493</fpage><lpage>4498</lpage><year>2011</year><pub-id pub-id-type="pmid">22255337</pub-id></element-citation></ref>
<ref id="b34-ol-0-0-11576"><label>34</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname><given-names>W</given-names></name><name><surname>Zhou</surname><given-names>M</given-names></name><name><surname>Yang</surname><given-names>F</given-names></name><name><surname>Yu</surname><given-names>D</given-names></name><name><surname>Dong</surname><given-names>D</given-names></name><name><surname>Yang</surname><given-names>C</given-names></name><name><surname>Zang</surname><given-names>Y</given-names></name><name><surname>Tian</surname><given-names>J</given-names></name></person-group><article-title>Multi-crop Convolutional Neural Networks for lung nodule malignancy suspiciousness classification</article-title><source>Pattern Recogn</source><volume>61</volume><fpage>663</fpage><lpage>673</lpage><year>2017</year><pub-id pub-id-type="doi">10.1016/j.patcog.2016.05.029</pub-id></element-citation></ref>
<ref id="b35-ol-0-0-11576"><label>35</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rodrigues</surname><given-names>MB</given-names></name><name><surname>Da N&#x00F3;Brega</surname><given-names>RVM</given-names></name><name><surname>Alves</surname><given-names>SSA</given-names></name><name><surname>Filho</surname><given-names>PPR</given-names></name><name><surname>Duarte</surname><given-names>JBF</given-names></name><name><surname>Sangaiah</surname><given-names>AK</given-names></name><name><surname>De Albuquerque</surname><given-names>VHC</given-names></name></person-group><article-title>Health of things algorithms for malignancy level classification of lung nodules</article-title><source>IEEE Access</source><volume>6</volume><fpage>18592</fpage><lpage>18601</lpage><year>2018</year><pub-id pub-id-type="doi">10.1109/ACCESS.2018.2817614</pub-id></element-citation></ref>
<ref id="b36-ol-0-0-11576"><label>36</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname><given-names>W</given-names></name><name><surname>Huang</surname><given-names>X</given-names></name><name><surname>Tseng</surname><given-names>TL</given-names></name><name><surname>Zhang</surname><given-names>J</given-names></name><name><surname>Qian</surname><given-names>W</given-names></name></person-group><article-title>Computerized lung cancer malignancy level analysis using 3D texture features</article-title><source>Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations</source><volume>9785</volume><publisher-name>SPIE Proceedings</publisher-name><year>2016</year></element-citation></ref>
<ref id="b37-ol-0-0-11576"><label>37</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Seo</surname><given-names>N</given-names></name><name><surname>Seok</surname><given-names>J</given-names></name><name><surname>Lim</surname><given-names>S</given-names></name><name><surname>Cho</surname><given-names>A</given-names></name></person-group><article-title>Radiologic diagnosis (CT, MRI, &#x0026; PET-CT)</article-title><source>Surg Gastric Cancer</source><fpage>67</fpage><lpage>86</lpage><year>2019</year><pub-id pub-id-type="doi">10.1007/978-3-662-45583-8_4</pub-id></element-citation></ref>
<ref id="b38-ol-0-0-11576"><label>38</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Oliva</surname><given-names>MR</given-names></name><name><surname>Saini</surname><given-names>S</given-names></name></person-group><article-title>Liver cancer imaging: Role of CT, MRI, US and PET</article-title><source>Cancer Imaging</source><volume>4</volume><fpage>S42</fpage><lpage>S46</lpage><year>2004</year><pub-id pub-id-type="doi">10.1102/1470-7330.2004.0011</pub-id><pub-id pub-id-type="pmid">18215974</pub-id></element-citation></ref>
<ref id="b39-ol-0-0-11576"><label>39</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Muhammad</surname><given-names>MN</given-names></name><name><surname>Raicu</surname><given-names>DS</given-names></name><name><surname>Furst</surname><given-names>JD</given-names></name><name><surname>Varutbangkul</surname><given-names>E</given-names></name></person-group><article-title>Texture versus shape analysis for lung nodule similarity in computed tomography studies</article-title><source>Medical Imaging 2008: PACS and Imaging Informatics</source><volume>6919</volume><publisher-name>SPIE Proceedings</publisher-name><year>2008</year></element-citation></ref>
<ref id="b40-ol-0-0-11576"><label>40</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wormanns</surname><given-names>D</given-names></name><name><surname>Fiebich</surname><given-names>M</given-names></name><name><surname>Saidi</surname><given-names>M</given-names></name><name><surname>Diederich</surname><given-names>S</given-names></name><name><surname>Heindel</surname><given-names>W</given-names></name></person-group><article-title>Automatic detection of pulmonary nodules at spiral CT: Clinical application of a computer-aided diagnosis system</article-title><source>Eur Radiol</source><volume>12</volume><fpage>1052</fpage><lpage>1057</lpage><year>2002</year><pub-id pub-id-type="doi">10.1007/s003300101126</pub-id><pub-id pub-id-type="pmid">11976846</pub-id></element-citation></ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-ol-0-0-11576" position="float">
<label>Figure 1.</label>
<caption><p>Flow chart of the proposed method used in the present study. LIDC, Lung Image Database Consortium.</p></caption>
<graphic xlink:href="ol-20-01-0401-g05.tif"/>
</fig>
<fig id="f2-ol-0-0-11576" position="float">
<label>Figure 2.</label>
<caption><p>Receiver Operating Characteristic curves of the three basic classifiers. The horizontal axis indicates the false positive rate and the vertical axis indicates the true positive rate. The diagonal line represents the pure chance line, which is considered the reference line. The AUC values of ResNet-18, the Handcraft feature classifier and the Denoising Auto Encoder were 0.85, 0.72 and 0.74, respectively, which were considered significant for nodule classification. AUC, area under the curve.</p></caption>
<graphic xlink:href="ol-20-01-0401-g06.tif"/>
</fig>
<fig id="f3-ol-0-0-11576" position="float">
<label>Figure 3.</label>
<caption><p>Classification performance of the three basic classifiers. The box represents the range from the Q1 value to the Q3 value and the dotted lines represent the lower and upper limits. ResNet had the highest accuracy and the Denoising Auto Encoder had the highest precision among the three classifiers. The classifier using the Handcraft feature was the most stable of the three. Q1, lower quartile; Q3, upper quartile.</p></caption>
<graphic xlink:href="ol-20-01-0401-g07.tif"/>
</fig>
<table-wrap id="tI-ol-0-0-11576" position="float">
<label>Table I.</label>
<caption><p>Weighted voting algorithm based on classification error rate.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Algorithm 1 Weighted voting algorithm based on classification error rate</th>
</tr>
</thead>
<tbody>
<tr>
<td><graphic xlink:href="ol-20-01-0401-g08.tif"/></td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="tII-ol-0-0-11576" position="float">
<label>Table II.</label>
<caption><p>Classification results for pulmonary nodules of different malignancy levels from the Lung Image Database Consortium-Image Database Resource Initiative.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Malignancy level</th>
<th align="center" valign="bottom">Accuracy, &#x0025;</th>
<th align="center" valign="bottom">Precision, &#x0025;</th>
<th align="center" valign="bottom">Sensitivity, &#x0025;</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="center" valign="top">93.70</td>
<td align="center" valign="top">88.92</td>
<td align="center" valign="top">76.25</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="center" valign="top">92.70</td>
<td align="center" valign="top">72.11</td>
<td align="center" valign="top">82.37</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="center" valign="top">93.60</td>
<td align="center" valign="top">84.30</td>
<td align="center" valign="top">84.71</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="center" valign="top">92.22</td>
<td align="center" valign="top">81.42</td>
<td align="center" valign="top">81.82</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="center" valign="top">93.33</td>
<td align="center" valign="top">89.18</td>
<td align="center" valign="top">83.06</td>
</tr>
<tr>
<td align="left" valign="top">Totals</td>
<td align="center" valign="top">93.10</td>
<td align="center" valign="top">83.85</td>
<td align="center" valign="top">81.75</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="tIII-ol-0-0-11576" position="float">
<label>Table III.</label>
<caption><p>Experimental results of the pairwise combination method.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Method</th>
<th align="center" valign="bottom">Accuracy, &#x0025;</th>
<th align="center" valign="bottom">Precision, &#x0025;</th>
<th align="center" valign="bottom">Sensitivity, &#x0025;</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">ResNet &#x002B; Denoising Auto Encoder</td>
<td align="center" valign="top">80.27</td>
<td align="center" valign="top">82.10</td>
<td align="center" valign="top">73.45</td>
</tr>
<tr>
<td align="left" valign="top">ResNet &#x002B; KNN</td>
<td align="center" valign="top">79.87</td>
<td align="center" valign="top">69.73</td>
<td align="center" valign="top">77.95</td>
</tr>
<tr>
<td align="left" valign="top">Denoising Auto Encoder &#x002B; KNN</td>
<td align="center" valign="top">82.59</td>
<td align="center" valign="top">75.11</td>
<td align="center" valign="top">80.80</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn2-ol-0-0-11576"><p>KNN, K-nearest neighbor.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tIV-ol-0-0-11576" position="float">
<label>Table IV.</label>
<caption><p>Performance comparison of different pulmonary nodule classification methods.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Method</th>
<th align="center" valign="bottom">Accuracy, &#x0025;</th>
<th align="center" valign="bottom">Precision, &#x0025;</th>
<th align="center" valign="bottom">Sensitivity, &#x0025;</th>
<th align="center" valign="bottom">Area under the curve</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Zinovev <italic>et al</italic> (<xref rid="b33-ol-0-0-11576" ref-type="bibr">33</xref>)</td>
<td align="center" valign="top">68.50</td>
<td align="center" valign="top">69.66</td>
<td align="center" valign="top">73.45</td>
<td align="center" valign="top">0.72</td>
</tr>
<tr>
<td align="left" valign="top">Shen <italic>et al</italic> (<xref rid="b34-ol-0-0-11576" ref-type="bibr">34</xref>)</td>
<td align="center" valign="top">82.12</td>
<td align="center" valign="top">84.10</td>
<td align="center" valign="top">78.65</td>
<td align="center" valign="top">0.78</td>
</tr>
<tr>
<td align="left" valign="top">Rodrigues <italic>et al</italic> (<xref rid="b35-ol-0-0-11576" ref-type="bibr">35</xref>)</td>
<td align="center" valign="top">73.45</td>
<td align="center" valign="top">75.20</td>
<td align="center" valign="top">79.20</td>
<td align="center" valign="top">0.75</td>
</tr>
<tr>
<td align="left" valign="top">Kumar <italic>et al</italic> (<xref rid="b13-ol-0-0-11576" ref-type="bibr">13</xref>)</td>
<td align="center" valign="top">71.30</td>
<td align="center" valign="top">69.73</td>
<td align="center" valign="top">77.95</td>
<td align="center" valign="top">0.74</td>
</tr>
<tr>
<td align="left" valign="top">Sun <italic>et al</italic> (<xref rid="b36-ol-0-0-11576" ref-type="bibr">36</xref>)</td>
<td align="center" valign="top">72.80</td>
<td align="center" valign="top">75.11</td>
<td align="center" valign="top">80.80</td>
<td align="center" valign="top">0.77</td>
</tr>
<tr>
<td align="left" valign="top">Proposed</td>
<td align="center" valign="top">93.10</td>
<td align="center" valign="top">83.85</td>
<td align="center" valign="top">81.75</td>
<td align="center" valign="top">0.82</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="tV-ol-0-0-11576" position="float">
<label>Table V.</label>
<caption><p>Comparison of classification performance for different convolutional neural network models.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Convolutional neural network model</th>
<th align="center" valign="bottom">Accuracy, &#x0025;</th>
<th align="center" valign="bottom">Precision, &#x0025;</th>
<th align="center" valign="bottom">Sensitivity, &#x0025;</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Plain-18</td>
<td align="center" valign="top">66.75</td>
<td align="center" valign="top">65.30</td>
<td align="center" valign="top">66.00</td>
</tr>
<tr>
<td align="left" valign="top">ResNet-18</td>
<td align="center" valign="top">87.15</td>
<td align="center" valign="top">84.10</td>
<td align="center" valign="top">85.65</td>
</tr>
<tr>
<td align="left" valign="top">ResNet-50</td>
<td align="center" valign="top">85.75</td>
<td align="center" valign="top">69.75</td>
<td align="center" valign="top">69.25</td>
</tr>
<tr>
<td align="left" valign="top">GoogLeNet</td>
<td align="center" valign="top">86.20</td>
<td align="center" valign="top">79.00</td>
<td align="center" valign="top">79.50</td>
</tr>
<tr>
<td align="left" valign="top">VGGNet-16</td>
<td align="center" valign="top">86.30</td>
<td align="center" valign="top">85.25</td>
<td align="center" valign="top">80.90</td>
</tr>
<tr>
<td align="left" valign="top">SENet</td>
<td align="center" valign="top">87.00</td>
<td align="center" valign="top">83.35</td>
<td align="center" valign="top">84.80</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</article>
