<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "journalpublishing3.dtd">
<article xml:lang="en" article-type="research-article" xmlns:xlink="http://www.w3.org/1999/xlink">
<?release-delay 0|0?>
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">MCO</journal-id>
<journal-title-group>
<journal-title>Molecular and Clinical Oncology</journal-title>
</journal-title-group>
<issn pub-type="ppub">2049-9450</issn>
<issn pub-type="epub">2049-9469</issn>
<publisher>
<publisher-name>D.A. Spandidos</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3892/mco.2019.1932</article-id>
<article-id pub-id-type="publisher-id">MCO-0-0-1932</article-id>
<article-categories>
<subj-group>
<subject>Articles</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Miyagi</surname><given-names>Yasunari</given-names></name>
<xref rid="af1-mco-0-0-1932" ref-type="aff">1</xref>
<xref rid="af2-mco-0-0-1932" ref-type="aff">2</xref>
<xref rid="af3-mco-0-0-1932" ref-type="aff">3</xref>
<xref rid="c1-mco-0-0-1932" ref-type="corresp"/></contrib>
<contrib contrib-type="author"><name><surname>Takehara</surname><given-names>Kazuhiro</given-names></name>
<xref rid="af4-mco-0-0-1932" ref-type="aff">4</xref></contrib>
<contrib contrib-type="author"><name><surname>Miyake</surname><given-names>Takahito</given-names></name>
<xref rid="af5-mco-0-0-1932" ref-type="aff">5</xref></contrib>
</contrib-group>
<aff id="af1-mco-0-0-1932"><label>1</label>Medical Data Labo, Okayama 703-8267, Japan</aff>
<aff id="af2-mco-0-0-1932"><label>2</label>Department of Gynecologic Oncology, Saitama Medical University International Medical Center, Hidaka, Saitama 350-1298, Japan</aff>
<aff id="af3-mco-0-0-1932"><label>3</label>Department of Gynecology, Miyake Ofuku Clinic, Okayama 701-0204, Japan</aff>
<aff id="af4-mco-0-0-1932"><label>4</label>Department of Gynecologic Oncology, National Hospital Organization, Shikoku Cancer Center, Matsuyama, Ehime 791-0208, Japan</aff>
<aff id="af5-mco-0-0-1932"><label>5</label>Department of Obstetrics and Gynecology, Miyake Clinic, Okayama 701-0204, Japan</aff>
<author-notes>
<corresp id="c1-mco-0-0-1932"><italic>Correspondence to</italic>: Dr Yasunari Miyagi, Medical Data Labo, 289-48 Yamasaki, Naka Ward, Okayama 703-8267, Japan, E-mail: <email>ymiyagi@mac.com</email></corresp>
</author-notes>
<pub-date pub-type="ppub">
<month>12</month>
<year>2019</year></pub-date>
<pub-date pub-type="epub">
<day>04</day>
<month>10</month>
<year>2019</year></pub-date>
<volume>11</volume>
<issue>6</issue>
<fpage>583</fpage>
<lpage>589</lpage>
<history>
<date date-type="received"><day>26</day><month>04</month><year>2019</year></date>
<date date-type="accepted"><day>09</day><month>09</month><year>2019</year></date>
</history>
<permissions>
<copyright-statement>Copyright: &#x00A9; Miyagi et al.</copyright-statement>
<copyright-year>2019</copyright-year>
<license license-type="open-access">
<license-p>This is an open access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivs License</ext-link>, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.</license-p></license>
</permissions>
<abstract>
<p>The aim of the present study was to explore the feasibility of using deep learning as artificial intelligence (AI) to classify cervical squamous intraepithelial lesions (SIL) from colposcopy images. A total of 330 patients who underwent colposcopy and biopsy by gynecologic oncologists were enrolled in the current study. A total of 97 patients received a pathological diagnosis of low-grade SIL (LSIL) and 213 of high-grade SIL (HSIL). An original AI-classifier with 11 layers of the convolutional neural network was developed and trained. The accuracy, sensitivity, specificity and Youden&#x0027;s J index of the AI-classifier and oncologists for diagnosing HSIL were 0.823 and 0.797, 0.800 and 0.831, 0.882 and 0.773, and 0.682 and 0.604, respectively. The area under the receiver-operating characteristic curve was 0.826&#x00B1;0.052 (mean &#x00B1; standard error), and the 95&#x0025; confidence interval 0.721&#x2013;0.928. The optimal cut-off point was 0.692. Cohen&#x0027;s Kappa coefficient for AI and colposcopy was 0.437 (P&#x003C;0.0005). The AI-classifier performed better than the oncologists, although not significantly. Although further study is required, the clinical use of AI for the classification of HSIL/LSIL from colposcopy images may be feasible.</p>
</abstract>
<kwd-group>
<kwd>colposcopy</kwd>
<kwd>cervical cancer</kwd>
<kwd>cervical intraepithelial neoplasia</kwd>
<kwd>deep learning</kwd>
<kwd>artificial intelligence</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro">
<title>Introduction</title>
<p>With recent advancements in computer science, artificial intelligence (AI) has made remarkable progress. The hypothetical moment in time when AI becomes so advanced that humanity undergoes a dramatic and irreversible change (<xref rid="b1-mco-0-0-1932" ref-type="bibr">1</xref>) is likely to arrive in this century. AI has already exceeded human experts in the field of games with perfect information, such as Go (<xref rid="b2-mco-0-0-1932" ref-type="bibr">2</xref>), demonstrating novel tactics. Therefore, since AI can recognize information that conventional procedures cannot detect, it may provide more precise diagnoses in practical medicine. AI may also support clinicians in practical medicine, reducing the necessary time and effort. The aim of the present study was to investigate the feasibility of applying deep learning, a type of AI, to gynecological clinical practice.</p>
<p>Uterine cervical cancer continues to be a major public health problem. Cervical cancer is the third most common female cancer and the leading cause of cancer-related mortality among women in Eastern, Western and Middle Africa, Central America, South-Central Asia and Melanesia. New methodologies of cervical cancer prevention should be made available and accessible to women of all countries (<xref rid="b3-mco-0-0-1932" ref-type="bibr">3</xref>).</p>
<p>Colposcopy is a well-established tool for examining the cervix under magnification (<xref rid="b4-mco-0-0-1932" ref-type="bibr">4</xref>&#x2013;<xref rid="b6-mco-0-0-1932" ref-type="bibr">6</xref>). When the cervix is treated with acetic acid diluted to 3&#x2013;5&#x0025;, colposcopy can detect and characterize cervical intraepithelial neoplasia (CIN). Classification systems, such as the 2002 Bethesda System, are used to categorize lesions as high-grade squamous intraepithelial lesions (HSIL) or low-grade SIL (LSIL) (<xref rid="b7-mco-0-0-1932" ref-type="bibr">7</xref>,<xref rid="b8-mco-0-0-1932" ref-type="bibr">8</xref>). HSIL and LSIL were previously referred to as CIN2/CIN3 and CIN1, respectively. In clinical practice, it is important for clinicians to distinguish HSIL from LSIL in biopsy specimens, since further examination or treatment, such as conization, may be required for HSIL. Expert gynecologic oncologists spend considerable time and effort to provide precise colposcopy findings.</p>
<p>For these reasons, we explored whether AI can evaluate colposcopy findings as well as a gynecologic oncologist. In the present study, we applied deep learning with a convolutional neural network, a technique within the realm of AI, to develop an original classifier for predicting HSIL or LSIL from colposcopy images. Deep learning is becoming very popular among machine learning methods, which include logistic regression (<xref rid="b9-mco-0-0-1932" ref-type="bibr">9</xref>), naive Bayes (<xref rid="b10-mco-0-0-1932" ref-type="bibr">10</xref>), nearest neighbor (<xref rid="b11-mco-0-0-1932" ref-type="bibr">11</xref>), random forest (<xref rid="b12-mco-0-0-1932" ref-type="bibr">12</xref>) and neural networks (<xref rid="b13-mco-0-0-1932" ref-type="bibr">13</xref>). The classifier program was developed using supervised deep learning with a convolutional neural network (<xref rid="b14-mco-0-0-1932" ref-type="bibr">14</xref>), which attempts to mimic the visual cortex of the mammalian brain (<xref rid="b15-mco-0-0-1932" ref-type="bibr">15</xref>&#x2013;<xref rid="b23-mco-0-0-1932" ref-type="bibr">23</xref>), in order to categorize colposcopy images as either HSIL or LSIL. The present study demonstrated the effective use of the AI colposcopy image classifier in predicting HSIL or LSIL by comparing its colposcopic diagnosis to that of gynecologic oncologists.</p>
</sec>
<sec sec-type="subjects|methods">
<title>Patients and methods</title>
<sec>
<title/>
<sec>
<title>Patients</title>
<p>This retrospective study used fully deidentified patient data and was approved by the Institutional Review Board of Shikoku Cancer Center (approval no. 2017-81). The study was explained to the patients who underwent cervical biopsy by gynecologic oncologists at Shikoku Cancer Center from January 1, 2012 to December 31, 2017. Patients were also directed to a website with additional information, including an opt-out option for the study. The Institutional Review Board of Shikoku Cancer Center approved the opt-out option allowing patients to withdraw from this study. Gynecologic oncologists at the Shikoku Cancer Center determined whether biopsy was necessary during routine conventional practice. A total of 330 patients were enrolled in this study.</p>
</sec>
<sec>
<title>Images</title>
<p>Colposcopy images of lesions processed with acetic acid prior to biopsy were captured, cropped to a square and saved in JPEG format. The images were used retrospectively as the input data for deep learning. Gynecologists biopsied the most advanced lesion, the pathological results of which were revealed later.</p>
</sec>
<sec>
<title>Preparation for AI</title>
<p>All deidentified images stored offline were transferred to our AI-based system. Each image was cropped to a square and then saved. Twenty percent of the images were randomly selected as the test dataset, and the rest were used as the training dataset. Next, 20&#x0025; of the training dataset was used as the validation dataset, and the rest was used to train the AI-classifier. Thus, the training, validation and test datasets did not overlap. In this way, the AI-classifier was trained on the training dataset, validated during training and then tested on the test dataset. The number of training images was increased, as is often done in computer science, in a process known as data augmentation. In the present study, the training dataset was augmented by rotating images through arbitrary degrees, as the rotated images constitute different vector data while belonging to the same category.</p>
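The split-and-augment procedure described above was implemented by the authors in Mathematica; as an illustrative sketch only (the function names are hypothetical, the original work rotated images by arbitrary degrees rather than the right-angle rotations used here), the non-overlapping split and rotation-based augmentation can be expressed as:

```python
import random

def rotate90(image):
    # Rotate a square image (a list of rows of pixel values) by 90 degrees.
    return [list(row) for row in zip(*image[::-1])]

def split_dataset(samples, test_frac=0.2, val_frac=0.2, seed=0):
    # 20% of the samples become the test set; 20% of the remainder becomes
    # the validation set; the rest trains the classifier. No overlap.
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, rest = shuffled[:n_test], shuffled[n_test:]
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test  # train, validation, test

def augment_by_rotation(train):
    # Add 90/180/270-degree rotated copies of every training image: each
    # rotation is different vector data but keeps the same LSIL/HSIL label.
    augmented = []
    for image, label in train:
        augmented.append((image, label))
        rotated = image
        for _ in range(3):
            rotated = rotate90(rotated)
            augmented.append((rotated, label))
    return augmented
```

With 310 samples, this split yields 62 test, 49 validation and 199 training cases before augmentation.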
</sec>
<sec>
<title>AI-classifier</title>
<p>We developed classifier programs using supervised deep learning with a convolutional neural network (<xref rid="b14-mco-0-0-1932" ref-type="bibr">14</xref>,<xref rid="b19-mco-0-0-1932" ref-type="bibr">19</xref>). We tested numerous convolutional neural networks by varying the L2 regularization (<xref rid="b24-mco-0-0-1932" ref-type="bibr">24</xref>,<xref rid="b25-mco-0-0-1932" ref-type="bibr">25</xref>); the architectures consisted of a combination of convolution layers with kernels (<xref rid="b26-mco-0-0-1932" ref-type="bibr">26</xref>&#x2013;<xref rid="b28-mco-0-0-1932" ref-type="bibr">28</xref>), pooling layers (<xref rid="b29-mco-0-0-1932" ref-type="bibr">29</xref>&#x2013;<xref rid="b32-mco-0-0-1932" ref-type="bibr">32</xref>), flattened layers (<xref rid="b33-mco-0-0-1932" ref-type="bibr">33</xref>), linear layers (<xref rid="b34-mco-0-0-1932" ref-type="bibr">34</xref>,<xref rid="b35-mco-0-0-1932" ref-type="bibr">35</xref>), rectified linear unit layers (<xref rid="b36-mco-0-0-1932" ref-type="bibr">36</xref>,<xref rid="b37-mco-0-0-1932" ref-type="bibr">37</xref>) and a softmax layer (<xref rid="b38-mco-0-0-1932" ref-type="bibr">38</xref>,<xref rid="b39-mco-0-0-1932" ref-type="bibr">39</xref>) that provided the probability of LSIL or HSIL from an image (<xref rid="tI-mco-0-0-1932" ref-type="table">Table I</xref>). We also tested the ResNet-50 network (<xref rid="b40-mco-0-0-1932" ref-type="bibr">40</xref>), which performed very well in the ImageNet Large Scale Visual Recognition Challenge (<xref rid="b41-mco-0-0-1932" ref-type="bibr">41</xref>) and the Microsoft Common Objects in Context (MS-COCO) (<xref rid="b42-mco-0-0-1932" ref-type="bibr">42</xref>) competition. We modified the ResNet-50: its first layer was replaced with a convolutional layer with a kernel size of 4, a stride size of 2, a padding size of 2 and an input image size of 111&#x00D7;111 pixels, which is the minimum size for the ResNet-50. The last layer of the ResNet-50 was also replaced with a linear layer, followed by a softmax layer with an output vector of size 2.</p>
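The exact 11-layer architecture is specified in Table I; purely as an illustration of the layer types named above (convolution with a kernel, rectified linear unit, pooling, flattening, linear and softmax layers), a minimal pure-Python forward pass through one such stack might look as follows. All sizes and weights here are hypothetical, not the trained parameters of the actual classifier:

```python
import math

def conv2d(image, kernel):
    # 'Valid' 2-D convolution of a single-channel image with a square kernel.
    k = len(kernel)
    out_h, out_w = len(image) - k + 1, len(image[0]) - k + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(out_w)] for i in range(out_h)]

def relu(feature_map):
    # Rectified linear unit, applied element-wise.
    return [[max(0.0, v) for v in row] for row in feature_map]

def maxpool(feature_map, size=2):
    # Non-overlapping max pooling over size x size windows.
    return [[max(feature_map[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(feature_map[0]) - size + 1, size)]
            for i in range(0, len(feature_map) - size + 1, size)]

def flatten(feature_map):
    # Flatten a 2-D feature map into one vector.
    return [v for row in feature_map for v in row]

def linear(vec, weights, bias):
    # Fully connected layer: one output per weight row.
    return [sum(w * v for w, v in zip(row, vec)) + b
            for row, b in zip(weights, bias)]

def softmax(vec):
    # Convert the two final scores into P(LSIL) and P(HSIL).
    exps = [math.exp(v - max(vec)) for v in vec]
    total = sum(exps)
    return [e / total for e in exps]
```

Chaining these as conv &#8594; ReLU &#8594; pool &#8594; flatten &#8594; linear &#8594; softmax mirrors the layer sequence described in the text, with the softmax output giving the class probabilities.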
<p>Cross-validation (<xref rid="b43-mco-0-0-1932" ref-type="bibr">43</xref>&#x2013;<xref rid="b45-mco-0-0-1932" ref-type="bibr">45</xref>), a powerful method for model selection, was applied to identify the optimal method of machine learning. The suitable number of training images was investigated by evaluating the accuracy and variances using the 5-fold cross-validation method. This calculation procedure reveals the optimal number of training data and can be used to avoid overfitting (<xref rid="b46-mco-0-0-1932" ref-type="bibr">46</xref>&#x2013;<xref rid="b51-mco-0-0-1932" ref-type="bibr">51</xref>), which is a modeling error that occurs when a classifier is too closely fit to a limited set of data points. After the optimal number of training images was obtained, the classifier that showed the best accuracy was selected, as is standard practice in computer science. The conventional colposcopy diagnosis and the AI colposcopy diagnosis for the test dataset were compared.</p>
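The 5-fold procedure above can be sketched as follows; `train_fn` and `score_fn` are hypothetical placeholders for the training routine and the accuracy evaluation, which in the original work were Mathematica functions:

```python
def k_fold_splits(samples, k=5):
    # Partition the samples into k folds; each fold serves once as the
    # held-out set while the remaining folds form the training set.
    folds = [samples[i::k] for i in range(k)]
    for i in range(k):
        held_out = folds[i]
        training = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield training, held_out

def cross_validate(samples, train_fn, score_fn, k=5):
    # Mean accuracy and its variance across the k folds; comparing these
    # across candidate training-set sizes reveals the optimal number of
    # training data and helps to avoid overfitting.
    scores = []
    for training, held_out in k_fold_splits(samples, k):
        model = train_fn(training)
        scores.append(score_fn(model, held_out))
    mean = sum(scores) / k
    variance = sum((s - mean) ** 2 for s in scores) / k
    return mean, variance
```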
</sec>
<sec>
<title>Development environment</title>
<p>The following development environment was used in the present study: A Mac running OS X 10.14.3 (Apple, Inc.) and Mathematica 11.3.0.0 (Wolfram Research).</p>
</sec>
<sec>
<title>Statistical analysis</title>
<p>The laboratory and AI-classifier data were compared. The proportions achieved by the gynecologic oncologists and by the deep learning classifier were compared using the two-proportion z-test. The agreements among the conventional colposcopy, the AI classifier and the pathological results were evaluated by Cohen&#x0027;s Kappa (<xref rid="b52-mco-0-0-1932" ref-type="bibr">52</xref>) coefficients. The formula to calculate Cohen&#x0027;s kappa for two raters is as follows:</p>
<p>(A<sub>observed</sub> - A<sub>expected by chance</sub>)/(1- A<sub>expected by chance</sub>)</p>
<p>where:</p>
<p>A<sub>observed</sub> = the relative observed agreement among raters,</p>
<p>A<sub>expected by chance</sub> = the hypothetical probability of chance agreement. Mathematica 11.3.0.0 (Wolfram Research) was used for all statistical analyses.</p>
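As a sketch of the formula above (the actual analyses were performed in Mathematica), Cohen's kappa for two raters can be computed directly from the two rating lists:

```python
def cohens_kappa(ratings_a, ratings_b):
    # kappa = (A_observed - A_expected_by_chance) / (1 - A_expected_by_chance)
    n = len(ratings_a)
    # Relative observed agreement among the raters.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Hypothetical probability of chance agreement, from marginal frequencies.
    categories = set(ratings_a) | set(ratings_b)
    chance = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                 for c in categories)
    return (observed - chance) / (1 - chance)
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields kappa = 0.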
</sec>
</sec>
</sec>
<sec sec-type="results">
<title>Results</title>
<sec>
<title/>
<sec>
<title>Profiles of pathological diagnosis and colposcopy</title>
<p>The pathological diagnoses and corresponding number of patients were as follows: LSIL, 97; HSIL, 213; squamous cell carcinoma, 12; microinvasive squamous cell carcinoma, 1; adenocarcinoma, 5; adenocarcinoma in situ, 2. A total of 310 images of pathological LSIL and HSIL were used, due to the limited number of available images. Among the 213 pathological HSIL cases, 177, 32, 3 and 1 received a conventional colposcopy diagnosis by gynecologists of HSIL, LSIL, invasive cancer and cervicitis, respectively. Among the 97 pathological LSIL cases, 22, 70 and 5 received a conventional colposcopy diagnosis by gynecologists of HSIL, LSIL and cervicitis, respectively (<xref rid="tII-mco-0-0-1932" ref-type="table">Table II</xref>). The accuracy, sensitivity, specificity, positive predictive value, negative predictive value and Youden&#x0027;s J index (<xref rid="b53-mco-0-0-1932" ref-type="bibr">53</xref>) for HSIL, as determined by gynecologists were 0.797 (247/310), 0.831 (177/213), 0.773 (75/97), 0.889 (177/199), 0.686 (70/102) and 0.604, respectively (<xref rid="tIII-mco-0-0-1932" ref-type="table">Table III</xref>).</p>
</sec>
<sec>
<title>AI-classifier results</title>
<p>The best accuracy for HSIL was 0.823 (51/62), when the number of the augmented training data set was 1,488, the value of L2 regularization 0.175, the number of layers of the architecture 11 (<xref rid="tI-mco-0-0-1932" ref-type="table">Table I</xref>) and the image size 70&#x00D7;70 pixels. The sensitivity, specificity, positive predictive value, negative predictive value and Youden&#x0027;s J index were 0.800 (36/45), 0.882 (15/17), 0.947 (36/38), 0.625 (15/24) and 0.682, respectively (<xref rid="tIII-mco-0-0-1932" ref-type="table">Table III</xref>). The accuracy, sensitivity, specificity, positive predictive value, negative predictive value and Youden&#x0027;s J index of the best modified ResNet-50 were 0.790 (49/62), 0.847 (39/46), 0.625 (10/16), 0.867 (39/45), 0.588 (10/17) and 0.472, respectively. The original convolutional neural network was better than the modified ResNet-50, although not significantly. There were no significant differences between the gynecologic oncologists and the best AI in accuracy, sensitivity, specificity, or positive or negative predictive value, as determined by a proportional test.</p>
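All of the indices reported here are functions of the four confusion-matrix counts. As an illustrative check, using the counts implied by the fractions reported above for the best classifier (TP=36, FP=2, FN=9, TN=15), the indices can be computed as:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard diagnostic indices from confusion-matrix counts, where a
    # 'positive' call is a diagnosis of HSIL.
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),     # positive predictive value
        "npv": tn / (tn + fn),     # negative predictive value
        "youden_j": sensitivity + specificity - 1,
    }
```

Plugging in the test-set counts reproduces the reported 0.823 accuracy, 0.800 sensitivity, 0.882 specificity and 0.682 Youden's J.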
<p>Using the confidence score, the area under the receiver-operating characteristic (ROC) curve of the best classifier for predicting HSIL was 0.824&#x00B1;0.052 (mean &#x00B1; SE), with a 95&#x0025; confidence interval of 0.721&#x2013;0.928. The ROC curve is shown in <xref rid="f1-mco-0-0-1932" ref-type="fig">Fig. 1</xref>. The optimal cut-off point was 0.692.</p>
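The optimal cut-off point on an ROC curve is conventionally the threshold that maximizes Youden's J (sensitivity + specificity - 1). A sketch with hypothetical confidence scores, assuming a case is called HSIL when its score reaches the threshold and that both classes are present:

```python
def roc_points(scores, labels):
    # Sweep thresholds over the observed confidence scores; at each
    # threshold t, a case is called HSIL when its score >= t.
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and not l)
        fn = sum(1 for s, l in zip(scores, labels) if s < t and l)
        tn = sum(1 for s, l in zip(scores, labels) if s < t and not l)
        points.append((t, tp / (tp + fn), fp / (fp + tn)))  # (thr, TPR, FPR)
    return points

def optimal_cutoff(scores, labels):
    # Youden's J at a point on the ROC curve is TPR - FPR; the optimal
    # cut-off is the threshold that maximizes J.
    return max(roc_points(scores, labels), key=lambda p: p[1] - p[2])[0]
```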
</sec>
<sec>
<title>Comparison of AI-classifier with conventional colposcopy</title>
<p>The associations among the conventional colposcopy, the AI classifier and the pathological results for the test dataset, which comprised 20&#x0025; of the patients with pathological HSIL or LSIL diagnosed by punch biopsy, are shown in <xref rid="tIV-mco-0-0-1932" ref-type="table">Tables IV</xref>&#x2013;<xref rid="tVI-mco-0-0-1932" ref-type="table">VI</xref>. Cohen&#x0027;s Kappa (<xref rid="b52-mco-0-0-1932" ref-type="bibr">52</xref>) coefficients of the conventional colposcopy and pathological results, the AI classifier and pathological results, and the conventional colposcopy and the AI classifier were 0.691, 0.561 and 0.437 (all P&#x003C;0.0001), respectively. All indicated at least moderate agreement (<xref rid="b54-mco-0-0-1932" ref-type="bibr">54</xref>). Conventional colposcopy showed better agreement than the AI, although the difference was not significant. Classification took less than 0.2 sec per image.</p>
</sec>
</sec>
</sec>
<sec sec-type="discussion">
<title>Discussion</title>
<p>We developed a classifier using deep learning with convolutional neural networks using images of cervical SILs to predict the pathological diagnosis. In the present study, the accuracy values achieved by the classifier and by gynecologic oncologists were 0.823 and 0.797, respectively (<xref rid="tIII-mco-0-0-1932" ref-type="table">Table III</xref>). The sensitivity values of the classifier and gynecologic oncologists were 0.800 and 0.831, respectively. The specificity values of the classifier and gynecologic oncologists were 0.882 and 0.773, respectively. The accuracy and specificity of the classifier were superior to those of gynecologic oncologists, although the difference was not significant. Only moderate agreement was obtained between conventional colposcopy diagnosis and AI colposcopy diagnosis, with a Kappa value of 0.437. McHugh reported that Cohen suggested 0.41 might be acceptable, and that the Kappa result be interpreted as follows: Values &#x2264;0 indicating no agreement, 0.01&#x2013;0.20 none to slight, 0.21&#x2013;0.40 fair, 0.41&#x2013;0.60 moderate, 0.61&#x2013;0.80 substantial and 0.81&#x2013;1.00 almost perfect agreement (<xref rid="b54-mco-0-0-1932" ref-type="bibr">54</xref>). Thus, the Kappa value of 0.437 was acceptable. However, although AI colposcopy may have potential, it should not be considered an alternative to conventional colposcopy without further studies.</p>
<p>Several reports have used AI (<xref rid="b55-mco-0-0-1932" ref-type="bibr">55</xref>) for deep learning with convolutional neural networks in medicine (<xref rid="b56-mco-0-0-1932" ref-type="bibr">56</xref>). The accuracy values of this method with deep learning have been published and include 0.997 for the histopathological diagnosis of breast cancer (<xref rid="b57-mco-0-0-1932" ref-type="bibr">57</xref>), 0.83&#x2013;0.90 for the early diagnosis of Alzheimer&#x0027;s disease (<xref rid="b58-mco-0-0-1932" ref-type="bibr">58</xref>), 0.83 for urological dysfunctions (<xref rid="b59-mco-0-0-1932" ref-type="bibr">59</xref>), 0.72 (<xref rid="b60-mco-0-0-1932" ref-type="bibr">60</xref>) and 0.50 (<xref rid="b61-mco-0-0-1932" ref-type="bibr">61</xref>) for colposcopy, 0.68&#x2013;0.70 for localization of rectal cancer (<xref rid="b62-mco-0-0-1932" ref-type="bibr">62</xref>), 0.83 for the diagnostic imaging of orthopedic trauma (<xref rid="b63-mco-0-0-1932" ref-type="bibr">63</xref>), 0.98 for the morphological quality of blastocysts and evaluation by an embryologist (<xref rid="b64-mco-0-0-1932" ref-type="bibr">64</xref>), 0.65 for predicting live birth without aneuploidy from a blastocyst image (<xref rid="b65-mco-0-0-1932" ref-type="bibr">65</xref>) and 0.64&#x2013;0.88 for predicting live birth from a blastocyst image of patients by age (<xref rid="b66-mco-0-0-1932" ref-type="bibr">66</xref>).</p>
<p>Several studies have reported limitations of conventional colposcopy. An investigation of the accuracy of colposcopically-directed biopsy reported a total biopsy failure rate, comprising both non-biopsy and incorrect selection of biopsy site, of 0.20 in CIN1, 0.11 in CIN2 and 0.09 in CIN3 (<xref rid="b67-mco-0-0-1932" ref-type="bibr">67</xref>). The colposcopic impression of high-grade CIN had a sensitivity of 0.54 and a specificity of 0.88, as determined by 9 expert colposcopists in 100 cervigrams (<xref rid="b68-mco-0-0-1932" ref-type="bibr">68</xref>). The sensitivity of an online colpophotographic assessment of high-grade disease (CIN2 and CIN3) by 20 colposcopists was 0.39 (<xref rid="b69-mco-0-0-1932" ref-type="bibr">69</xref>). Thus, conventional colposcopy does not provide good sensitivity, even when performed by colposcopists. By contrast, the accuracy and sensitivity reported in the present study for predicting HSIL from colposcopy images using deep learning were 0.823 and 0.800, respectively, which appears favorable. Since the classifier was not trained on colposcopy findings, such as acetowhite epithelium, mosaic and punctation, it may recognize certain features of cervical SILs by itself in high-dimensional space. It is possible that the AI-classifier recognizes features that colposcopists do not, such as the complexity of the shape of the lesion, the relative or absolute brightness of acetowhite areas, the distribution of punctation density and the quantitative evaluation of lesion margins. The pathological results were obtained and defined by punch biopsy in this study, as it was not ethically recommended for patients with LSIL (CIN1) diagnosed by colposcopy to undergo conization or hysterectomy. If the pathological results had been defined by conization or hysterectomy, more advanced lesions may have been revealed, and thus both conventional colposcopy and the AI classifier may have demonstrated different results. In the present study, we only aimed to compare the effectiveness of AI with that of conventional colposcopy for SIL. When AI is used to predict more advanced diseases, such as squamous cell carcinoma, adenocarcinoma and adenocarcinoma in situ, the pathological diagnosis should be provided not by punch biopsy but by conization or hysterectomy.</p>
<p>In clinical practice, it is important for clinicians to distinguish HSIL from LSIL in biopsy specimens. Further examination or treatment, such as conization, may be required for HSIL. When a reliable classifier indicates HSIL from colposcopy images in clinical practice, the clinician should consider biopsy. The accuracy values of the classifier and gynecologists for detecting HSIL were 0.823 and 0.797, respectively. The classifier might help untrained clinicians to avoid or reduce the risk of overlooking HSIL. When the AI-classifier achieves higher accuracy, sensitivity and specificity for classifying HSIL/LSIL, clinicians will be able to practice more precisely by referencing the AI. Furthermore, a gynecologist could reduce the time and effort it takes to become a colposcopy expert and, as a result, improve other skills, training and activities.</p>
<p>The architecture of neural networks has progressed. LeNet, published in 1998 (<xref rid="b70-mco-0-0-1932" ref-type="bibr">70</xref>), consisted of 5 layers. AlexNet, published in 2012 (<xref rid="b38-mco-0-0-1932" ref-type="bibr">38</xref>), consisted of 14 layers, and GoogLeNet, published in 2014 (<xref rid="b35-mco-0-0-1932" ref-type="bibr">35</xref>), was constructed from a combination of micro networks. ResNet-50, published in 2015 (<xref rid="b40-mco-0-0-1932" ref-type="bibr">40</xref>), consisted of modules with a shortcut process. The Squeeze-and-Excitation Networks were published in 2017 (<xref rid="b71-mco-0-0-1932" ref-type="bibr">71</xref>). AI used for image recognition is still being developed, and progress in AI will allow us to achieve better results. Image size is one of the parameters that requires investigation. In one study, only 15&#x00D7;15 pixels were used to detect cervical cancer (<xref rid="b72-mco-0-0-1932" ref-type="bibr">72</xref>). In a colposcopy study (<xref rid="b61-mco-0-0-1932" ref-type="bibr">61</xref>), it was reported that the accuracy for images of 150&#x00D7;150 pixels was better than that for 32&#x00D7;32 or 300&#x00D7;300 pixels. Hence, image size remains an issue. We used 70&#x00D7;70 and 111&#x00D7;111 pixels for our images, in order to use the original neural networks and the modified ResNet-50, respectively. The original convolutional neural network was better than the modified ResNet-50, although not significantly. We believe that a pixel size of 70&#x00D7;70 falls within the acceptable range. Regularization values are also important parameters for constructing a good classifier that avoids overfitting. If the regularization value is too low, overfitting occurs; if the value is too large, the classifier will not be trained well. Choosing the appropriate number of training datasets is also very important. If the number of training datasets is too high, the accuracy will be lower and greater variance will be observed. The validation dataset, as well as L2 regularization, also prevents overfitting. The appropriate number of training datasets must be determined to obtain a good classifier. More varied patterns of images may be needed for the datasets. Ordinarily, 500&#x2013;1,000 images are prepared for each class during image classification with deep learning (<xref rid="b61-mco-0-0-1932" ref-type="bibr">61</xref>,<xref rid="b73-mco-0-0-1932" ref-type="bibr">73</xref>). Such large datasets would improve the accuracy and specificity of a deep learning classifier.</p>
<p>In the present study, a classifier was developed based on deep learning, which used images of uterine cervical SILs to predict pathological HSIL/LSIL. Its accuracy was 0.823. Although further study may be required to validate the classifier, we demonstrated that AI may have a clinical use in colposcopic examinations and may provide benefits to both patients and medical personnel.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>Not applicable.</p>
</ack>
<sec>
<title>Funding</title>
<p>No funding was received.</p>
</sec>
<sec>
<title>Availability of data and materials</title>
<p>The datasets generated and/or analyzed during the present study are not publicly available, since data sharing is not approved by the Institutional Review Board of Shikoku Cancer Center (approval no. 2017-81).</p>
</sec>
<sec>
<title>Authors&#x0027; contributions</title>
<p>YM designed the current study, performed AI programming, produced classifiers by AI, performed statistical analysis and wrote the manuscript. KT performed clinical intervention, data entry and collection, designed the current study, and critically revised the manuscript. TM designed the current study and critically revised the manuscript.</p>
</sec>
<sec>
<title>Ethics approval and consent to participate</title>
<p>The protocol for the present retrospective study used fully deidentified patient data and was approved by the Institutional Review Board of Shikoku Cancer Center (approval no. 2017-81). The study protocol was explained to the patients who underwent cervical biopsy at the Shikoku Cancer Center from January 1, 2012 to December 31, 2017. Patients were also directed to a website with additional information, including an opt-out option, allowing them to not participate. Written informed consent was not required, according to the guidance of the Ministry of Education, Culture, Sports, Science and Technology of Japan.</p>
</sec>
<sec>
<title>Patient consent for publication</title>
<p>The current study was explained to the patients who underwent cervical biopsy at the Shikoku Cancer Center from January 1, 2012 to December 31, 2017. Patients were also directed to a website with additional information, including an opt-out option that let them know they had the right to refuse publication.</p>
</sec>
<sec>
<title>Competing interests</title>
<p>YM and TM declare that they have no competing interests. KT reports personal fees from Taiho Pharmaceuticals, Chugai Pharma, AstraZeneca, Nippon Kayaku, Eisai, Ono Pharmaceutical, Terumo Corporation and Daiichi Sankyo, outside of the submitted work.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="b1-mco-0-0-1932"><label>1</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>M&#x00FC;ller</surname><given-names>VC</given-names></name><name><surname>Bostrom</surname><given-names>N</given-names></name></person-group><chapter-title>Future progress in artificial intelligence: A survey of expert opinion. In: Fundamental Issues of Artificial Intelligence</chapter-title><publisher-name>Springer</publisher-name><publisher-loc>Berlin</publisher-loc><fpage>555</fpage><lpage>572</lpage><year>2016</year></element-citation></ref>
<ref id="b2-mco-0-0-1932"><label>2</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Silver</surname><given-names>D</given-names></name><name><surname>Schrittwieser</surname><given-names>J</given-names></name><name><surname>Simonyan</surname><given-names>K</given-names></name><name><surname>Antonoglou</surname><given-names>I</given-names></name><name><surname>Huang</surname><given-names>A</given-names></name><name><surname>Guez</surname><given-names>A</given-names></name><name><surname>Hubert</surname><given-names>T</given-names></name><name><surname>Baker</surname><given-names>L</given-names></name><name><surname>Lai</surname><given-names>M</given-names></name><name><surname>Bolton</surname><given-names>A</given-names></name><etal/></person-group><article-title>Mastering the game of Go without human knowledge</article-title><source>Nature</source><volume>550</volume><fpage>354</fpage><lpage>359</lpage><year>2017</year><pub-id pub-id-type="doi">10.1038/nature24270</pub-id><pub-id pub-id-type="pmid">29052630</pub-id></element-citation></ref>
<ref id="b3-mco-0-0-1932"><label>3</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Arbyn</surname><given-names>M</given-names></name><name><surname>Castellsagu&#x00E9;</surname><given-names>X</given-names></name><name><surname>de Sanjos&#x00E9;</surname><given-names>S</given-names></name><name><surname>Bruni</surname><given-names>L</given-names></name><name><surname>Saraiya</surname><given-names>M</given-names></name><name><surname>Bray</surname><given-names>F</given-names></name><name><surname>Ferlay</surname><given-names>J</given-names></name></person-group><article-title>Worldwide burden of cervical cancer in 2008</article-title><source>Ann Oncol</source><volume>22</volume><fpage>2675</fpage><lpage>2686</lpage><year>2011</year><pub-id pub-id-type="doi">10.1093/annonc/mdr015</pub-id><pub-id pub-id-type="pmid">21471563</pub-id></element-citation></ref>
<ref id="b4-mco-0-0-1932"><label>4</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Garcia-Arteaga</surname><given-names>JD</given-names></name><name><surname>Kybic</surname><given-names>J</given-names></name><name><surname>Li</surname><given-names>W</given-names></name></person-group><article-title>Automatic colposcopy video tissue classification using higher order entropy-based image registration</article-title><source>Comput Biol Med</source><volume>41</volume><fpage>960</fpage><lpage>970</lpage><year>2011</year><pub-id pub-id-type="doi">10.1016/j.compbiomed.2011.07.010</pub-id><pub-id pub-id-type="pmid">21890126</pub-id></element-citation></ref>
<ref id="b5-mco-0-0-1932"><label>5</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kyrgiou</surname><given-names>M</given-names></name><name><surname>Tsoumpou</surname><given-names>I</given-names></name><name><surname>Vrekoussis</surname><given-names>T</given-names></name><name><surname>Martin-Hirsch</surname><given-names>P</given-names></name><name><surname>Arbyn</surname><given-names>M</given-names></name><name><surname>Prendiville</surname><given-names>W</given-names></name><name><surname>Mitrou</surname><given-names>S</given-names></name><name><surname>Koliopoulos</surname><given-names>G</given-names></name><name><surname>Dalkalitsis</surname><given-names>N</given-names></name><name><surname>Stamatopoulos</surname><given-names>P</given-names></name><name><surname>Paraskevaidis</surname><given-names>E</given-names></name></person-group><article-title>The up-to-date evidence on colposcopy practice and treatment of cervical intraepithelial neoplasia: The Cochrane colposcopy and cervical cytopathology collaborative group (C5 group) approach</article-title><source>Cancer Treat Rev</source><volume>32</volume><fpage>516</fpage><lpage>523</lpage><year>2006</year><pub-id pub-id-type="doi">10.1016/j.ctrv.2006.07.008</pub-id><pub-id pub-id-type="pmid">17008015</pub-id></element-citation></ref>
<ref id="b6-mco-0-0-1932"><label>6</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>O&#x0027;Neill</surname><given-names>E</given-names></name><name><surname>Reeves</surname><given-names>MF</given-names></name><name><surname>Creinin</surname><given-names>MD</given-names></name></person-group><article-title>Baseline colposcopic findings in women entering studies on female vaginal products</article-title><source>Contraception</source><volume>78</volume><fpage>162</fpage><lpage>166</lpage><year>2008</year><pub-id pub-id-type="doi">10.1016/j.contraception.2008.04.002</pub-id><pub-id pub-id-type="pmid">18672119</pub-id></element-citation></ref>
<ref id="b7-mco-0-0-1932"><label>7</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Waxman</surname><given-names>AG</given-names></name><name><surname>Chelmow</surname><given-names>D</given-names></name><name><surname>Darragh</surname><given-names>TM</given-names></name><name><surname>Lawson</surname><given-names>H</given-names></name><name><surname>Moscicki</surname><given-names>AB</given-names></name></person-group><article-title>Revised terminology for cervical histopathology and its implications for management of high-grade squamous intraepithelial lesions of the cervix</article-title><source>Obstet Gynecol</source><volume>120</volume><fpage>1465</fpage><lpage>1471</lpage><year>2012</year><pub-id pub-id-type="doi">10.1097/AOG.0b013e31827001d5</pub-id><pub-id pub-id-type="pmid">23168774</pub-id></element-citation></ref>
<ref id="b8-mco-0-0-1932"><label>8</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Darragh</surname><given-names>TM</given-names></name><name><surname>Colgan</surname><given-names>TJ</given-names></name><name><surname>Cox</surname><given-names>JT</given-names></name><name><surname>Heller</surname><given-names>DS</given-names></name><name><surname>Henry</surname><given-names>MR</given-names></name><name><surname>Luff</surname><given-names>RD</given-names></name><name><surname>McCalmont</surname><given-names>T</given-names></name><name><surname>Nayar</surname><given-names>R</given-names></name><name><surname>Palefsky</surname><given-names>JM</given-names></name><name><surname>Stoler</surname><given-names>MH</given-names></name><etal/></person-group><article-title>The lower anogenital squamous terminology standardization project for HPV-associated lesions: Background and consensus recommendations from the college of American pathologists and the American society for colposcopy and cervical pathology</article-title><source>J Low Genit Tract Dis</source><volume>16</volume><fpage>205</fpage><lpage>242</lpage><year>2012</year><pub-id pub-id-type="doi">10.1097/LGT.0b013e31825c31dd</pub-id><pub-id pub-id-type="pmid">22820980</pub-id></element-citation></ref>
<ref id="b9-mco-0-0-1932"><label>9</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dreiseitl</surname><given-names>S</given-names></name><name><surname>Ohno-Machado</surname><given-names>L</given-names></name></person-group><article-title>Logistic regression and artificial neural network classification models: A methodology review</article-title><source>J Biomed Inform</source><volume>35</volume><fpage>352</fpage><lpage>359</lpage><year>2002</year><pub-id pub-id-type="doi">10.1016/S1532-0464(03)00034-0</pub-id><pub-id pub-id-type="pmid">12968784</pub-id></element-citation></ref>
<ref id="b10-mco-0-0-1932"><label>10</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ben-Bassat</surname><given-names>M</given-names></name><name><surname>Klove</surname><given-names>KL</given-names></name><name><surname>Weil</surname><given-names>MH</given-names></name></person-group><article-title>Sensitivity analysis in Bayesian classification models: Multiplicative deviations</article-title><source>IEEE Trans Pattern Anal Mach Intell</source><volume>2</volume><fpage>261</fpage><lpage>266</lpage><year>1980</year><pub-id pub-id-type="doi">10.1109/TPAMI.1980.4767015</pub-id><pub-id pub-id-type="pmid">21868901</pub-id></element-citation></ref>
<ref id="b11-mco-0-0-1932"><label>11</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Friedman</surname><given-names>JH</given-names></name><name><surname>Baskett</surname><given-names>F</given-names></name><name><surname>Shustek</surname><given-names>LJ</given-names></name></person-group><article-title>An algorithm for finding nearest neighbors</article-title><source>IEEE Trans Comput</source><volume>24</volume><fpage>1000</fpage><lpage>1006</lpage><year>1975</year><pub-id pub-id-type="doi">10.1109/T-C.1975.224110</pub-id></element-citation></ref>
<ref id="b12-mco-0-0-1932"><label>12</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Breiman</surname><given-names>L</given-names></name></person-group><article-title>Random forests</article-title><source>Mach Learn</source><volume>45</volume><fpage>5</fpage><lpage>32</lpage><year>2001</year><pub-id pub-id-type="doi">10.1023/A:1010933404324</pub-id></element-citation></ref>
<ref id="b13-mco-0-0-1932"><label>13</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rumelhart</surname><given-names>D</given-names></name><name><surname>Hinton</surname><given-names>G</given-names></name><name><surname>Williams</surname><given-names>R</given-names></name></person-group><article-title>Learning representations by back-propagating errors</article-title><source>Nature</source><volume>323</volume><fpage>533</fpage><lpage>536</lpage><year>1986</year><pub-id pub-id-type="doi">10.1038/323533a0</pub-id></element-citation></ref>
<ref id="b14-mco-0-0-1932"><label>14</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bengio</surname><given-names>Y</given-names></name><name><surname>Courville</surname><given-names>A</given-names></name><name><surname>Vincent</surname><given-names>P</given-names></name></person-group><article-title>Representation learning: A review and new perspectives</article-title><source>IEEE Trans Pattern Anal Mach Intell</source><volume>35</volume><issue>8</issue><fpage>1798</fpage><lpage>1828</lpage><year>2013</year></element-citation></ref>
<ref id="b15-mco-0-0-1932"><label>15</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fukushima</surname><given-names>K</given-names></name></person-group><article-title>Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position</article-title><source>Biol Cybern</source><volume>36</volume><fpage>193</fpage><lpage>202</lpage><year>1980</year><pub-id pub-id-type="doi">10.1007/BF00344251</pub-id><pub-id pub-id-type="pmid">7370364</pub-id></element-citation></ref>
<ref id="b16-mco-0-0-1932"><label>16</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hubel</surname><given-names>DH</given-names></name><name><surname>Wiesel</surname><given-names>TN</given-names></name></person-group><article-title>Receptive fields and functional architecture of monkey striate cortex</article-title><source>J Physiol</source><volume>195</volume><fpage>215</fpage><lpage>243</lpage><year>1968</year><pub-id pub-id-type="doi">10.1113/jphysiol.1968.sp008455</pub-id><pub-id pub-id-type="pmid">4966457</pub-id></element-citation></ref>
<ref id="b17-mco-0-0-1932"><label>17</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hubel</surname><given-names>DH</given-names></name><name><surname>Wiesel</surname><given-names>TN</given-names></name></person-group><article-title>Receptive fields of single neurones in the cat&#x0027;s striate cortex</article-title><source>J Physiol</source><volume>148</volume><fpage>574</fpage><lpage>591</lpage><year>1959</year><pub-id pub-id-type="doi">10.1113/jphysiol.1959.sp006308</pub-id><pub-id pub-id-type="pmid">14403679</pub-id></element-citation></ref>
<ref id="b18-mco-0-0-1932"><label>18</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schmidhuber</surname><given-names>J</given-names></name></person-group><article-title>Deep learning in neural networks: An overview</article-title><source>Neural Netw</source><volume>61</volume><fpage>85</fpage><lpage>117</lpage><year>2015</year><pub-id pub-id-type="doi">10.1016/j.neunet.2014.09.003</pub-id><pub-id pub-id-type="pmid">25462637</pub-id></element-citation></ref>
<ref id="b19-mco-0-0-1932"><label>19</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>LeCun</surname><given-names>Y</given-names></name><name><surname>Bottou</surname><given-names>L</given-names></name><name><surname>Orr</surname><given-names>GB</given-names></name><name><surname>M&#x00FC;ller</surname><given-names>KR</given-names></name></person-group><chapter-title>Efficient BackProp</chapter-title><source>Neural Networks: Tricks of the Trade</source><publisher-name>Springer</publisher-name><publisher-loc>Berlin</publisher-loc><year>1998</year></element-citation></ref>
<ref id="b20-mco-0-0-1932"><label>20</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>LeCun</surname><given-names>Y</given-names></name><name><surname>Bottou</surname><given-names>L</given-names></name><name><surname>Bengio</surname><given-names>Y</given-names></name><name><surname>Haffner</surname><given-names>P</given-names></name></person-group><article-title>Gradient-based learning applied to document recognition</article-title><source>Proc IEEE</source><volume>86</volume><fpage>2278</fpage><lpage>2324</lpage><year>1998</year><pub-id pub-id-type="doi">10.1109/5.726791</pub-id></element-citation></ref>
<ref id="b21-mco-0-0-1932"><label>21</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>LeCun</surname><given-names>Y</given-names></name><name><surname>Boser</surname><given-names>B</given-names></name><name><surname>Denker</surname><given-names>JS</given-names></name><name><surname>Henderson</surname><given-names>D</given-names></name><name><surname>Howard</surname><given-names>RE</given-names></name><name><surname>Hubbard</surname><given-names>W</given-names></name><name><surname>Jackel</surname><given-names>LD</given-names></name></person-group><article-title>Backpropagation applied to handwritten zip code recognition</article-title><source>Neural Computation</source><volume>1</volume><fpage>541</fpage><lpage>551</lpage><year>1989</year><pub-id pub-id-type="doi">10.1162/neco.1989.1.4.541</pub-id></element-citation></ref>
<ref id="b22-mco-0-0-1932"><label>22</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Serre</surname><given-names>T</given-names></name><name><surname>Wolf</surname><given-names>L</given-names></name><name><surname>Bileschi</surname><given-names>S</given-names></name><name><surname>Riesenhuber</surname><given-names>M</given-names></name><name><surname>Poggio</surname><given-names>T</given-names></name></person-group><article-title>Robust object recognition with cortex-like mechanisms</article-title><source>IEEE Trans Pattern Anal Mach Intell</source><volume>29</volume><fpage>411</fpage><lpage>426</lpage><year>2007</year><pub-id pub-id-type="doi">10.1109/TPAMI.2007.56</pub-id><pub-id pub-id-type="pmid">17224612</pub-id></element-citation></ref>
<ref id="b23-mco-0-0-1932"><label>23</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wiatowski</surname><given-names>T</given-names></name><name><surname>B&#x00F6;lcskei</surname><given-names>H</given-names></name></person-group><article-title>A mathematical theory of deep convolutional neural networks for feature extraction</article-title><source>IEEE Trans Inf Theory</source><volume>64</volume><fpage>1845</fpage><lpage>1866</lpage><year>2018</year><pub-id pub-id-type="doi">10.1109/TIT.2017.2756880</pub-id></element-citation></ref>
<ref id="b24-mco-0-0-1932"><label>24</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Srivastava</surname><given-names>N</given-names></name><name><surname>Hinton</surname><given-names>G</given-names></name><name><surname>Krizhevsky</surname><given-names>A</given-names></name><name><surname>Sutskever</surname><given-names>I</given-names></name><name><surname>Salakhutdinov</surname><given-names>R</given-names></name></person-group><article-title>Dropout: A simple way to prevent neural networks from overfitting</article-title><source>J Mach Learn Res</source><volume>15</volume><fpage>1929</fpage><lpage>1958</lpage><year>2014</year></element-citation></ref>
<ref id="b25-mco-0-0-1932"><label>25</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nowlan</surname><given-names>SJ</given-names></name><name><surname>Hinton</surname><given-names>GE</given-names></name></person-group><article-title>Simplifying neural networks by soft weight-sharing</article-title><source>Neural Comput</source><volume>4</volume><fpage>473</fpage><lpage>493</lpage><year>1992</year><pub-id pub-id-type="doi">10.1162/neco.1992.4.4.473</pub-id></element-citation></ref>
<ref id="b26-mco-0-0-1932"><label>26</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bengio</surname><given-names>Y</given-names></name></person-group><article-title>Learning deep architectures for AI</article-title><source>Found Trends Mach Learn</source><volume>2</volume><fpage>1</fpage><lpage>127</lpage><year>2009</year><pub-id pub-id-type="doi">10.1561/2200000006</pub-id></element-citation></ref>
<ref id="b27-mco-0-0-1932"><label>27</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mutch</surname><given-names>J</given-names></name><name><surname>Lowe</surname><given-names>DG</given-names></name></person-group><article-title>Object class recognition and localization using sparse features with limited receptive fields</article-title><source>Int J Comput Vision</source><volume>80</volume><fpage>45</fpage><lpage>57</lpage><year>2008</year><pub-id pub-id-type="doi">10.1007/s11263-007-0118-0</pub-id></element-citation></ref>
<ref id="b28-mco-0-0-1932"><label>28</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Neal</surname><given-names>RM</given-names></name></person-group><article-title>Connectionist learning of belief networks</article-title><source>Artificial Intell</source><volume>56</volume><fpage>71</fpage><lpage>113</lpage><year>1992</year><pub-id pub-id-type="doi">10.1016/0004-3702(92)90065-6</pub-id></element-citation></ref>
<ref id="b29-mco-0-0-1932"><label>29</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ciresan</surname><given-names>D</given-names></name><name><surname>Meier</surname><given-names>U</given-names></name><name><surname>Masci</surname><given-names>J</given-names></name><name><surname>Gambardella</surname><given-names>LM</given-names></name><name><surname>Schmidhuber</surname><given-names>J</given-names></name></person-group><article-title>Flexible, high performance convolutional neural networks for image classification</article-title><source>IJCAI Proc Int Joint Conf Artificial Intell</source><volume>22</volume><fpage>1237</fpage><lpage>1242</lpage><year>2011</year></element-citation></ref>
<ref id="b30-mco-0-0-1932"><label>30</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Scherer</surname><given-names>D</given-names></name><name><surname>M&#x00FC;ller</surname><given-names>A</given-names></name><name><surname>Behnke</surname><given-names>S</given-names></name></person-group><chapter-title>Evaluation of pooling operations in convolutional architectures for object recognition</chapter-title><source>Artificial Neural Networks (ICANN) 2010. Lecture Notes in Computer Science</source><person-group person-group-type="editor"><name><surname>Diamantaras</surname><given-names>K</given-names></name><name><surname>Duch</surname><given-names>W</given-names></name><name><surname>Iliadis</surname><given-names>LS</given-names></name></person-group><publisher-name>Springer</publisher-name><publisher-loc>Berlin</publisher-loc><fpage>92</fpage><lpage>101</lpage><year>2010</year><pub-id pub-id-type="doi">10.1007/978-3-642-15825-4_10</pub-id></element-citation></ref>
<ref id="b31-mco-0-0-1932"><label>31</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname><given-names>FJ</given-names></name><name><surname>LeCun</surname><given-names>Y</given-names></name></person-group><article-title>Large-scale learning with SVM and convolutional nets for generic object categorization</article-title><source>Proc IEEE Comput Soc Conf Comput Vision Pattern Recognition</source><volume>1</volume><fpage>284</fpage><lpage>291</lpage><year>2006</year></element-citation></ref>
<ref id="b32-mco-0-0-1932"><label>32</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Jarrett</surname><given-names>K</given-names></name><name><surname>Kavukcuoglu</surname><given-names>K</given-names></name><name><surname>Ranzato</surname><given-names>MA</given-names></name><name><surname>LeCun</surname><given-names>Y</given-names></name></person-group><chapter-title>What is the best multi-stage architecture for object recognition?</chapter-title><source>2009 IEEE 12th International Conference on Computer Vision</source><publisher-name>ICCV 2009</publisher-name><publisher-loc>Kyoto, Japan</publisher-loc><fpage>2146</fpage><lpage>2153</lpage><year>2009</year><pub-id pub-id-type="doi">10.1109/ICCV.2009.5459469</pub-id></element-citation></ref>
<ref id="b33-mco-0-0-1932"><label>33</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Zheng</surname><given-names>Y</given-names></name><name><surname>Liu</surname><given-names>Q</given-names></name><name><surname>Chen</surname><given-names>E</given-names></name><name><surname>Ge</surname><given-names>Y</given-names></name><name><surname>Zhao</surname><given-names>JL</given-names></name></person-group><chapter-title>Time series classification using multi-channels deep convolutional neural networks</chapter-title><source>Web-Age Information Management. WAIM 2014</source><person-group person-group-type="editor"><name><surname>Li</surname><given-names>F</given-names></name><name><surname>Li</surname><given-names>G</given-names></name><name><surname>Hwang</surname><given-names>S</given-names></name><name><surname>Yao</surname><given-names>B</given-names></name><name><surname>Zhang</surname><given-names>Z</given-names></name></person-group><publisher-name>Springer</publisher-name><publisher-loc>Cham</publisher-loc><fpage>298</fpage><lpage>310</lpage><year>2014</year></element-citation></ref>
<ref id="b34-mco-0-0-1932"><label>34</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mnih</surname><given-names>V</given-names></name><name><surname>Kavukcuoglu</surname><given-names>K</given-names></name><name><surname>Silver</surname><given-names>D</given-names></name><name><surname>Rusu</surname><given-names>AA</given-names></name><name><surname>Veness</surname><given-names>J</given-names></name><name><surname>Bellemare</surname><given-names>MG</given-names></name><name><surname>Graves</surname><given-names>A</given-names></name><name><surname>Riedmiller</surname><given-names>M</given-names></name><name><surname>Fidjeland</surname><given-names>AK</given-names></name><name><surname>Ostrovski</surname><given-names>G</given-names></name><etal/></person-group><article-title>Human-level control through deep reinforcement learning</article-title><source>Nature</source><volume>518</volume><fpage>529</fpage><lpage>533</lpage><year>2015</year><pub-id pub-id-type="doi">10.1038/nature14236</pub-id><pub-id pub-id-type="pmid">25719670</pub-id></element-citation></ref>
<ref id="b35-mco-0-0-1932"><label>35</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Szegedy</surname><given-names>C</given-names></name><name><surname>Liu</surname><given-names>W</given-names></name><name><surname>Jia</surname><given-names>Y</given-names></name><name><surname>Sermanet</surname><given-names>P</given-names></name><name><surname>Reed</surname><given-names>S</given-names></name><name><surname>Anguelov</surname><given-names>D</given-names></name><etal/></person-group><article-title>Going deeper with convolutions</article-title><source>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source><fpage>1</fpage><lpage>9</lpage><year>2015</year></element-citation></ref>
<ref id="b36-mco-0-0-1932"><label>36</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Glorot</surname><given-names>X</given-names></name><name><surname>Bordes</surname><given-names>A</given-names></name><name><surname>Bengio</surname><given-names>Y</given-names></name></person-group><article-title>Deep sparse rectifier neural networks</article-title><source>Proc Fourteenth Int Conf Artificial Intell Stat</source><fpage>315</fpage><lpage>323</lpage><year>2011</year></element-citation></ref>
<ref id="b37-mco-0-0-1932"><label>37</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nair</surname><given-names>V</given-names></name><name><surname>Hinton</surname><given-names>G</given-names></name></person-group><article-title>Rectified linear units improve restricted Boltzmann machines</article-title><source>Proc Int Conf Mach Learn</source><fpage>807</fpage><lpage>814</lpage><year>2010</year></element-citation></ref>
<ref id="b38-mco-0-0-1932"><label>38</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Krizhevsky</surname><given-names>A</given-names></name><name><surname>Sutskever</surname><given-names>I</given-names></name><name><surname>Hinton</surname><given-names>GE</given-names></name></person-group><article-title>ImageNet classification with deep convolutional neural networks</article-title><source>Adv Neural Inf Proc Syst</source><fpage>1097</fpage><lpage>1105</lpage><year>2012</year></element-citation></ref>
<ref id="b39-mco-0-0-1932"><label>39</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Bridle</surname><given-names>JS</given-names></name></person-group><chapter-title>Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition</chapter-title><source>Neurocomputing</source><person-group person-group-type="editor"><name><surname>Souli&#x00E9;</surname><given-names>FF</given-names></name><name><surname>H&#x00E9;rault</surname><given-names>J</given-names></name></person-group><publisher-name>Springer</publisher-name><publisher-loc>Berlin</publisher-loc><year>1990</year><pub-id pub-id-type="doi">10.1007/978-3-642-76153-9_28</pub-id></element-citation></ref>
<ref id="b40-mco-0-0-1932"><label>40</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>He</surname><given-names>K</given-names></name><name><surname>Zhang</surname><given-names>X</given-names></name><name><surname>Ren</surname><given-names>S</given-names></name><name><surname>Sun</surname><given-names>J</given-names></name></person-group><article-title>Deep residual learning for image recognition</article-title><source>arXiv:1512.03385</source><year>2015</year></element-citation></ref>
<ref id="b41-mco-0-0-1932"><label>41</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Russakovsky</surname><given-names>O</given-names></name><name><surname>Deng</surname><given-names>J</given-names></name><name><surname>Su</surname><given-names>H</given-names></name><name><surname>Krause</surname><given-names>J</given-names></name><name><surname>Satheesh</surname><given-names>S</given-names></name><name><surname>Ma</surname><given-names>S</given-names></name><name><surname>Huang</surname><given-names>Z</given-names></name><name><surname>Karpathy</surname><given-names>A</given-names></name><name><surname>Khosla</surname><given-names>A</given-names></name><name><surname>Bernstein</surname><given-names>M</given-names></name><name><surname>Berg</surname><given-names>AC</given-names></name></person-group><article-title>ImageNet large scale visual recognition challenge</article-title><source>Int J Comput Vision</source><volume>115</volume><fpage>211</fpage><lpage>252</lpage><year>2015</year><pub-id pub-id-type="doi">10.1007/s11263-015-0816-y</pub-id></element-citation></ref>
<ref id="b42-mco-0-0-1932"><label>42</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Lin</surname><given-names>TY</given-names></name><name><surname>Maire</surname><given-names>M</given-names></name><name><surname>Belongie</surname><given-names>S</given-names></name><name><surname>Hays</surname><given-names>J</given-names></name><name><surname>Perona</surname><given-names>P</given-names></name><name><surname>Ramanan</surname><given-names>D</given-names></name><name><surname>Doll&#x00E1;r</surname><given-names>P</given-names></name><name><surname>Zitnick</surname><given-names>CL</given-names></name></person-group><chapter-title>Microsoft COCO: Common objects in context</chapter-title><source>European Conference on Computer Vision (ECCV)</source><fpage>740</fpage><lpage>755</lpage><publisher-name>Springer</publisher-name><publisher-loc>Cham</publisher-loc><year>2014</year></element-citation></ref>
<ref id="b43-mco-0-0-1932"><label>43</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kohavi</surname><given-names>R</given-names></name></person-group><article-title>A study of cross-validation and bootstrap for accuracy estimation and model selection</article-title><source>Proc Int Joint Conf Artificial Intell</source><volume>2</volume><fpage>1137</fpage><lpage>1143</lpage><year>1995</year></element-citation></ref>
<ref id="b44-mco-0-0-1932"><label>44</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Schaffer</surname><given-names>C</given-names></name></person-group><article-title>Selecting a classification method by cross-validation</article-title><source>Mach Learn</source><volume>13</volume><fpage>135</fpage><lpage>143</lpage><year>1993</year><pub-id pub-id-type="doi">10.1007/BF00993106</pub-id></element-citation></ref>
<ref id="b45-mco-0-0-1932"><label>45</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Refaeilzadeh</surname><given-names>P</given-names></name><name><surname>Tang</surname><given-names>L</given-names></name><name><surname>Liu</surname><given-names>H</given-names></name></person-group><chapter-title>Cross-validation</chapter-title><source>Encyclopedia of Database Systems</source><person-group person-group-type="editor"><name><surname>Liu</surname><given-names>L</given-names></name><name><surname>&#x00D6;zsu</surname><given-names>MT</given-names></name></person-group><publisher-name>Springer</publisher-name><publisher-loc>New York</publisher-loc><year>2009</year></element-citation></ref>
<ref id="b46-mco-0-0-1932"><label>46</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname><given-names>L</given-names></name><name><surname>Chen</surname><given-names>H</given-names></name><name><surname>Dou</surname><given-names>Q</given-names></name><name><surname>Qin</surname><given-names>J</given-names></name><name><surname>Heng</surname><given-names>PA</given-names></name></person-group><article-title>Automated melanoma recognition in dermoscopy images via very deep residual networks</article-title><source>IEEE Trans Med Imaging</source><volume>36</volume><fpage>994</fpage><lpage>1004</lpage><year>2017</year><pub-id pub-id-type="doi">10.1109/TMI.2016.2642839</pub-id><pub-id pub-id-type="pmid">28026754</pub-id></element-citation></ref>
<ref id="b47-mco-0-0-1932"><label>47</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Caruana</surname><given-names>R</given-names></name><name><surname>Lawrence</surname><given-names>S</given-names></name><name><surname>Giles</surname><given-names>CL</given-names></name></person-group><article-title>Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping</article-title><source>Adv Neural Inf Proc Syst</source><fpage>402</fpage><lpage>408</lpage><year>2001</year></element-citation></ref>
<ref id="b48-mco-0-0-1932"><label>48</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Baum</surname><given-names>EB</given-names></name><name><surname>Haussler</surname><given-names>D</given-names></name></person-group><article-title>What size net gives valid generalization?</article-title><source>Neural Comput</source><volume>1</volume><fpage>151</fpage><lpage>160</lpage><year>1989</year><pub-id pub-id-type="doi">10.1162/neco.1989.1.1.151</pub-id></element-citation></ref>
<ref id="b49-mco-0-0-1932"><label>49</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Geman</surname><given-names>S</given-names></name><name><surname>Bienenstock</surname><given-names>E</given-names></name><name><surname>Doursat</surname><given-names>R</given-names></name></person-group><article-title>Neural networks and the bias/variance dilemma</article-title><source>Neural Comput</source><volume>4</volume><fpage>1</fpage><lpage>58</lpage><year>1992</year><pub-id pub-id-type="doi">10.1162/neco.1992.4.1.1</pub-id></element-citation></ref>
<ref id="b50-mco-0-0-1932"><label>50</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Krogh</surname><given-names>A</given-names></name><name><surname>Hertz</surname><given-names>JA</given-names></name></person-group><article-title>A simple weight decay can improve generalization</article-title><source>Adv Neural Inf Proc Syst</source><volume>4</volume><fpage>950</fpage><lpage>957</lpage><year>1992</year></element-citation></ref>
<ref id="b51-mco-0-0-1932"><label>51</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Moody</surname><given-names>JE</given-names></name></person-group><article-title>The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems</article-title><source>Adv Neural Inf Proc Syst</source><volume>4</volume><fpage>847</fpage><lpage>854</lpage><year>1992</year></element-citation></ref>
<ref id="b52-mco-0-0-1932"><label>52</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname><given-names>J</given-names></name></person-group><article-title>A coefficient of agreement for nominal scales</article-title><source>Educ Psychol Meas</source><volume>20</volume><fpage>37</fpage><lpage>46</lpage><year>1960</year><pub-id pub-id-type="doi">10.1177/001316446002000104</pub-id></element-citation></ref>
<ref id="b53-mco-0-0-1932"><label>53</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Youden</surname><given-names>WJ</given-names></name></person-group><article-title>Index for rating diagnostic tests</article-title><source>Cancer</source><volume>3</volume><fpage>32</fpage><lpage>35</lpage><year>1950</year><pub-id pub-id-type="doi">10.1002/1097-0142(1950)3:1&#x003C;32::AID-CNCR2820030106&#x003E;3.0.CO;2-3</pub-id><pub-id pub-id-type="pmid">15405679</pub-id></element-citation></ref>
<ref id="b54-mco-0-0-1932"><label>54</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>McHugh</surname><given-names>ML</given-names></name></person-group><article-title>Interrater reliability: the kappa statistic</article-title><source>Biochem Med (Zagreb)</source><volume>22</volume><fpage>276</fpage><lpage>282</lpage><year>2012</year><pub-id pub-id-type="doi">10.11613/BM.2012.031</pub-id><pub-id pub-id-type="pmid">23092060</pub-id></element-citation></ref>
<ref id="b55-mco-0-0-1932"><label>55</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Miyagi</surname><given-names>Y</given-names></name><name><surname>Fujiwara</surname><given-names>K</given-names></name><name><surname>Oda</surname><given-names>T</given-names></name><name><surname>Miyake</surname><given-names>T</given-names></name><name><surname>Coleman</surname><given-names>RL</given-names></name></person-group><article-title>Development of new method for the prediction of clinical trial results using compressive sensing of artificial intelligence</article-title><source>J Biostat Biometric</source><volume>3</volume><fpage>202</fpage><year>2018</year></element-citation></ref>
<ref id="b56-mco-0-0-1932"><label>56</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Abbod</surname><given-names>MF</given-names></name><name><surname>Catto</surname><given-names>JW</given-names></name><name><surname>Linkens</surname><given-names>DA</given-names></name><name><surname>Hamdy</surname><given-names>FC</given-names></name></person-group><article-title>Application of artificial intelligence to the management of urological cancer</article-title><source>J Urol</source><volume>178</volume><fpage>1150</fpage><lpage>1156</lpage><year>2007</year><pub-id pub-id-type="doi">10.1016/j.juro.2007.05.122</pub-id><pub-id pub-id-type="pmid">17698099</pub-id></element-citation></ref>
<ref id="b57-mco-0-0-1932"><label>57</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Litjens</surname><given-names>G</given-names></name><name><surname>S&#x00E1;nchez</surname><given-names>CI</given-names></name><name><surname>Timofeeva</surname><given-names>N</given-names></name><name><surname>Hermsen</surname><given-names>M</given-names></name><name><surname>Nagtegaal</surname><given-names>I</given-names></name><name><surname>Kovacs</surname><given-names>I</given-names></name><name><surname>Hulsbergen-van de Kaa</surname><given-names>C</given-names></name><name><surname>Bult</surname><given-names>P</given-names></name><name><surname>van Ginneken</surname><given-names>B</given-names></name><name><surname>van der Laak</surname><given-names>J</given-names></name></person-group><article-title>Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis</article-title><source>Sci Rep</source><volume>6</volume><fpage>26286</fpage><year>2016</year><pub-id pub-id-type="doi">10.1038/srep26286</pub-id><pub-id pub-id-type="pmid">27212078</pub-id></element-citation></ref>
<ref id="b58-mco-0-0-1932"><label>58</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ortiz</surname><given-names>A</given-names></name><name><surname>Munilla</surname><given-names>J</given-names></name><name><surname>G&#x00F3;rriz</surname><given-names>JM</given-names></name><name><surname>Ram&#x00ED;rez</surname><given-names>J</given-names></name></person-group><article-title>Ensembles of deep learning architectures for the early diagnosis of the Alzheimer&#x0027;s disease</article-title><source>Int J Neural Syst</source><volume>26</volume><fpage>1650025</fpage><year>2016</year><pub-id pub-id-type="doi">10.1142/S0129065716500258</pub-id><pub-id pub-id-type="pmid">27478060</pub-id></element-citation></ref>
<ref id="b59-mco-0-0-1932"><label>59</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gil</surname><given-names>D</given-names></name><name><surname>Johnsson</surname><given-names>M</given-names></name><name><surname>Chamizo</surname><given-names>JMG</given-names></name><name><surname>Paya</surname><given-names>AS</given-names></name><name><surname>Fernandez</surname><given-names>DR</given-names></name></person-group><article-title>Application of artificial neural networks in the diagnosis of urological disfunctions</article-title><source>Expert Syst Appl</source><volume>36</volume><fpage>5754</fpage><lpage>5760</lpage><year>2009</year><pub-id pub-id-type="doi">10.1016/j.eswa.2008.06.065</pub-id></element-citation></ref>
<ref id="b60-mco-0-0-1932"><label>60</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sim&#x00F5;es</surname><given-names>PW</given-names></name><name><surname>Izumi</surname><given-names>NB</given-names></name><name><surname>Casagrande</surname><given-names>RS</given-names></name><name><surname>Venson</surname><given-names>R</given-names></name><name><surname>Veronezi</surname><given-names>CD</given-names></name><name><surname>Moretti</surname><given-names>GP</given-names></name><name><surname>da Rocha</surname><given-names>EL</given-names></name><name><surname>Cechinel</surname><given-names>C</given-names></name><name><surname>Ceretta</surname><given-names>LB</given-names></name><name><surname>Comunello</surname><given-names>E</given-names></name><etal/></person-group><article-title>Classification of images acquired with colposcopy using artificial neural networks</article-title><source>Cancer Inform</source><volume>13</volume><fpage>119</fpage><lpage>124</lpage><year>2014</year><pub-id pub-id-type="doi">10.4137/CIN.S17948</pub-id><pub-id pub-id-type="pmid">25374454</pub-id></element-citation></ref>
<ref id="b61-mco-0-0-1932"><label>61</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sato</surname><given-names>M</given-names></name><name><surname>Horie</surname><given-names>K</given-names></name><name><surname>Hara</surname><given-names>A</given-names></name><name><surname>Miyamoto</surname><given-names>Y</given-names></name><name><surname>Kurihara</surname><given-names>K</given-names></name><name><surname>Tomio</surname><given-names>K</given-names></name><name><surname>Yokota</surname><given-names>H</given-names></name></person-group><article-title>Application of deep learning to the classification of images from colposcopy</article-title><source>Oncol Lett</source><volume>15</volume><fpage>3518</fpage><lpage>3523</lpage><year>2018</year><pub-id pub-id-type="pmid">29456725</pub-id></element-citation></ref>
<ref id="b62-mco-0-0-1932"><label>62</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Trebeschi</surname><given-names>S</given-names></name><name><surname>van Griethuysen</surname><given-names>JJM</given-names></name><name><surname>Lambregts</surname><given-names>DMJ</given-names></name><name><surname>Lahaye</surname><given-names>MJ</given-names></name><name><surname>Parmar</surname><given-names>C</given-names></name><name><surname>Bakers</surname><given-names>FCH</given-names></name><name><surname>Peters</surname><given-names>NHGM</given-names></name><name><surname>Beets-Tan</surname><given-names>RGH</given-names></name><name><surname>Aerts</surname><given-names>HJWL</given-names></name></person-group><article-title>Deep learning for fully-automated localization and segmentation of rectal cancer on multiparametric MR</article-title><source>Sci Rep</source><volume>7</volume><fpage>5301</fpage><year>2017</year><pub-id pub-id-type="doi">10.1038/s41598-017-05728-9</pub-id><pub-id pub-id-type="pmid">28706185</pub-id></element-citation></ref>
<ref id="b63-mco-0-0-1932"><label>63</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Olczak</surname><given-names>J</given-names></name><name><surname>Fahlberg</surname><given-names>N</given-names></name><name><surname>Maki</surname><given-names>A</given-names></name><name><surname>Razavian</surname><given-names>AS</given-names></name><name><surname>Jilert</surname><given-names>A</given-names></name><name><surname>Stark</surname><given-names>A</given-names></name><name><surname>Sk&#x00F6;ldenberg</surname><given-names>O</given-names></name><name><surname>Gordon</surname><given-names>M</given-names></name></person-group><article-title>Artificial intelligence for analyzing orthopedic trauma radiographs</article-title><source>Acta Orthop</source><volume>88</volume><fpage>581</fpage><lpage>586</lpage><year>2017</year><pub-id pub-id-type="doi">10.1080/17453674.2017.1344459</pub-id><pub-id pub-id-type="pmid">28681679</pub-id></element-citation></ref>
<ref id="b64-mco-0-0-1932"><label>64</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Khosravi</surname><given-names>P</given-names></name><name><surname>Kazemi</surname><given-names>E</given-names></name><name><surname>Zhan</surname><given-names>Q</given-names></name><name><surname>Toschi</surname><given-names>M</given-names></name><name><surname>Makmsten</surname><given-names>J</given-names></name><name><surname>Hickman</surname><given-names>C</given-names></name><name><surname>Meseguer</surname><given-names>M</given-names></name><name><surname>Rosenwaks</surname><given-names>Z</given-names></name><name><surname>Elemento</surname><given-names>O</given-names></name><name><surname>Zaninovic</surname><given-names>N</given-names></name><name><surname>Hajirasouliha</surname><given-names>I</given-names></name></person-group><article-title>Robust automated assessment of human blastocyst quality using deep learning</article-title><source>bioRxiv 394882</source><year>2018</year></element-citation></ref>
<ref id="b65-mco-0-0-1932"><label>65</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Miyagi</surname><given-names>Y</given-names></name><name><surname>Habara</surname><given-names>T</given-names></name><name><surname>Hirata</surname><given-names>R</given-names></name><name><surname>Hayashi</surname><given-names>N</given-names></name></person-group><article-title>Feasibility of artificial intelligence for predicting live birth without aneuploidy from a blastocyst image</article-title><source>Reprod Med Biol</source><volume>18</volume><fpage>204</fpage><lpage>211</lpage><year>2019</year><pub-id pub-id-type="doi">10.1002/rmb2.12284</pub-id><pub-id pub-id-type="pmid">30996684</pub-id></element-citation></ref>
<ref id="b66-mco-0-0-1932"><label>66</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Miyagi</surname><given-names>Y</given-names></name><name><surname>Habara</surname><given-names>T</given-names></name><name><surname>Hirata</surname><given-names>R</given-names></name><name><surname>Hayashi</surname><given-names>N</given-names></name></person-group><article-title>Feasibility of deep learning for predicting live birth from a blastocyst image in patients classified by age</article-title><source>Reprod Med Biol</source><volume>18</volume><fpage>190</fpage><lpage>203</lpage><year>2019</year><pub-id pub-id-type="doi">10.1002/rmb2.12284</pub-id><pub-id pub-id-type="pmid">30996683</pub-id></element-citation></ref>
<ref id="b67-mco-0-0-1932"><label>67</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sideri</surname><given-names>M</given-names></name><name><surname>Garutti</surname><given-names>P</given-names></name><name><surname>Costa</surname><given-names>S</given-names></name><name><surname>Cristiani</surname><given-names>P</given-names></name><name><surname>Schincaglia</surname><given-names>P</given-names></name><name><surname>Sassoli de Bianchi</surname><given-names>P</given-names></name><name><surname>Naldoni</surname><given-names>C</given-names></name><name><surname>Bucchi</surname><given-names>L</given-names></name></person-group><article-title>Accuracy of colposcopically directed biopsy: Results from an online quality assurance programme for colposcopy in a population-based cervical screening setting in Italy</article-title><source>BioMed Res Int</source><volume>2015</volume><fpage>614035</fpage><year>2015</year><pub-id pub-id-type="doi">10.1155/2015/614035</pub-id><pub-id pub-id-type="pmid">26180805</pub-id></element-citation></ref>
<ref id="b68-mco-0-0-1932"><label>68</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sideri</surname><given-names>M</given-names></name><name><surname>Spolti</surname><given-names>N</given-names></name><name><surname>Spinaci</surname><given-names>L</given-names></name><name><surname>Sanvito</surname><given-names>F</given-names></name><name><surname>Ribaldone</surname><given-names>R</given-names></name><name><surname>Surico</surname><given-names>N</given-names></name><name><surname>Bucchi</surname><given-names>L</given-names></name></person-group><article-title>Interobserver variability of colposcopic interpretations and consistency with final histologic results</article-title><source>J Low Genit Tract Dis</source><volume>8</volume><fpage>212</fpage><lpage>216</lpage><year>2004</year><pub-id pub-id-type="doi">10.1097/00128360-200407000-00009</pub-id><pub-id pub-id-type="pmid">15874866</pub-id></element-citation></ref>
<ref id="b69-mco-0-0-1932"><label>69</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Massad</surname><given-names>LS</given-names></name><name><surname>Jeronimo</surname><given-names>J</given-names></name><name><surname>Katki</surname><given-names>HA</given-names></name><name><surname>Schiffman</surname><given-names>M</given-names></name><collab collab-type="corp-author">National Institutes of Health/American Society for Colposcopy and Cervical Pathology Research Group</collab></person-group><article-title>The accuracy of colposcopic grading for detection of high grade cervical intraepithelial neoplasia</article-title><source>J Low Genit Tract Dis</source><volume>13</volume><fpage>137</fpage><lpage>144</lpage><year>2009</year><pub-id pub-id-type="doi">10.1097/LGT.0b013e31819308d4</pub-id><pub-id pub-id-type="pmid">19550210</pub-id></element-citation></ref>
<ref id="b70-mco-0-0-1932"><label>70</label><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>LeCun</surname><given-names>Y</given-names></name><name><surname>Haffner</surname><given-names>P</given-names></name><name><surname>Bottou</surname><given-names>L</given-names></name><name><surname>Bengio</surname><given-names>Y</given-names></name></person-group><chapter-title>Object recognition with gradient-based learning. In Shape, contour and grouping in computer vision</chapter-title><publisher-name>Springer</publisher-name><publisher-loc>Berlin, Heidelberg</publisher-loc><year>1999</year></element-citation></ref>
<ref id="b71-mco-0-0-1932"><label>71</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname><given-names>J</given-names></name><name><surname>Shen</surname><given-names>L</given-names></name><name><surname>Sun</surname><given-names>G</given-names></name></person-group><article-title>Squeeze-and-excitation networks</article-title><source>Proceedings of the IEEE conference on computer vision and pattern recognition</source><fpage>7132</fpage><lpage>7141</lpage><year>2018</year></element-citation></ref>
<ref id="b72-mco-0-0-1932"><label>72</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kudva</surname><given-names>V</given-names></name><name><surname>Prasad</surname><given-names>K</given-names></name><name><surname>Guruvare</surname><given-names>S</given-names></name></person-group><article-title>Automation of detection of cervical cancer using convolutional neural networks</article-title><source>Crit Rev Biomed Eng</source><volume>46</volume><fpage>135</fpage><lpage>145</lpage><year>2018</year><pub-id pub-id-type="doi">10.1615/CritRevBiomedEng.2018026019</pub-id><pub-id pub-id-type="pmid">30055530</pub-id></element-citation></ref>
<ref id="b73-mco-0-0-1932"><label>73</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Esteva</surname><given-names>A</given-names></name><name><surname>Kuprel</surname><given-names>B</given-names></name><name><surname>Novoa</surname><given-names>RA</given-names></name><name><surname>Ko</surname><given-names>J</given-names></name><name><surname>Swetter</surname><given-names>SM</given-names></name><name><surname>Blau</surname><given-names>HM</given-names></name><name><surname>Thrun</surname><given-names>S</given-names></name></person-group><article-title>Dermatologist-level classification of skin cancer with deep neural networks</article-title><source>Nature</source><volume>542</volume><fpage>115</fpage><lpage>118</lpage><year>2017</year><pub-id pub-id-type="doi">10.1038/nature21056</pub-id><pub-id pub-id-type="pmid">28117445</pub-id></element-citation></ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-mco-0-0-1932" position="float">
<label>Figure 1.</label>
<caption><p>The receiver operating characteristic curve of the best classifier for predicting high-grade squamous intraepithelial lesions. The area under the curve was 0.824&#x00B1;0.052 (mean &#x00B1; standard error), with a 95&#x0025; confidence interval of 0.721&#x2013;0.928.</p></caption>
<graphic xlink:href="mco-11-06-0583-g00.tif"/>
</fig>
<table-wrap id="tI-mco-0-0-1932" position="float">
<label>Table I.</label>
<caption><p>Architecture of the top classifier, which exhibited the highest accuracy.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Layers</th>
<th align="center" valign="bottom">Supplementations</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Convolution layer</td>
<td align="center" valign="top">Output channels; 64, Kernel size; 3&#x00D7;3</td>
</tr>
<tr>
<td align="left" valign="top">ReLU</td>
<td align="center" valign="top">N/A</td>
</tr>
<tr>
<td align="left" valign="top">Pooling layer</td>
<td align="center" valign="top">Kernel size; 2&#x00D7;2</td>
</tr>
<tr>
<td align="left" valign="top">Convolution layer</td>
<td align="center" valign="top">Output channels; 64, Kernel size; 3&#x00D7;3</td>
</tr>
<tr>
<td align="left" valign="top">ReLU</td>
<td align="center" valign="top">N/A</td>
</tr>
<tr>
<td align="left" valign="top">Pooling layer</td>
<td align="center" valign="top">Kernel size; 2&#x00D7;2</td>
</tr>
<tr>
<td align="left" valign="top">Flatten layer</td>
<td align="center" valign="top">N/A</td>
</tr>
<tr>
<td align="left" valign="top">Linear layer</td>
<td align="center" valign="top">Size; 2<sup>9</sup></td>
</tr>
<tr>
<td align="left" valign="top">ReLU</td>
<td align="center" valign="top">N/A</td>
</tr>
<tr>
<td align="left" valign="top">Linear layer</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">Softmax layer</td>
<td align="center" valign="top">N/A</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn1-mco-0-0-1932"><p>The convolutional neural network structures, which consisted of 11 layers of convolutional deep learning, were obtained. ReLU, rectified linear units.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tII-mco-0-0-1932" position="float">
<label>Table II.</label>
<caption><p>Characteristics of the 330 patients who underwent colposcopy and biopsy by gynecologic oncologists.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Patient characteristics</th>
<th align="center" valign="bottom">Pathological HSIL (n=213)</th>
<th align="center" valign="bottom">Pathological LSIL (n=97)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Age (years)</td>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Mean &#x00B1; SD</td>
<td align="center" valign="top">31.66&#x00B1;5.01</td>
<td align="center" valign="top">33.75&#x00B1;8.94</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Median</td>
<td align="center" valign="top">32</td>
<td align="center" valign="top">33</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Range</td>
<td align="center" valign="top">19-46</td>
<td align="center" valign="top">19-62</td>
</tr>
<tr>
<td align="left" valign="top">HPV</td>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Type 16 positive</td>
<td align="center" valign="top">75</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Type 18 positive</td>
<td align="center" valign="top">5</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Type 16 and 18 positive</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Positive, but not type 16 or 18</td>
<td align="center" valign="top">123</td>
<td align="center" valign="top">33</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Negative</td>
<td align="center" valign="top">6</td>
<td align="center" valign="top">6</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Not available</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">54</td>
</tr>
<tr>
<td align="left" valign="top">Colposcopic diagnosis</td>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;HSIL</td>
<td align="center" valign="top">177</td>
<td align="center" valign="top">22</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;LSIL</td>
<td align="center" valign="top">32</td>
<td align="center" valign="top">70</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Cervicitis</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">5</td>
</tr>
<tr>
<td align="left" valign="top">&#x00A0;&#x00A0;Invasive cancer</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">0</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn2-mco-0-0-1932"><p>HSIL, high-grade squamous intraepithelial lesions; LSIL, low-grade squamous intraepithelial lesions; SD, standard deviation.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tIII-mco-0-0-1932" position="float">
<label>Table III.</label>
<caption><p>Comparison between gynecologic oncologists and the top classifier using deep learning.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="bottom">Variable</th>
<th align="center" valign="bottom">Gynecologic oncologists</th>
<th align="center" valign="bottom">AI</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Accuracy</td>
<td align="center" valign="top">0.797 (247/310)</td>
<td align="center" valign="top">0.823 (51/62)</td>
</tr>
<tr>
<td align="left" valign="top">Sensitivity</td>
<td align="center" valign="top">0.831 (177/213)</td>
<td align="center" valign="top">0.800 (36/45)</td>
</tr>
<tr>
<td align="left" valign="top">Specificity</td>
<td align="center" valign="top">0.773 (75/97)</td>
<td align="center" valign="top">0.882 (15/17)</td>
</tr>
<tr>
<td align="left" valign="top">Positive predictive value</td>
<td align="center" valign="top">0.889 (177/199)</td>
<td align="center" valign="top">0.947 (36/38)</td>
</tr>
<tr>
<td align="left" valign="top">Negative predictive value</td>
<td align="center" valign="top">0.686 (70/102)</td>
<td align="center" valign="top">0.625 (15/24)</td>
</tr>
<tr>
<td align="left" valign="top">Youden&#x0027;s J index</td>
<td align="center" valign="top">0.604</td>
<td align="center" valign="top">0.682</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn3-mco-0-0-1932"><p>Bracketed data indicates the number of corresponding selected cases/the number of relevant cases. AI, artificial intelligence.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tIV-mco-0-0-1932" position="float">
<label>Table IV.</label>
<caption><p>Conventional colposcopy diagnosis and pathological results of the test data set.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="center" valign="bottom" colspan="2">Conventional colposcopy diagnosis</th>
</tr>
<tr>
<th/>
<th align="center" valign="bottom" colspan="2"><hr/></th>
</tr>
<tr>
<th align="left" valign="bottom">Lesion type</th>
<th align="center" valign="bottom">HSIL</th>
<th align="center" valign="bottom">LSIL</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Pathological HSIL</td>
<td align="center" valign="top">39</td>
<td align="center" valign="top">6</td>
</tr>
<tr>
<td align="left" valign="top">Pathological LSIL</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">15</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn4-mco-0-0-1932"><p>Cohen&#x0027;s Kappa coefficient was 0.691, P&#x003C;0.0001. HSIL, high-grade squamous intraepithelial lesions; LSIL, low-grade squamous intraepithelial lesions; AI, artificial intelligence.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tV-mco-0-0-1932" position="float">
<label>Table V.</label>
<caption><p>AI colposcopy diagnosis and pathological results of the test data set.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="center" valign="bottom" colspan="2">AI colposcopy diagnosis</th>
</tr>
<tr>
<th/>
<th align="center" valign="bottom" colspan="2"><hr/></th>
</tr>
<tr>
<th align="left" valign="bottom">Lesion type</th>
<th align="center" valign="bottom">HSIL</th>
<th align="center" valign="bottom">LSIL</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Pathological HSIL</td>
<td align="center" valign="top">36</td>
<td align="center" valign="top">9</td>
</tr>
<tr>
<td align="left" valign="top">Pathological LSIL</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">14</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn5-mco-0-0-1932"><p>Cohen&#x0027;s Kappa coefficient was 0.561, P&#x003C;0.0001. HSIL, high-grade squamous intraepithelial lesions; LSIL, low-grade squamous intraepithelial lesions; AI, artificial intelligence.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tVI-mco-0-0-1932" position="float">
<label>Table VI.</label>
<caption><p>Conventional colposcopy diagnosis and AI colposcopy diagnosis of the test data set.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="center" valign="bottom" colspan="2">AI colposcopy diagnosis</th>
</tr>
<tr>
<th/>
<th align="center" valign="bottom" colspan="2"><hr/></th>
</tr>
<tr>
<th/>
<th align="center" valign="bottom">HSIL</th>
<th align="center" valign="bottom">LSIL</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Conventional colposcopy HSIL</td>
<td align="center" valign="top">32</td>
<td align="center" valign="top">9</td>
</tr>
<tr>
<td align="left" valign="top">Conventional colposcopy LSIL</td>
<td align="center" valign="top">7</td>
<td align="center" valign="top">14</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn6-mco-0-0-1932"><p>Cohen&#x0027;s Kappa coefficient was 0.437, P&#x003C;0.0005. HSIL, high-grade squamous intraepithelial lesions; LSIL, low-grade squamous intraepithelial lesions; AI, artificial intelligence.</p></fn>
</table-wrap-foot>
</table-wrap>
</floats-group>
</article>
