Deep learning for pneumothorax diagnosis: a systematic review and meta-analysis

Takahiro Sugibayashi, Shannon L. Walston, Toshimasa Matsumoto, Yasuhito Mitsuyama, Yukio Miki, Daiju Ueda
European Respiratory Review 2023 32: 220259; DOI: 10.1183/16000617.0259-2022
Affiliations: 1) Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan (T. Sugibayashi, S.L. Walston, T. Matsumoto, Y. Mitsuyama, Y. Miki, D. Ueda); 2) Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan (T. Matsumoto, D. Ueda).

Correspondence: Daiju Ueda, ai.labo.ocu@gmail.com

Abstract

Background Deep learning (DL), a subset of artificial intelligence (AI), has been applied to pneumothorax diagnosis to aid physicians, but no meta-analysis has been performed.

Methods A search of multiple electronic databases through September 2022 was performed to identify studies that applied DL to pneumothorax diagnosis using imaging. A meta-analysis using a hierarchical model was performed to calculate the summary area under the curve (AUC) and pooled sensitivity and specificity for both DL and physicians. Risk of bias was assessed using a modified Prediction Model Study Risk of Bias Assessment Tool.

Results In 56 of the 63 primary studies, pneumothorax was identified from chest radiography. The total AUC was 0.97 (95% CI 0.96–0.98) for both DL and physicians. The total pooled sensitivity was 84% (95% CI 79–89%) for DL and 85% (95% CI 73–92%) for physicians and the pooled specificity was 96% (95% CI 94–98%) for DL and 98% (95% CI 95–99%) for physicians. More than half of the original studies (57%) had a high risk of bias.

Conclusions Our review found the diagnostic performance of DL models was similar to that of physicians, although the majority of studies had a high risk of bias. Further pneumothorax AI research is needed.

Shareable abstract

In this, the first systematic review and meta-analysis of pneumothorax diagnostic AIs, physicians and AI models showed comparable performance in diagnosing pneumothorax from chest radiographs. https://bit.ly/3JZeGN4

Introduction

Pneumothorax is defined as the presence of air in the pleural space, i.e. the space between the lungs and the chest wall [1, 2]. Pneumothorax is a common disease in the population, with an incidence of primary spontaneous pneumothorax of 7.4/100 000 per year in men and 1.2/100 000 per year in women and an incidence of secondary spontaneous pneumothorax of 6.3/100 000 per year in men and 2.0/100 000 per year in women [3]. In contrast to the benign clinical course of primary spontaneous pneumothorax, secondary spontaneous pneumothorax is a potentially life-threatening event [2]. Additionally, the recurrence rate is high: ∼30% in primary spontaneous pneumothorax and ∼40% in secondary spontaneous pneumothorax [4–7]. Pneumothorax is one of the conditions the American College of Radiology recommends should be communicated to the physician within minutes to avoid patient decompensation [8].

Chest radiography is the simplest and most common examination [9, 10], and pneumothorax is usually diagnosed in conjunction with the patient's history and clinical presentation [2]. Although errors or delays in diagnosis can harm the patient, the signs of pneumothorax on chest radiography are subtle and up to 20% of occult pneumothoraces are missed on examination [11]. One contributing factor is that the imaging workload far exceeds the capacity of available radiologists [12, 13]; computer-based approaches have therefore been developed to assist physicians in their daily work and are expected to help prevent missed cases.

Deep learning (DL) is a field of artificial intelligence (AI) that has advanced tremendously in medical imaging [14, 15], and with it the number of certified medical devices available for clinical practice has been increasing [16]. DL is formally defined as a computational model consisting of multiple processing layers that learns representations of data with multiple levels of abstraction [17]. Given raw data, DL develops the representations needed for pattern recognition on its own and does not require domain expertise to design data structures or feature extractors [14, 17]. Because DL learns the features important for classification by itself, rather than being directed by a human, it requires careful bias assessment and an accumulation of original articles for model training and evaluation [18].

This study is a systematic review and meta-analysis of 63 studies on the application of DL to pneumothorax diagnosis, comparing the diagnostic performance of DL and physicians for each modality. Studies in which physicians' pneumothorax diagnosis is supported by DL are examined separately. To date, there are no other meta-analyses of DL for pneumothorax diagnosis.

Methods

Study registration and guidelines

This systematic review was prospectively registered with PROSPERO (CRD42022351985). Our study followed the guidelines of the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA) [19, 20]. Two authors performed all screening, data collection, applicability assessments and bias assessments in duplicate (T.S. and D.U.), and a third independent reviewer was consulted in the event of a disagreement (T.M.).

Search strategy and study selection

The search strategy for identifying studies that developed and/or validated a DL model for pneumothorax diagnosis was developed with an information specialist. Original articles that included the words (or variations of) "artificial intelligence", "deep learning" or "neural networks" together with the word "pneumothorax" were included. Peer-reviewed studies in any language from inception to September 2022 were evaluated from the following databases: MEDLINE, Scopus, Web of Science, the Cochrane Central Register of Controlled Trials (CENTRAL) and IEEE Xplore (Institute of Electrical and Electronics Engineers/Institution of Engineering and Technology). Titles and abstracts were screened prior to full-text screening. Studies were included if they were primary research studies of pneumothorax diagnosis in humans that developed and/or validated a DL model; any target population, study setting or comparator group was eligible. Studies were excluded if they were conference abstracts or proceedings, letters to the editor, review articles, or segmentation- or detection-only studies. Excluded studies, including the reason for exclusion, were recorded in a PRISMA flow diagram (figure 1) [20].
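
As a concrete illustration of this search concept, the sketch below runs a PubMed-style rendering of the query through the rentrez R package. The field tags, truncation and the use of rentrez are our own illustrative assumptions, not the registered per-database search strings.

    # Illustrative only: a PubMed-style rendering of the search concept.
    # The query syntax below is an assumption for illustration, not the
    # registered per-database search string.
    library(rentrez)
    q <- paste(
      '("artificial intelligence"[TIAB] OR "deep learning"[TIAB]',
      'OR "neural network*"[TIAB]) AND pneumothorax[TIAB]'
    )
    hits <- entrez_search(db = "pubmed", term = q, retmax = 0)
    hits$count  # number of matching PubMed records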

FIGURE 1 Eligibility criteria. CENTRAL: Cochrane Central Register of Controlled Trials.

Data extraction

We extracted information including study design, sample size, comparator groups and numerical results into a predefined data sheet. Contingency tables were constructed using the available diagnostic performance information for each model. These were used to calculate summary area under the curve (AUC), sensitivity and specificity. All available contingency tables were included in the meta-analysis. The datasets involved in the development of a model were defined as the training set (for training the model), tuning set (for tuning hyperparameters) and validation test set (for estimating the performance of the model) [21].
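
To make this step concrete, the sketch below reconstructs per-study sensitivity and specificity, with exact 95% confidence intervals, from a single 2×2 contingency table; the counts are invented for illustration.

    # Minimal sketch: per-study sensitivity and specificity with exact
    # (Clopper-Pearson) 95% CIs from one 2x2 table. Counts are invented.
    tp <- 84; fn <- 16   # pneumothorax cases: detected vs. missed
    fp <- 8;  tn <- 192  # cases without pneumothorax: false vs. true negatives
    sens <- binom.test(tp, tp + fn)
    spec <- binom.test(tn, tn + fp)
    round(c(sensitivity = unname(sens$estimate), sens$conf.int), 3)
    round(c(specificity = unname(spec$estimate), spec$conf.int), 3)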

Statistical analysis

We estimated the diagnostic performance of both the DL models and physicians by carrying out a random-effects meta-analysis of studies providing internal and external validation contingency tables [22]. These contingency tables were used to construct hierarchical summary receiver operating characteristic (ROC) curves and to calculate pooled sensitivities and specificities, with the anticipation of a high level of heterogeneity [23]. Between-study heterogeneity was represented using the 95% prediction region of the hierarchical summary ROC curves. Statistical significance was defined as a p-value of <0.05. All calculations were performed using R version 4.0.0 with the metafor and meta4diag libraries [24].
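
A minimal sketch of this pooling step, assuming one row per extracted contingency table (counts invented), is shown below. The meta4diag package fits a bivariate random-effects model, a re-parameterisation of the hierarchical summary ROC model, and requires the separately installed INLA package.

    # Minimal sketch: bivariate random-effects meta-analysis of diagnostic
    # accuracy with meta4diag (requires INLA). Counts are invented.
    library(meta4diag)
    tables <- data.frame(
      TP = c(84, 40, 120), FP = c(8, 5, 14),
      FN = c(16, 9, 22),   TN = c(192, 110, 450)
    )
    fit <- meta4diag(data = tables)  # models logit sensitivity/specificity jointly
    summary(fit)                     # pooled sensitivity and specificity with 95% intervals
    SROC(fit)                        # summary ROC curve with credible and prediction regions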

Quality assessment

The Prediction Model Study Risk of Bias Assessment Tool (PROBAST) was used to assess bias and applicability of the included studies [18]. This tool evaluates bias across four domains (participants, predictors, outcomes and analysis) and then these domains are combined into an overall assessment. Our assessment of bias and applicability in the first domain was based on both the images used to develop the models and the patient population the models were tested on. We did not include domain 2 (predictors) in the assessment of bias or applicability. Details of modifications made to PROBAST are provided in supplementary table S1.

Publication bias

Publication bias was assessed using the effective sample size funnel plot described by Egger et al. [25].
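
The sketch below illustrates such an asymmetry assessment with metafor, using the log diagnostic odds ratio as the effect size and the inverse total sample size as the regression predictor; these modelling choices, and the counts, are our own illustrative assumptions rather than the exact published analysis.

    # Minimal sketch: funnel plot and Egger-type regression test for
    # asymmetry on the log diagnostic odds ratio (lnDOR). Counts are
    # invented, and predictor = "ninv" (inverse sample size) is an
    # assumption in the spirit of an effective-sample-size funnel plot.
    library(metafor)
    d <- data.frame(TP = c(84, 40, 120), FP = c(8, 5, 14),
                    FN = c(16, 9, 22),   TN = c(192, 110, 450))
    es  <- escalc(measure = "OR", ai = TP, bi = FP, ci = FN, di = TN, data = d)
    res <- rma(yi, vi, data = es)     # random-effects model on the lnDOR
    funnel(res)                       # funnel plot
    regtest(res, predictor = "ninv")  # regression test for funnel asymmetry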

Results

Study selection and characteristics

We identified 532 studies, of which 255 were duplicates. After screening, 63 studies were included in the systematic review and 32 studies were included in the meta-analysis (figure 1 and table 1). Among the 63 studies, 56 studies identified pneumothorax on chest radiography [26–81], four studies on computed tomography [82–85], one study on ECG [86], one study used chest radiography and photography using a smartphone [87], and one study used chest radiography and tabular data [88]. Six studies developed and internally tuned DLs [37, 52, 63, 67, 74, 76], 25 studies also internally tested their DLs [32, 33, 35, 38, 40, 41, 43, 45, 47, 48, 50, 55, 60, 65, 69, 70, 73, 75, 79–83, 85, 86] and 32 studies externally tested the DLs [26–31, 34, 36, 39, 42, 44, 46, 49, 51, 53, 54, 56–59, 61, 62, 64, 66, 68, 71, 72, 77, 78, 84, 87, 88].

TABLE 1 Study characteristics

Five studies compared the performance of DL with physicians: two studies compared DL with experts (not a resident or technologist) [42, 83], two studies compared DL with both experts and non-experts [43, 68] and one study compared DL with non-experts only [34]. Two studies compared the performance of DL with radiology reports written in daily clinical practice [42, 56]. Two studies included physician performance with and without DL assistance as a comparison group [44, 68]. Detailed physician characteristics are shown in supplementary table S4.

As for model development, to generate a reference standard for image labelling, 18 studies used expert consensus [27–33, 35–38, 49, 53–55, 71, 77, 83], two relied on the opinion of a single expert reader [76, 85], 16 used pre-existing radiological reports or other imaging modalities [34, 41, 43, 45, 46, 52, 60, 61, 67, 75, 78–82, 87], one study defined their reference standard as surgical confirmation (indicated for surgery) [86], 11 studies used mixed methods (any combination of the aforementioned) [40, 47, 48, 50, 51, 62, 63, 65, 69, 70, 73] and two studies did not report how their reference standard was generated [74, 88]. As for model testing, to generate a reference standard for image labelling, 26 studies used expert consensus [26–28, 30–33, 38, 39, 44, 51, 54–57, 61, 64, 66, 68, 71–73, 77, 80, 83, 84], two relied on the opinion of a single expert reader [58, 85], 11 used pre-existing radiological reports or other imaging modalities [35, 40, 41, 48, 50, 60, 79, 81, 82, 87, 88], one study defined their reference standard as surgical confirmation (indicated for surgery) [86], 12 studies used mixed methods (any combination of the aforementioned) [29, 34, 36, 42, 43, 46, 47, 49, 53, 59, 65, 69] and five studies did not report how their reference standard was generated [45, 62, 70, 75, 78].

Study participants

There was large variation in the number of participants represented by each dataset (median (interquartile range (IQR)) 5288 (516–30 805); range 100–538 390) (supplementary table S2). The proportion of participants with pneumothorax in each dataset also ranged widely (median (IQR) 17.2% (10.8–25.0%)). 23 studies did not describe the sex of the study participants [27, 31–33, 36–38, 55, 59, 62, 65, 69–71, 73, 76–78, 81, 82, 86–88] and 24 studies did not include age information [27, 31–33, 36–38, 55, 59, 62, 65, 69–71, 73, 74, 76–78, 81, 82, 86–88]. Detailed dataset characteristics are shown in supplementary table S2.

Model development

The size of the training (median (IQR) 17 265 (8540–86 524)), tuning (median (IQR) 1598 (924–3468)) and test (median (IQR) 1684 (575–3107)) datasets at the patient level varied widely (table 1). Two out of 50 (4%) studies that developed a model did not report the size of each dataset separately [40, 69]. In studies that performed external model validation, the median dataset size was 1137 (range 175–112 120). 17 studies included localisation of pneumothorax in model output to improve end-user interpretability [26–28, 30–33, 36–38, 40, 47, 56, 59, 68, 84, 85]. Detailed DL characteristics are shown in supplementary table S3.

Quality assessment

PROBAST assessment led to an overall rating of 36 (57%) studies as high risk of bias (figure 2). The main contributing factors were the absence of external validation and internal validation with small sample sizes. Five (8%) studies were judged to be at high risk of bias in the participant domain because of their inclusion and exclusion criteria.

FIGURE 2 Summary of Prediction Model Study Risk of Bias Assessment Tool (PROBAST) risk of bias.

Meta-analysis

We extracted 89 contingency tables from the 32 studies that provided sufficient information for pneumothorax classification [27–36, 38, 39, 42–46, 48, 51, 53–56, 58, 61, 68, 70, 71, 75, 78, 80, 87]: 68 contingency tables for reported DL performance and 21 for physician performance. Hierarchical summary ROC curves for all studies evaluating DL or physician performance are shown in figure 3. The total AUC was 0.97 (95% CI 0.96–0.98) for DL and 0.97 (95% CI 0.96–0.98) for physicians. The total pooled sensitivity was 84% (95% CI 79–89%) for DL and 85% (95% CI 73–92%) for physicians, and the pooled specificity was 96% (95% CI 94–98%) for DL and 98% (95% CI 95–99%) for physicians (table 2). Two studies reported physician performance with DL assistance [44, 68]: one showed no significant difference in specificity but a moderate increase in sensitivity and an increase in accuracy, while the other showed no significant difference in sensitivity or specificity but a slight increase in accuracy. With DL assistance, accuracy changed from 92–99% to 97–99%, sensitivity from 67–94% to 85–96% and specificity from 100% to 99–100% (table 3).

FIGURE 3 Hierarchical summary receiver operating characteristic (ROC) curves for all studies: a) deep learning models (68 tables) and b) physicians (21 tables). AUC: area under the curve.

TABLE 2 Pooled metrics in meta-analysis

TABLE 3 Performance summary of the study physicians with deep learning (DL)

Publication bias

We assessed publication bias by regression analysis of funnel plot asymmetry (supplementary figure S1), which suggested a high risk of publication bias (p<0.05).

Discussion

In our meta-analysis of DL for pneumothorax diagnosis, the diagnostic performance of DL and physicians was comparable. The total AUC was 0.97 (95% CI 0.96–0.98) for DL and 0.97 (95% CI 0.96–0.98) for physicians. The total pooled sensitivity was 84% (95% CI 79–89%) for DL and 85% (95% CI 73–92%) for physicians, and the pooled specificity was 96% (95% CI 94–98%) for DL and 98% (95% CI 95–99%) for physicians. To the best of our knowledge, this article is the first systematic review and meta-analysis of DL for pneumothorax diagnosis.

We found data investigating two possible clinical uses of diagnostic DL for pneumothorax during our review: using DL for triage and using DL as a second opinion. Although pneumothorax is regularly diagnosed in patients presenting to the emergency department, it is detected in a relatively low proportion of all radiographs performed. Hence, an easy and accurate screening tool that may help prioritise patients arriving at hospital is needed. Indeed, one such study reported a reduction in the reporting delay for pneumothorax [30] and two studies reported that reading times were shorter with AI assisting the physician than with the physician alone [26, 53]. Additionally, in intensive care units, chest radiographs are taken frequently and their reading is often labour intensive; the support of AI is expected to both improve the speed of reading and reduce the total workload [89]. Two papers reported data on the use of DL to complement physicians' decision making [44, 68]. Their small number precludes meta-analysis, although one study showed no significant difference in specificity but a moderate increase in sensitivity and an increase in accuracy, while the other showed no significant difference in sensitivity or specificity (table 3). Although further data on the performance of physicians supported by DL are required, these two works support the results obtained from this meta-analysis. There is one AI report on chest ultrasound, which is considered more sensitive than chest radiography, and this is a promising area for future research [90]. At present, AI models are most useful as a screening tool to determine the presence or absence of pneumothorax. They do not incorporate individual patients' circumstances and other medical conditions in management decisions, especially regarding treatment and follow-up, as physicians currently must. Whether additional AIs to measure detailed features of the pneumothorax (e.g. size or evidence of tension) are warranted, and how they could best integrate patients' clinical details, will be subjects for future studies.

Confounding factors in images can create bias in diagnostic imaging DL. About 10% (six out of 63) of the articles included in this study mentioned chest tubes as a confounding factor, indicating that DL can recognise tubes and that this can introduce a strong bias. In other words, a DL model may recognise a chest drain as a therapeutic intervention for pneumothorax and diagnose pneumothorax on that basis. Although physicians may also use such medical device information to suspect the presence of pneumothorax, a DL model with such biases may perform poorly when diagnosing pneumothorax prior to intervention. Whatever the magnitude of such confounding, its impact should be taken into account when evaluating a model for clinical use to prevent any detriment to the patient. One study reported that the influence of such confounding factors can be avoided by training the DL with annotations of the pneumothorax cavity [73]. External devices (e.g. chest tubes, central lines or indwelling pleural catheters) and patient features (e.g. skin folds and thickened pleura) may impact the results. These aspects need to be investigated in future research.

In this study, we found that the majority of the included articles were at high risk of bias according to PROBAST [18]. One reason for this is that medical DL research sits at the intersection of medicine and engineering, each with different priorities; our included articles comprised both medical and engineering papers, and PROBAST is only one method of evaluating bias, from a medical perspective. For medical researchers, medical DL must first adhere to the "do no harm" principle for patients [91]. It is therefore important to evaluate medical DL in various validation settings, and this should be an important factor in preventing bias. On the other hand, DL, which benefits not only medicine but many other fields, is a product of the accumulated knowledge of engineering researchers. For engineering researchers, a key factor is that the DL must perform better than prior work, and the ingenuity needed to achieve this can constitute the novelty. From this perspective, it is reasonable to develop and validate a DL model using the same open dataset as prior studies in order to demonstrate improved performance. Open datasets have contributed greatly to the development of DL because they are easy to use, the results are highly reproducible and comparisons with previous studies are straightforward, even if the datasets are biased in a clinical sense. In addition, patient privacy issues may make it difficult to access each hospital's data. To make better clinical use of DL created by engineering research, medical researchers must verify biases from various perspectives, understand the characteristics of DL, and conduct research that benefits patients and reduces the daily clinical burden on physicians. Medical and engineering researchers should cooperate and share roles to advance medical care. About half (32 out of 63) of the included papers were externally validated, which is the most important factor in the evaluation of AI. The overall high risk of bias was largely driven by the fact that 35 of the 63 papers carried a risk of bias in the analysis domain. A more refined analysis design would allow for low risk of bias studies and a better understanding of pneumothorax diagnosis AI models.

The present study has several limitations. More than half (57%) of the included studies were classified as high risk of bias by PROBAST, limiting the conclusions that can be drawn from the meta-analysis. In addition, some papers lacked training and validation details, which contributed to the high risk of bias. Furthermore, while it is reasonable for multiple studies to use the same large open database for training and validation when comparing model performance, actual clinical practice involves a variety of different cohorts, which limits the applicability of our conclusions to significantly different cohorts. Finally, publication bias also affected the results of this study.

To provide better medical care to patients and reduce the burden on physicians, DL for pneumothorax diagnosis and physicians may complement each other to improve diagnostic accuracy in clinical practice. DL will be applied across many medical fields in the future; it is therefore important to build up evidence by integrating individual original research and capturing overall characteristics through systematic review and meta-analysis.

Points for clinical practice

  • Use of AI as an adjunct to physicians’ diagnosis of pneumothorax may have potential benefits and deserves further exploration.

Questions for future research

  • How much does AI assistance improve physicians' performance in pneumothorax diagnosis?

  • To what extent do confounding factors inherent in chest radiographs impact pneumothorax diagnostic AIs?

Supplementary material

Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.

Supplementary material ERR-0259-2022.SUPPLEMENT

Footnotes

  • Provenance: Submitted article, peer reviewed.

  • Data availability: Study protocol and metadata are available from the corresponding author.

  • Conflict of interest: The authors have nothing to disclose.

  • Received December 23, 2022.
  • Accepted March 16, 2023.
  • Copyright ©The authors 2023
http://creativecommons.org/licenses/by-nc/4.0/

This version is distributed under the terms of the Creative Commons Attribution Non-Commercial Licence 4.0. For commercial reproduction rights and permissions contact permissions@ersnet.org

References

1. O'Connor AR, Morgan WE. Radiological review of pneumothorax. BMJ 2005; 330: 1493–1497. doi:10.1136/bmj.330.7506.1493
2. Sahn SA, Heffner JE. Spontaneous pneumothorax. N Engl J Med 2000; 342: 868–874. doi:10.1056/NEJM200003233421207
3. Melton LJ 3rd, Hepper NG, Offord KP. Incidence of spontaneous pneumothorax in Olmsted County, Minnesota: 1950 to 1974. Am Rev Respir Dis 1979; 120: 1379–1382. doi:10.1164/arrd.1979.120.6.1379
4. Schramel FM, Postmus PE, Vanderschueren RG. Current aspects of spontaneous pneumothorax. Eur Respir J 1997; 10: 1372–1379. doi:10.1183/09031936.97.10061372
5. Light RW, O'Hara VS, Moritz TE, et al. Intrapleural tetracycline for the prevention of recurrent spontaneous pneumothorax. Results of a Department of Veterans Affairs cooperative study. JAMA 1990; 264: 2224–2230. doi:10.1001/jama.1990.03450170072025
6. Lippert HL, Lund O, Blegvad S, et al. Independent risk factors for cumulative recurrence rate after first spontaneous pneumothorax. Eur Respir J 1991; 4: 324–331. doi:10.1183/09031936.93.04030324
7. Videm V, Pillgram-Larsen J, Ellingsen O, et al. Spontaneous pneumothorax in chronic obstructive pulmonary disease: complications, treatment and recurrences. Eur J Respir Dis 1987; 71: 365–371.
8. Larson PA, Berland LL, Griffith B, et al. Actionable findings and the role of IT support: report of the ACR Actionable Reporting Work Group. J Am Coll Radiol 2014; 11: 552–558. doi:10.1016/j.jacr.2013.12.016
9. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Effects of ionizing radiation. 2008. https://digitallibrary.un.org/record/692844 Date last accessed: 25 March 2023.
10. Mettler FA Jr, Bhargavan M, Faulkner K, et al. Radiologic and nuclear medicine studies in the United States and worldwide: frequency, radiation dose, and comparison with other radiation sources – 1950–2007. Radiology 2009; 253: 520–531. doi:10.1148/radiol.2532082010
11. Brar MS, Bains I, Brunet G, et al. Occult pneumothoraces truly occult or simply missed: redux. J Trauma 2010; 69: 1335–1337. doi:10.1097/TA.0b013e3181f6f525
12. Nakajima Y, Yamada K, Imamura K, et al. Radiologist supply and workload: international comparison – Working Group of Japanese College of Radiology. Radiat Med 2008; 26: 455–465. doi:10.1007/s11604-008-0259-2
13. Rimmer A. Radiologist shortage leaves patient care at risk, warns royal college. BMJ 2017; 359: j4683. doi:10.1136/bmj.j4683
14. Esteva A, Robicquet A, Ramsundar B, et al. A guide to deep learning in healthcare. Nat Med 2019; 25: 24–29. doi:10.1038/s41591-018-0316-z
15. Ueda D, Shimazaki A, Miki Y. Technical and clinical overview of deep learning in radiology. Jpn J Radiol 2019; 37: 15–33. doi:10.1007/s11604-018-0795-3
16. Tadavarthi Y, Vey B, Krupinski E, et al. The state of radiology AI: considerations for purchase decisions and current market offerings. Radiol Artif Intell 2020; 2: e200004. doi:10.1148/ryai.2020200004
17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521: 436–444. doi:10.1038/nature14539
18. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019; 170: 51–58. doi:10.7326/M18-1376
19. McInnes MDF, Moher D, Thombs BD, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 2018; 319: 388–396. doi:10.1001/jama.2017.19163
20. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009; 339: b2535. doi:10.1136/bmj.b2535
21. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019; 1: e271–e297. doi:10.1016/S2589-7500(19)30123-2
22. Jackson D, Turner R. Power analysis for random-effects meta-analysis. Res Synth Methods 2017; 8: 290–302. doi:10.1002/jrsm.1240
23. Macaskill P, Gatsonis C, Deeks J, et al. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. London, Cochrane, 2010.
24. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, R Foundation for Statistical Computing, 2013.
25. Egger M, Davey Smith G, Schneider M, et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629–634. doi:10.1136/bmj.315.7109.629
26. Sung J, Park S, Lee SM, et al. Added value of deep learning-based detection system for multiple major findings on chest radiographs: a randomized crossover study. Radiology 2021; 299: 450–459. doi:10.1148/radiol.2021202818
27. Park S, Lee SM, Kim N, et al. Application of deep learning-based computer-aided detection system: detecting pneumothorax on chest radiograph after biopsy. Eur Radiol 2019; 29: 5341–5348. doi:10.1007/s00330-019-06130-x
28. Rudolph J, Huemmer C, Ghesu FC, et al. Artificial intelligence in chest radiography reporting accuracy: added clinical value in the emergency unit setting without 24/7 radiology coverage. Invest Radiol 2022; 57: 90–98. doi:10.1097/RLI.0000000000000813
29. Taylor AG, Mielke C, Mongan J. Automated detection of moderate and large pneumothorax on frontal chest X-rays using deep convolutional neural networks: a retrospective study. PLoS Med 2018; 15: e1002697. doi:10.1371/journal.pmed.1002697
30. Feng S, Liu Q, Patel A, et al. Automated pneumothorax triaging in chest X-rays in the New Zealand population using deep-learning algorithms. J Med Imaging Radiat Oncol 2022; 66: 1035–1043. doi:10.1111/1754-9485.13393
31. Kao CY, Lin CY, Chao CC, et al. Automated radiology alert system for pneumothorax detection on chest radiographs improves efficiency and diagnostic performance. Diagnostics 2021; 11: 1182. doi:10.3390/diagnostics11071182
32. Wang Q, Liu Q, Luo G, et al. Automated segmentation and diagnosis of pneumothorax on chest X-rays with fully convolutional multi-scale ScSE-DenseNet: a retrospective study. BMC Med Inform Decis Mak 2020; 20: Suppl. 14, 317. doi:10.1186/s12911-020-01325-5
33. Wang X, Yang S, Lan J, et al. Automatic segmentation of pneumothorax in chest radiographs based on a two-stage deep learning method. IEEE Trans Cognit Dev Syst 2022; 14: 205–218. doi:10.1109/TCDS.2020.3035572
34. Yi PH, Kim TK, Yu AC, et al. Can AI outperform a junior resident? Comparison of deep neural network to first-year radiology residents for identification of pneumothorax. Emerg Radiol 2020; 27: 367–375. doi:10.1007/s10140-020-01767-4
35. Majkowska A, Mittal S, Steiner DF, et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology 2020; 294: 421–431. doi:10.1148/radiol.2019191293
36. Mosquera C, Diaz FN, Binder F, et al. Chest x-ray automated triage: a semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four deep learning architectures. Comput Methods Programs Biomed 2021; 206: 106130. doi:10.1016/j.cmpb.2021.106130
37. Abedalla A, Abdullah M, Al-Ayyoub M, et al. Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures. PeerJ Comput Sci 2021; 7: e607. doi:10.7717/peerj-cs.607
38. Wang H, Gu H, Qin P, et al. CheXLocNet: automatic localization of pneumothorax in chest radiographs using deep convolutional neural networks. PLoS One 2020; 15: e0242013. doi:10.1371/journal.pone.0242013
39. Rudolph J, Schachtner B, Fink N, et al. Clinically focused multi-cohort benchmarking as a tool for external validation of artificial intelligence algorithm performance in basic chest radiography analysis. Sci Rep 2022; 12: 12764. doi:10.1038/s41598-022-16514-7
40. Zhou Y, Zhou T, Zhou T, et al. Contrast-attentive thoracic disease recognition with dual-weighting graph reasoning. IEEE Trans Med Imaging 2021; 40: 1196–1206. doi:10.1109/TMI.2021.3049498
41. Haq NF, Moradi M, Wang ZJ. A deep community based approach for large scale content based X-ray image retrieval. Med Image Anal 2021; 68: 101847. doi:10.1016/j.media.2020.101847
42. Hwang EJ, Hong JH, Lee KH, et al. Deep learning algorithm for surveillance of pneumothorax after lung biopsy: a multicenter diagnostic cohort study. Eur Radiol 2020; 30: 3660–3671. doi:10.1007/s00330-020-06771-3
43. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med 2018; 15: e1002686. doi:10.1371/journal.pmed.1002686
44. Hong W, Hwang EJ, Lee JH, et al. Deep learning for detecting pneumothorax on chest radiographs after needle biopsy: clinical implementation. Radiology 2022; 303: 433–441. doi:10.1148/radiol.211706
45. Kim S, Rim B, Choi S, et al. Deep learning in multi-class lung diseases' classification on chest X-ray images. Diagnostics 2022; 12: 915. doi:10.3390/diagnostics12040915
46. Thian YL, Ng D, Hallinan J, et al. Deep learning systems for pneumothorax detection on chest radiographs: a multicenter external validation study. Radiol Artif Intell 2021; 3: e200190. doi:10.1148/ryai.2021200190
47. Park S, Lee SM, Lee KH, et al. Deep learning-based detection system for multiclass lesions on chest radiographs: comparison with observer readings. Eur Radiol 2020; 30: 1359–1368. doi:10.1007/s00330-019-06532-x
48. Tian Y, Wang J, Yang W, et al. Deep multi-instance transfer learning for pneumothorax classification in chest X-ray images. Med Phys 2022; 49: 231–243. doi:10.1002/mp.15328
49. Niehues SM, Adams LC, Gaudin RA, et al. Deep-learning-based diagnosis of bedside chest X-ray in intensive care and emergency medicine. Invest Radiol 2021; 56: 525–534. doi:10.1097/RLI.0000000000000771
50. Liang X, Peng C, Qiu B, et al. Dense networks with relative location awareness for thorax disease identification. Med Phys 2019; 46: 2064–2073. doi:10.1002/mp.13516
51. Hallinan J, Feng M, Ng D, et al. Detection of pneumothorax with deep learning models: learning from radiologist labels vs natural language processing model generated labels. Acad Radiol 2022; 29: 1350–1358. doi:10.1016/j.acra.2021.09.013
52. Elkins A, Freitas FF, Sanz V. Developing an app to interpret chest X-rays to support the diagnosis of respiratory pathology with artificial intelligence. J Med Artif Intell 2020; 3: 8. doi:10.21037/jmai.2019.12.01
53. Nam JG, Kim M, Park J, et al. Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs. Eur Respir J 2021; 57: 2003061. doi:10.1183/13993003.03061-2020
54. Hwang EJ, Park S, Jin KN, et al. Development and validation of a deep learning-based automated detection algorithm for major thoracic diseases on chest radiographs. JAMA Netw Open 2019; 2: e191095. doi:10.1001/jamanetworkopen.2019.1095
55. Chen KC, Yu HR, Chen WS, et al. Diagnosis of common pulmonary diseases in children by X-ray images and deep learning. Sci Rep 2020; 10: 17374. doi:10.1038/s41598-020-73831-5
56. Gipson J, Tang V, Seah J, et al. Diagnostic accuracy of a commercially available deep-learning algorithm in supine chest radiographs following trauma. Br J Radiol 2022; 95: 20210979. doi:10.1259/bjr.20210979
57. Jin KN, Kim EY, Kim YJ, et al. Diagnostic effect of artificial intelligence solution for referable thoracic abnormalities on chest radiography: a multicenter respiratory outpatient diagnostic cohort study. Eur Radiol 2022; 32: 3469–3479. doi:10.1007/s00330-021-08397-5
58. Shin HJ, Son NH, Kim MJ, et al. Diagnostic performance of artificial intelligence approved for adults for the interpretation of pediatric chest radiographs. Sci Rep 2022; 12: 10215. doi:10.1038/s41598-022-14519-w
59. Seah J, Tang C, Buchlak QD, et al. Do comprehensive deep learning algorithms suffer from hidden stratification? A retrospective study on pneumothorax detection in chest radiography. BMJ Open 2021; 11: e053024. doi:10.1136/bmjopen-2021-053024
60. Chen B, Li J, Guo X, et al. DualCheXNet: dual asymmetric feature learning for thoracic disease classification in chest X-rays. Biomed Signal Process Control 2019; 53: 101554. doi:10.1016/j.bspc.2019.04.031
61. Thian YL, Ng DW, Hallinan J, et al. Effect of training data volume on performance of convolutional neural network pneumothorax classifiers. J Digit Imaging 2022; 35: 881–892. doi:10.1007/s10278-022-00594-y
62. Wang Y, Sun L, Jin Q. Enhanced diagnosis of pneumothorax with an improved real-time augmentation for imbalanced chest X-rays data based on DCNN. IEEE/ACM Trans Comput Biol Bioinform 2021; 18: 951–962. doi:10.1109/TCBB.2019.2911947
63. Lin CH, Wu JX, Li CM, et al. Enhancement of chest X-ray images to improve screening accuracy rate using iterated function system and multilayer fractional-order machine learning classifier. IEEE Photonics J 2020; 12: 4100218. doi:10.1109/JPHOT.2020.3013193
64. Choi SY, Park S, Kim M, et al. Evaluation of a deep learning-based computer-aided detection algorithm on chest radiographs: case–control study. Medicine 2021; 100: e25663. doi:10.1097/MD.0000000000025663
65. Iqbal T, Shaukat A, Akram MU, et al. A hybrid VDV model for automatic diagnosis of pneumothorax using class-imbalanced chest X-rays dataset. IEEE Access 2022; 10: 27670–27683. doi:10.1109/ACCESS.2022.3157316
66. Rueckel J, Trappmann L, Schachtner B, et al. Impact of confounding thoracic tubes and pleural dehiscence extent on artificial intelligence pneumothorax detection in chest radiographs. Invest Radiol 2020; 55: 792–798. doi:10.1097/RLI.0000000000000707
67. Kakkar B, Johri P, Kumar Y, et al. An IoMT-based federated and deep transfer learning approach to the detection of diverse chest diseases using chest X-rays. Hum Centric Comput Inf Sci 2022; 12: 24. doi:10.22967/HCIS.2022.12.024
68. Lee SY, Ha S, Jeon MG, et al. Localization-adjusted diagnostic performance and assistance effect of a computer-aided detection system for pneumothorax and consolidation. NPJ Digit Med 2022; 5: 107. doi:10.1038/s41746-022-00658-x
69. Shamrat F, Azam S, Karim A, et al. LungNet22: a fine-tuned model for multiclass classification and prediction of lung disease using X-ray images. J Pers Med 2022; 12: 680. doi:10.3390/jpm12050680
70. Hong M, Rim B, Lee H, et al. Multi-class classification of lung diseases using CNN models. Appl Sci 2021; 11: 9289. doi:10.3390/app11199289
71. Cho Y, Park B, Lee SM, et al. Optimal number of strong labels for curriculum learning with convolutional neural network to classify pulmonary abnormalities in chest radiographs. Comput Biol Med 2021; 136: 104750. doi:10.1016/j.compbiomed.2021.104750
72. Kim EY, Kim YJ, Choi WJ, et al. Performance of a deep-learning algorithm for referable thoracic abnormalities on chest radiographs: a multicenter study of a health screening cohort. PLoS One 2021; 16: e0246472. doi:10.1371/journal.pone.0246472
73. Rueckel J, Huemmer C, Fieselmann A, et al. Pneumothorax detection in chest radiographs: optimizing artificial intelligence system for accuracy and confounding bias reduction using in-image annotations in algorithm training. Eur Radiol 2021; 31: 7888–7900. doi:10.1007/s00330-021-07833-w
74. Luo JX, Liu WF, Yu L. Pneumothorax recognition neural network based on feature fusion of frontal and lateral chest X-ray images. IEEE Access 2022; 10: 53175–53187. doi:10.1109/ACCESS.2022.3175311
75. Mangalmurti Y, Wattanapongsakorn N. Practical machine learning techniques for COVID-19 detection using chest. Intelligent Automat Soft Comput 2022; 34: 733–752. doi:10.32604/iasc.2022.025073
76. Kitamura G, Deible C. Retraining an open-source pneumothorax detecting machine learning algorithm for improved performance to medical images. Clin Imaging 2020; 61: 15–19. doi:10.1016/j.clinimag.2020.01.008
77. Park S, Kim G, Oh Y, et al. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation. Nat Commun 2022; 13: 3848. doi:10.1038/s41467-022-31514-x
78. Lai KHA, Ma SK. Sensitivity and specificity of artificial intelligence with Microsoft Azure in detecting pneumothorax in emergency department: a pilot study. Hong Kong J Emerg Med 2023; 30: 8–15. doi:10.1177/10249079209489
79. Wang H, Jia H, Lu L, et al. Thorax-Net: an attention regularized deep neural network for classification of thoracic diseases on chest radiography. IEEE J Biomed Health Inform 2020; 24: 475–485. doi:10.1109/JBHI.2019.2928369
80. Cicero M, Bilbily A, Colak E, et al. Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs. Invest Radiol 2017; 52: 281–287. doi:10.1097/RLI.0000000000000341
81. Wang H, Wang S, Qin Z, et al. Triple attention learning for classification of 14 thoracic diseases using chest radiography. Med Image Anal 2021; 67: 101846. doi:10.1016/j.media.2020.101846
82. Draelos RL, Dov D, Mazurowski MA, et al. Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes. Med Image Anal 2021; 67: 101857. doi:10.1016/j.media.2020.101857
83. Li X, Thrall JH, Digumarthy SR, et al. Deep learning-enabled system for rapid pneumothorax screening on chest CT. Eur J Radiol 2019; 120: 108692. doi:10.1016/j.ejrad.2019.108692
84. Lyu W, Xia F, Zhou C, et al. [Application of deep learning-based chest CT auxiliary diagnosis system in emergency trauma patients]. Zhonghua Yi Xue Za Zhi 2021; 101: 481–486. doi:10.3760/cma.j.cn112137-20201117-03123
85. Röhrich S, Schlegl T, Bardach C, et al. Deep learning detection and quantification of pneumothorax in heterogeneous routine chest computed tomography. Eur Radiol Exp 2020; 4: 26. doi:10.1186/s41747-020-00152-7
86. Lee CC, Lin CS, Tsai CS, et al. A deep learning-based system capable of detecting pneumothorax via electrocardiogram. Eur J Trauma Emerg Surg 2022; 48: 3317–3326. doi:10.1007/s00068-022-01904-3
87. Kuo PC, Tsai CC, López DM, et al. Recalibration of deep learning models for abnormality detection in smartphone-captured chest radiograph. NPJ Digit Med 2021; 4: 25. doi:10.1038/s41746-021-00393-9
88. Li F, Shi JX, Yan L, et al. Lesion-aware convolutional neural network for chest radiograph classification. Clin Radiol 2021; 76: 155.e1–155.e14. doi:10.1016/j.crad.2020.08.027
89. Cho Y, Kim JS, Lim TH, et al. Detection of the location of pneumothorax in chest X-rays using small artificial neural networks and a simple training process. Sci Rep 2021; 11: 13054. doi:10.1038/s41598-021-92523-2
90. Chan KK, Joo DA, McRae AD, et al. Chest ultrasonography versus supine chest radiography for diagnosis of pneumothorax in trauma patients in the emergency department. Cochrane Database Syst Rev 2020; 7: CD013031. doi:10.1002/14651858.CD013031.pub2
91. Wiens J, Saria S, Sendak M, et al. Do no harm: a roadmap for responsible machine learning for health care. Nat Med 2019; 25: 1337–1340. doi:10.1038/s41591-019-0548-6