Abstract
Maintaining long-term lung allograft health in lung transplant recipients (LTRs) requires a fine balancing act: providing sufficient immunosuppression to reduce the risk of rejection while not over-immunosuppressing individuals and exposing them to the myriad immunosuppressant drug side-effects that cause morbidity and mortality. At present, lung transplant physicians only have limited and rather blunt tools available to assist them with this task. Although therapeutic drug monitoring provides clinically useful information about single time point and longitudinal exposure of LTRs to immunosuppressants, it lacks precision in determining the functional level of immunosuppression that an individual is experiencing. There is a significant gap in our ability to monitor lung allograft health and therefore tailor optimal personalised immunosuppression regimens. Molecular diagnostics performed on blood, bronchoalveolar lavage or lung tissue that can detect early signs of subclinical allograft injury, differentiate rejection from infection or distinguish cellular from humoral rejection could offer clinicians powerful tools in protecting lung allograft health. In this review, we look at the current evidence behind molecular monitoring in lung transplantation and ask if it is ready for routine clinical use. Although donor-derived cell-free DNA and tissue transcriptomics appear to be the techniques with the most immediate clinical potential, more robust data are required on their performance and additional clinical value beyond standard of care.
Tweetable abstract
A comprehensive review of potential diagnostic biomarkers aimed at improving the monitoring of lung transplant recipients. Gain insights into the underlying biology, advantages, limitations and technical pitfalls associated with each approach. https://bit.ly/46MSQ8v
Introduction
Long-term care of lung transplant recipients (LTRs) is sometimes akin to tightrope walking, requiring the careful management of a precarious balance between rejection, infection risks and immunosuppression side-effects. The lungs are at added risk due to their continuous direct exposure to the external environment.
Lung immune responses are therefore highly primed, as the lungs are the main “filter” of environmental insults, making it a particular challenge in lung transplantation (LT) to control such immune activation so that it does not trigger graft-directed alloimmunity [1]. This is why, despite improvements in donor organ optimisation, surgical techniques and early post-operative management, median survival after LT is estimated to be about 6 years, much shorter than for most other solid organ transplants [2, 3].
Acute rejection (AR), both cell-mediated and antibody-mediated, as well as infection, has been associated with an increased risk of developing chronic lung allograft dysfunction (CLAD) [4, 5]. CLAD is the primary obstacle to long-term survival, affecting about 50% of LTRs within 5 years post-transplantation [6]. Personalisation of immunosuppression regimens, providing enough suppression to reduce rejection risk but not so much as to increase infection risk, therefore seems critical to our attempts to improve long-term outcomes.
Routine monitoring of LTRs relies mainly on pulmonary function tests to identify allograft dysfunction. However, by the time a drop in lung function is measurable beyond the inherent 10% variability of the tests, graft injury is already established and may or may not be recoverable with treatment. Detecting early signs of immune activation that precede graft damage, or identifying initial markers of graft injury before a measurable loss of function is manifest, would help clinicians make pre-emptive changes to treatment that might better protect the organ's longevity.
An additional challenge clinicians face is that the cause of a fall in graft function is often unclear and may be multifactorial, such as co-existing rejection and infection. Fibreoptic bronchoscopy (FOB) with bronchoalveolar lavage (BAL) and transbronchial biopsy (TBBx) of lung tissue remains the current gold standard to investigate any decline in lung function [7, 8]. FOB enables clinicians to obtain microbiological and cytological samples and to analyse lung tissue histologically, but it is subject to many limitations. These include the risk of complications, pain or distress for the patient, and non-diagnostic TBBx results due to inadequate tissue sampling and/or inter-observer variability among reporting pathologists [9–11]. In addition, this tool is not well suited to monitoring stable patients.
Donor-specific antibody (DSA) testing of blood is another cornerstone of graft assessment. However, it too is subject to major variation in the interpretation of results and is frequently combined with imaging and lung tissue analysis to guide treatment decisions [12, 13]. Lung sampling, DSA testing and monitoring of serum immunosuppressive drug levels are not sufficient to deliver truly personalised medicine for LTRs. There is a clear need for more objective, sensitive and accurate molecular tools to monitor the immune systems of our patients if we are to improve long-term survival.
The development of molecular diagnostic tools for monitoring graft function has progressed significantly over recent years (figure 1). Some are already commercially available and have been incorporated into clinical practice in selected centres [14]. This review examines the range of molecular biomarkers available to help LT clinicians personalise the care of LTRs through early detection of AR, infection or graft injury predisposing to the development of CLAD, and to guide immunosuppression regimens. In each case, we ask what evidence for their efficacy is available and how close these assays are to routine clinical use.
The method used to carry out the literature search is described in the supplementary material.
Donor-derived cell-free DNA
Biology of cell-free DNA
Circulating cell-free DNA (cfDNA) consists of short fragments of DNA released by necrotic or apoptotic cells, making it a direct marker of tissue injury. cfDNA can also, in some cases, be secreted by healthy cells. The fragments circulate in the blood bound to histones, with a half-life of between 15 and 120 min [15]. cfDNA is degraded in blood by enzymes such as DNase I and is then eliminated by macrophages in the liver and spleen and by glomerular filtration in the kidneys. It is normally cleared rapidly, but when production outstrips clearance capacity, cfDNA accumulates [15]. In healthy individuals, cfDNA level correlates with age and sex [16]. After organ transplantation, circulating cfDNA comprises two different genomes, namely those of the donor and the recipient. Levels of donor-derived cfDNA (dd-cfDNA) increase in cases of damage to the allograft; hence the interest in this marker as a potential diagnostic. Several approaches have been developed to discriminate between the donor and recipient fractions of cfDNA. The first uses available population genomic data to define a set of common and informative single nucleotide polymorphisms (SNPs) that distinguish between the donor and recipient [17]. This technique involves next-generation sequencing, but the donor and recipient genomes themselves do not need to be sequenced. The second approach is targeted; for example, focusing on donor–recipient human leukocyte antigen (HLA) mismatches [18].
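As a highly simplified illustration of the SNP-based approach, the sketch below estimates the donor fraction using only the easiest case, namely SNPs at which the recipient is homozygous for one allele and the donor is homozygous for the other; the read counts are invented, and real assays model many more genotype combinations as well as sequencing error.

```python
# Highly simplified sketch of SNP-based donor fraction estimation. Only the
# easiest case is modelled: SNPs where the recipient is homozygous for one
# allele and the donor is homozygous for the other, so every read carrying the
# donor allele is donor-derived. Read counts are invented; real assays handle
# many more genotype combinations, sequencing error and statistical modelling.

def estimate_donor_fraction(snp_read_counts: list[tuple[int, int]]) -> float:
    """snp_read_counts: per-SNP (donor-allele reads, total reads) at informative sites."""
    donor_reads = sum(d for d, _ in snp_read_counts)
    total_reads = sum(t for _, t in snp_read_counts)
    return donor_reads / total_reads

reads = [(12, 1000), (9, 950), (15, 1100)]   # hypothetical counts at three informative SNPs
print(f"estimated dd-cfDNA fraction: {estimate_donor_fraction(reads):.2%}")  # ~1.18%
```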
dd-cfDNA is reported as a fraction of total cfDNA. Elevation of the recipient-derived fraction, due to injury of other organs, infection or recent surgery, can therefore confound the fractional value. To overcome this limitation, both absolute and fractional quantification should be considered during interpretation [19]. cfDNA is of dual interest: it acts as a marker of tissue injury and it also has pro-inflammatory properties [20, 21].
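A minimal sketch of this point follows, with invented values: a rise in recipient-derived cfDNA lowers the donor fraction even though the absolute amount of dd-cfDNA is unchanged.

```python
# Illustrative sketch only: computes the dd-cfDNA fraction alongside the
# absolute donor-derived level from hypothetical assay outputs. Values and
# variable names are invented and do not reflect any specific commercial assay.

def ddcfdna_metrics(donor_copies_per_ml: float, total_cfdna_copies_per_ml: float):
    """Return (fraction in %, absolute donor-derived copies per mL)."""
    fraction_pct = 100.0 * donor_copies_per_ml / total_cfdna_copies_per_ml
    return fraction_pct, donor_copies_per_ml

# Example: a rise in recipient-derived cfDNA (e.g. sepsis or recent surgery)
# dilutes the donor fraction even though absolute donor-derived cfDNA is unchanged.
baseline = ddcfdna_metrics(donor_copies_per_ml=50, total_cfdna_copies_per_ml=2_000)
diluted = ddcfdna_metrics(donor_copies_per_ml=50, total_cfdna_copies_per_ml=10_000)
print(baseline)  # (2.5 %, 50 copies/mL)
print(diluted)   # (0.5 %, 50 copies/mL) -> fraction falls, absolute level does not
```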
Practical considerations
Several commercial assays are available to measure dd-cfDNA in LTRs. Blood should be collected in proprietary tubes containing a preservative that prevents the release of DNA from blood cells, allowing the isolation of high-quality cfDNA. Depending on the test, samples should be sent at room temperature to a central laboratory or can be tested locally using commercial kits. The availability of results can therefore be impacted by shipping time and varies from 3 to 10 days.
Dynamics and interpretation of dd-cfDNA after LT
In LTRs without rejection or infection, dd-cfDNA levels are highly elevated during the first 2 weeks post-transplant, reflecting organ injury from ischaemia–reperfusion. Levels then decrease over time, reaching a steady state 2–4 months after LT [18, 22–24]. Compared with other solid organ transplants, dd-cfDNA remains elevated for longer in LTRs throughout the post-transplant course, with a higher mean fraction (dd-cfDNA/total cfDNA range 2–5%) than that observed in cardiac (range 0.06–0.6%) or renal transplantation (range 0.3–1.2%), but comparable to that of liver transplantation (range 3.3–5%) [25]. These higher levels in lung and liver transplantation could be related to the greater cell mass of the transplanted organ. Similarly, dd-cfDNA levels are lower in patients who underwent unilateral rather than bilateral LT, both in stable controls and in AR cases [14, 26, 27]. The interpretation of dd-cfDNA values in single LT differs between studies: some authors did not adjust values for lung mass [28, 29], while others corrected them by doubling single-LT values [24, 27].
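As noted above, some groups double the values obtained in single-LT recipients before interpretation; a toy sketch of that adjustment follows, using our own function name and values, and it should not be read as a validated correction.

```python
# Toy illustration of the lung-mass adjustment used by some groups [24, 27]:
# dd-cfDNA fractions from single-lung recipients are doubled before being
# compared against thresholds derived from bilateral recipients. This is a
# simplification for illustration, not a validated clinical rule.

def adjust_for_graft_mass(ddcfdna_fraction_pct: float, bilateral: bool) -> float:
    return ddcfdna_fraction_pct if bilateral else 2.0 * ddcfdna_fraction_pct

print(adjust_for_graft_mass(0.6, bilateral=True))   # 0.6 -> unchanged
print(adjust_for_graft_mass(0.6, bilateral=False))  # 1.2 -> doubled for single LT
```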
dd-cfDNA and AR
While Pedini et al. [30] recently suggested that dd-cfDNA could be a useful marker of graft injury as early as 15 days post-transplant, most studies examining dd-cfDNA in the diagnosis of AR have focused on the period beyond the first 45–60 days post-transplantation, given the high variability of dd-cfDNA in the first few weeks post-transplant [18, 23].
In cases of histologically proven acute cellular rejection (ACR), most studies found that levels of dd-cfDNA were higher than in stable controls [18, 23, 28, 31, 32]. Rosenheck et al. [32] reported that a threshold of ≥1% dd-cfDNA fraction, when combined with standard clinical assessment, had good diagnostic performance for AR, with a sensitivity of 89.1% (95% CI 76.2–100.0%), a specificity of 82.9% (95% CI 73.3–92.4%), a positive predictive value of 51.9% (95% CI 37.5–66.3%) and a negative predictive value of 97.3% (95% CI 94.3–100%), suggesting some utility in excluding AR but a lack of specificity in diagnosing it. These thresholds may nevertheless differ across assays and these results have not been confirmed in all studies.
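To make explicit how such predictive values arise from sensitivity, specificity and the prevalence of rejection in the tested population, the following sketch applies the standard formulas; the prevalence used is an arbitrary illustration rather than a figure from the cited study.

```python
# Illustrative sketch: predictive values follow from sensitivity, specificity
# and the prevalence of rejection in the population being tested. The prevalence
# used here is an arbitrary example, not taken from the cited study.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# With sensitivity 0.891 and specificity 0.829, a low prevalence of acute
# rejection drives the PPV down while keeping the NPV very high.
ppv, npv = predictive_values(sensitivity=0.891, specificity=0.829, prevalence=0.20)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # roughly PPV ~57%, NPV ~97%
```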
In a prospective cohort of 148 LTRs, levels of dd-cfDNA correlated with the histological grading of ACR [23]. This was confirmed in two other studies, with De Vlaminck et al. [31] suggesting that a threshold of 1% dd-cfDNA had an area under the curve (AUC) of 0.90 for the diagnosis of moderate-to-severe ACR (grade ≥A3) [18, 31]. The performance of dd-cfDNA in distinguishing between patients without rejection and patients with minimal to severe rejection (A0 versus A1–A4) was lower, with an AUC of 0.70. A similar result was found in a cohort of 38 LTRs, where dd-cfDNA had an AUC of 0.77 for diagnosing a composite tissue injury event (i.e. ACR, antibody-mediated rejection (AMR) or bronchiolitis obliterans syndrome (BOS)) [28]. The severity of lung function decline in ACR has also been shown to correlate with dd-cfDNA levels [23].
Given the limitations of the current consensus diagnostic criteria for AMR, the potential value of dd-cfDNA is being actively studied, but data remain scarce. Two studies found a greater increase in dd-cfDNA fraction and more prominent spirometric impairment in patients with AMR than in those with ACR [23, 33], which might reflect a greater degree of allograft damage in AMR [23]. Importantly, two studies found no increase in dd-cfDNA fraction in patients with a positive de novo DSA test (mean fluorescence intensity >1000) in the absence of AMR [18, 23].
Whether dd-cfDNA can enable early and accurate diagnosis of AR is of considerable clinical importance in order to avoid loss of graft function. However, it remains a matter of debate and may depend on the type of AR. Jang et al. [23] showed that a rise in dd-cfDNA fraction was detectable 1 month before the diagnosis of AR in only 18% of patients with ACR, but in 82% of patients with AMR.
In all these studies, dd-cfDNA level was also elevated in cases of pulmonary infection, with no statistical differences in dd-cfDNA fraction between infection and rejection. The main limitation to the diagnostic effectiveness of using dd-cfDNA to identify ACR is therefore its lack of specificity. Elevation of dd-cfDNA fraction alone is not sufficient to diagnose AR and must be combined with microbiological and DSA tests. Due to its high negative predictive value, dd-cfDNA might be a useful tool to rule out AR. However, one must also acknowledge that there is an overlap between the values observed in stable patients and those with AR, which can complicate the interpretation of results at any single time point [23]. A normal value of dd-cfDNA fraction might therefore be insufficient to rule out the diagnosis of low-grade ACR, especially in cases of high clinical suspicion.
Finally, the usefulness of monitoring dd-cfDNA to confirm the resolution of AR after treatment has been suggested in kidney transplantation, although sufficient data on this use are currently lacking in LTRs [26, 34, 35].
dd-cfDNA and lung infection
Most studies showed that the dd-cfDNA level was elevated in lung allograft infection, as in AR, but with no significant difference in levels between these two situations [23, 32]. Some studies showed contradictory results, with no correlation between the level of dd-cfDNA and the diagnosis of graft infection [31, 36].
The ability of dd-cfDNA to distinguish colonisation from active infection when a culture is positive is unclear, suggesting that these states form a continuum [28, 32]. In the study by Bazemore et al. [37], dd-cfDNA levels in cases with microbial isolates in BAL did not differ significantly according to whether FOB was performed for routine surveillance or was prompted by clinical symptoms.
Several factors have been associated with higher levels of dd-cfDNA when a pathogen is isolated from BAL, including identification of Pseudomonas aeruginosa, influenza viruses or Aspergillus fumigatus, raised C-reactive protein and the presence of concomitant abnormal histopathology [18, 37]. Since all three of the organisms mentioned above have been independently associated with an increased risk of CLAD development, this suggests that elevated dd-cfDNA levels may detect infection-associated subclinical injury, undetectable by histology, and therefore identify LTRs at higher risk of further lung function decline [37]. Finally, Pedini et al. [30] suggested that quantification of small fragments of dd-cfDNA (80–120 bp) higher than 3.7% might differentiate infection from rejection in the early post-operative period.
dd-cfDNA and CLAD
Several studies have suggested that dd-cfDNA tests have the potential to identify patients at higher risk of CLAD development. This could represent significant progress, as these patients could potentially benefit from earlier therapeutic interventions [28, 37]. In a prospective study of 99 LTRs, patients with primary graft dysfunction (PGD) had higher dd-cfDNA fractions than those without PGD (median (interquartile range, IQR): 12.2% (8.2–22.0) versus 8.5% (5.6–13.2), p=0.01) [29]. Furthermore, in PGD patients, levels of dd-cfDNA correlated with the subsequent risk of CLAD development (log OR (se) 1.38 (0.53), p=0.009) [29].
Agbor-Enoh et al. [24] studied 1145 samples from 106 LTRs and demonstrated that the average dd-cfDNA fraction in the first 3 months post-LT was highly variable from patient to patient, but correlated with the development of late allograft failure and all-cause mortality. Dividing their study population into three tertiles according to dd-cfDNA level in the first 3 months post-LT, they showed that LTRs in the upper tertile had a 6.6-fold higher risk of developing allograft failure (95% CI 1.6–19.9, p=0.007) compared to patients in the middle and low tertiles. Of note, patients in the upper tertile had other characteristics predictive of lower long-term lung function, including older age, higher frequency of donor smoking, single LT, AR and positive DSA.
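The sketch below illustrates the kind of tertile-based stratification described above, using invented average dd-cfDNA values; it reproduces the general idea only, not the statistical analysis of the cited study.

```python
# Illustrative sketch of tertile-based stratification of average dd-cfDNA
# fractions over the first 3 months post-LT, in the spirit of the study
# described above. The values are invented and no survival analysis is shown.
import numpy as np

avg_ddcfdna_pct = np.array([0.4, 0.7, 0.9, 1.1, 1.4, 1.8, 2.3, 3.0, 4.5])  # one average per patient
low_cut, high_cut = np.quantile(avg_ddcfdna_pct, [1 / 3, 2 / 3])

def tertile(value: float) -> str:
    if value <= low_cut:
        return "lower"
    if value <= high_cut:
        return "middle"
    return "upper"  # the group reported to carry the highest risk of allograft failure

print([tertile(v) for v in avg_ddcfdna_pct])
```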
Ju et al. [36] recently demonstrated that dd-cfDNA levels were higher in patients with established CLAD (median 1.07, IQR 0.98–1.31) than in stable patients (median 0.71, IQR 0.61–0.84), but there was overlap between levels in CLAD and AR, confirming similar previously published results [28, 38]. A study conducted on BAL samples showed that cfDNA could distinguish between stable, BOS and restrictive allograft syndrome patients and was associated with overall survival [39]. These results are interesting but do not add anything to the diagnosis provided by the diagnostic tools already clinically available. The unmet need is the possibility of early diagnosis of graft injury that pre-disposes to CLAD, which could lead to the early initiation of treatment to protect lung function.
dd-cfDNA and immunosuppression management
Currently, there is a lack of LT data on dd-cfDNA to guide immunosuppression. In kidney transplantation, it has been hypothesised that serial measures of dd-cfDNA could help tailor immunosuppression regimens, with persistent elevated levels of dd-cfDNA indicating incomplete recovery after an immunological or infectious event [40].
dd-cfDNA – current limitations
In conclusion, the sensitivity of dd-cfDNA for detecting lung allograft injury is very promising, but at present the test lacks specificity and cannot reliably distinguish between ACR, AMR and infection. Given its high negative predictive value, its current clinical utility lies mainly in providing reassurance that there is no subclinical allograft injury; to make a positive diagnosis of the cause of graft injury, it should always be combined with other tests, such as microbiological sampling and DSA testing at the very least. dd-cfDNA may therefore serve as a useful adjunct to existing approaches to allograft monitoring but, on the available data, cannot yet replace current diagnostic tools.
Torque teno virus
Biology
Torque teno viruses (TTVs) are ubiquitous small DNA viruses that belong to the family Anelloviridae. They are detectable in plasma in up to 90% of healthy individuals [41, 42]. TTV can also be identified in most tissues and cells, except in red blood cells and platelets [43]. No human pathogenicity of TTV has been established so far [42].
Practical considerations
Quantitative PCR can be used to assess TTV levels in the blood, enabling analysis of the presence and quantity of TTV in white blood cells, which are key reservoirs of the virus. Blood samples can be kept at room temperature. Several PCR assays have been developed to detect and quantify TTV, some of which are widely available commercially. Diagnostic sensitivity can vary depending on the primers used [42].
Dynamics after LT
After solid organ transplantation, TTV blood level increases under the influence of immunosuppressive therapy to reach a steady state that correlates with the intensity of immunosuppression [42, 44]. Görzer et al. [45] showed that TTV DNA was detectable in 93% of patients before LT, increased in all patients during the first 3 months post-transplant and reached a peak level between 41 and 92 days post-LT (median 67 days). TTV load might therefore become a useful marker of the functional intensity of immunosuppression in the vast majority of LTRs [46].
TTV and rejection
The usefulness of TTV load in LTR management is supported by the link between TTV copies in peripheral blood and host immunosuppression. A low TTV load may indicate an active immune system with a higher risk of rejection. Conversely, a high TTV load may be a sign of an over-suppressed immune system with a higher risk of infection [43]. Unfortunately, although this is an attractive paradigm, clinical data in LT are limited and, where available, contradictory.
In a prospective cohort of 143 LTRs, baseline TTV level, defined as the maximum log10 copies·mL−1 in the first 3 months, was not predictive of the occurrence of ACR. However, when considering only the lowest TTV load within each 3-month period, the authors demonstrated that the lower this minimum value, the higher the risk of ACR (hazard ratio (HR) 0.48, 95% CI 0.26–0.88, p=0.018) [46]. Similar results were found in a smaller study of 34 LTRs, in which the authors suggested that a 10-fold decrease in TTV DNA level had a sensitivity of 0.74 and a specificity of 0.99 for the diagnosis of ACR [47].
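The following sketch illustrates how a 10-fold (1 log10 copies·mL−1) fall in serial TTV loads relative to the preceding peak could be flagged; the sampling values are invented and this is not the analysis pipeline used in the cited studies.

```python
# Illustrative sketch: flag a >=10-fold (1 log10 copies/mL) fall in serial TTV
# loads relative to the preceding maximum, in the spirit of the decision rule
# discussed above. Loads are invented; this is not the cited studies' method.
import math

def flag_ttv_drop(serial_loads_copies_ml: list[float], fold_threshold: float = 10.0) -> list[bool]:
    """Return, per sample, whether load has fallen >= fold_threshold from the prior peak."""
    flags, running_max = [], 0.0
    for load in serial_loads_copies_ml:
        running_max = max(running_max, load)
        flags.append(load > 0 and running_max / load >= fold_threshold)
    return flags

loads = [1e6, 5e6, 8e6, 6e5, 7e6]                # copies/mL at successive visits
print(flag_ttv_drop(loads))                      # [False, False, False, True, False]
print([round(math.log10(x), 1) for x in loads])  # 6.0, 6.7, 6.9, 5.8, 6.8 log10 copies/mL
```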
Notably, no correlation was found between TTV load and episodes of AMR. This is not surprising as it is suspected that plasma TTV level is mainly controlled by T-cell-mediated immunity [45].
Contradictory results were obtained in a prospective cohort of 98 LTRs followed for 2 years after LT, where no association between TTV load and AR was found [48].
TTV and infection
In the work by Jaksch et al. [46], a high maximum TTV load during the first 3 months post-LT was strongly associated with an increased risk of infection (HR 5.0, 95% CI 2.94–8.67, p<0.001); again, this was not confirmed in the study by Nordén et al. [48].
TTV and CLAD
It has also been suggested that higher TTV levels during the first 3 months post-LT are associated with a reduced risk of CLAD (HR 0.71, 95% CI 0.54–0.93); for each log10 copies·mL−1 increase, the risk of CLAD fell to about 70% of baseline [46].
TTV and immunosuppressive treatment
TTV level is not correlated with mean tacrolimus blood levels but may be a more precise marker of functional immunosuppression [45, 47]. Indeed, it has recently been demonstrated that humoral response to the severe acute respiratory syndrome coronavirus 2 vaccine was better in subjects with a lower TTV load pre-vaccination [49, 50].
TTV – current limitations
Data about the usefulness of TTV in LTR management are for the moment contradictory and limited and so it is not ready to be used in routine clinical practice. Interpreting TTV levels has proven to be complex due to the absence of a clearly defined “normal” range. The results of ongoing studies including a larger number of LT patients will help us obtain a clearer vision of the potential of this promising biomarker [51].
Molecular diagnosis by tissue transcriptomics
Biology
TBBx is currently viewed as the gold standard approach to the diagnosis of ACR and a key modality used for the diagnosis of AMR [8, 13]. However, the sensitivity and specificity of histological diagnosis are limited and subject to inter-observer variability in reporting. Gene expression (transcriptomic) analysis of RNA extracted from lung tissue using microarray techniques reveals the expression levels of potentially thousands of genes simultaneously, using plates containing multiple DNA sequence probes [52]. These probes hybridise with their complementary sequences if present in the sample tested [53]. The results are expressed as a score indicating how close the tissue is to a specific molecular phenotype (of rejection, infection or CLAD).
Practical considerations
Commercial laboratories offering this approach as a service require that one or preferably two transbronchial or endobronchial biopsies (from the third airway bifurcation) be sent to a central laboratory at room temperature. Results are usually available within 2 days, plus shipping time.
Tissue transcriptomics and rejection
In the INTERLUNG study, a prediction model of ACR was built by applying machine-learning algorithms to lung tissue transcriptomic data [54]. This prospective multicentre study included 457 TBBx and 314 bronchial biopsies (BBx). A molecular signature for ACR was defined using the top 200 genes associated with histologic rejection in kidney transplantation. The Halloran group showed that some elements of the ACR molecular signature were strongly associated with ACR in TBBx, confirming previous results [54–56]. Unfortunately, no AMR signature could be identified.
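The sketch below conveys the general idea of signature-based scoring followed by a classifier, using simulated expression data; it does not reproduce the INTERLUNG methodology, and the gene set size, data and model choice are illustrative assumptions only.

```python
# Highly simplified sketch of the general idea behind signature-based tissue
# transcriptomics: score a biopsy by how strongly a predefined gene set is
# expressed relative to a cohort, then feed such scores to a classifier.
# Data and model choice are illustrative and do not reproduce INTERLUNG.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_biopsies, n_genes = 60, 200                        # e.g. a 200-gene rejection-associated set
expression = rng.normal(size=(n_biopsies, n_genes))  # stand-in for normalised microarray data
labels = rng.integers(0, 2, size=n_biopsies)         # 1 = histological ACR, 0 = no rejection
expression[labels == 1] += 0.4                       # simulate signature up-regulation in ACR

# z-score each gene against the cohort, then average across the signature
z = (expression - expression.mean(axis=0)) / expression.std(axis=0)
signature_score = z.mean(axis=1)

clf = LogisticRegression().fit(signature_score.reshape(-1, 1), labels)
print(clf.predict_proba(signature_score[:3].reshape(-1, 1))[:, 1])  # per-biopsy rejection probability
```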
Tissue transcriptomics and infection
Transcriptomics using microarray techniques has shown promising results in the diagnosis of pulmonary infection, especially in intensive care unit patients [57]. Unfortunately, in the studies by Halloran et al. [54, 55], no specific molecular signature was associated with allograft infection, and there was significant heterogeneity in gene expression profiles.
Tissue transcriptomics and CLAD
Parkes et al. [58] studied the whole-genome RNA profile on TBBx of LTRs, including patients with CLAD, to better understand the molecular mechanisms underlying allograft dysfunction. They found that CLAD exhibits characteristics of a wound-healing process, marked by the increased expression of genes such as hypoxia inducible factor 1 subunit alpha, serpin family E member 2 and insulin like growth factor 1, rather than an inflammatory profile. More recently, they identified in BBx the top-20 CLAD-selective transcripts, with the aim of building a classifier to predict CLAD. These genes did not overlap with those identified in TBBx. Their ability to predict CLAD was good, with an AUC of 0.7, but lower than the AUC for time (0.83) [59]. These results have no clinical application to date [58].
Tissue transcriptomics – current limitations
Although the technique offers promising insights into immune responses at the tissue level, its role in routine patient care is unclear at present, with additional data on utility needed. Furthermore, interlaboratory variability of microarray-based assays and of data analysis may also be an issue that needs to be evaluated [60, 61]. The combination of tissue transcriptomics with other molecular tools, such as dd-cfDNA, could enhance the performance of each technique compared to when used separately. This has been suggested in kidney transplantation, but has not been studied in LT to date [62, 63]. A future opportunity would be to study the diagnostic performances of peripheral blood leucocyte transcriptomics in order to reduce the need for invasive sampling [64].
MicroRNAs in blood and BAL
Biology
MicroRNAs (miRNAs) are small non-coding RNA molecules that regulate gene expression by silencing their target mRNAs and inhibiting their translation into proteins. They are powerful regulators of the expression of the genome, therefore playing critical roles in a broad range of cellular functions [65]. They are crucial in the regulation of immunological processes that are often dysregulated in respiratory diseases. Their role in many respiratory conditions, such as acute lung injury, asthma, COPD and lung fibrosis, has been previously demonstrated [66]. The type and level of expression of specific miRNAs could therefore have utility as biomarkers of immunological processes.
Practical considerations
MiRNAs can be detected in blood as well as in BAL fluid. Of note, haemolysis impacts the analysis of miRNAs [67]. Various techniques such as quantitative reverse transcription PCR, microarrays and next-generation RNA sequencing can be used to quantify miRNAs. Biological samples must be frozen for storage, at various temperatures from −20°C to −80°C, depending on the technique used. There is no commercially available assay specifically dedicated to LT at present [60, 68].
Dynamics after LT
In a study including 18 LTRs from 1 to 30 months post-transplantation, it was demonstrated that the LTRs had a dysregulated expression profile of blood miRNAs compared to their healthy counterparts [69].
MiRNAs and rejection
Palleschi et al. [67] profiled miRNAs in BAL performed at 7 (T0), 14 and 90 days post-LT. Studying 16 LTRs, they identified a signature of 14 miRNAs measured at T0 that distinguished patients who subsequently developed AR during follow-up. These miRNAs are known to be associated with an inflammatory state. These preliminary results suggest that miRNA expression profiling may help predict the onset of AR and could therefore allow pre-emptive increases in immunosuppression to reduce the risk of AR.
MiRNA and infection
While miRNAs have been widely studied in lung infections, there are almost no data in the specific area of LT [66]. In the work by Palleschi et al. [67], no miRNA signature was associated with pneumonia in LTRs.
MiRNA and CLAD
Several studies have attempted to identify specific miRNAs associated with CLAD development, with pro-fibrotic miRNA-21 being the most promising candidate [70–72]. Its co-overexpression with signal transducer and activator of transcription 3 could play a role in the development of obliterative bronchiolitis [73]. In a small pilot study, miRNA-21 showed good discriminative power (AUC 0.89) in confirming a CLAD diagnosis. Unfortunately, there was no difference in the level of this biomarker between CLAD and non-CLAD patients 1 year before or 1 year after the diagnosis of CLAD [72]. The elevated plasma level of miRNA-21 at the time of CLAD diagnosis may provide insight into the pathophysiology of this process, although its utility as a diagnostic tool appears limited. Another interesting result is the identification of an miRNA signature in LTRs receiving extracorporeal photopheresis (ECP), obtained by comparing a set of specific miRNAs in patients with established BOS grade 1–2 before the start of ECP and 6 months after treatment. This could help us understand the mechanism of action of this treatment [74]. Unfortunately, in this study, baseline miRNA levels did not predict ECP treatment response.
MiRNA and immunosuppressive treatment
MiRNA expression profiles differed between LTRs who received induction therapy with alemtuzumab (anti-CD-52) and those who did not [75].
MiRNA – current limitations
Although miRNAs offer promise as a molecular diagnostic, they are a long way from routine clinical use. The very low initial input of miRNA in quantitative PCR can lead to bias from exponential amplification. Additionally, over-representation of sequences of low/no clinical significance complicates final analyses [76]. Use of miRNA measurement in clinical practice would need a consensus regarding the optimal methodology for their quantification and much larger studies in LT recipients [77].
DNA methylation markers
Biology
DNA methylation is an epigenetic process that regulates gene expression. It refers to the reversible addition of a methyl (CH3) group to the DNA strand itself [78]. DNA methylation is strongly linked to the aging process, a factor that must be taken into account when understanding its relevance in LT [79].
DNA methylation provides insights into the control of molecular pathways underlying the pathogenesis of lung diseases, but could also be a potential noninvasive biomarker for disease prevention, diagnosis and prognosis. Additionally, its reversibility makes it a potential target for drug development [78]. In lung diseases, DNA methylation has mainly been studied in lung cancer, but data exist in other conditions such as lung fibrosis, asthma and COPD [80, 81].
Practical considerations
DNA methylation can be analysed in peripheral blood, sputum or lung tissue. It is generally recommended to freeze biological samples to prevent changes in methylation levels that might occur during storage at room temperature. The turnaround time from sample to result varies, depending on the type of analysis performed, from several days to several weeks.
DNA methylation in rejection and infection
In the field of LT, very few studies on methylation have been published so far and, to date, none has examined the role of methylation in rejection or infection in LTRs.
DNA methylation in CLAD
Methylation status in donor airway epithelial cells was studied at 1 year post-LT in 13 subjects who experienced severe PGD and 15 age-matched controls. In this study, the epigenetic age of the donor, estimated from methylation status, was 6.5 years greater (95% CI 1.7–11.2) in recipients who had experienced PGD, but was not associated with CLAD-free survival (p=0.11) [82]. Other data on this subject have only been published as conference abstracts to date [83, 84].
DNA methylation – current limitations
In kidney transplantation, the role of epigenetic markers has been studied, but with no clear clinical application identified so far, mainly due to the substantial variation among studies, the lack of standardised and approved diagnostic procedures, and the small sample sizes [85]. Although a clinical application of DNA methylation remains distant in LT, changes in DNA methylation associated with immune pathways offer potential for noninvasive risk assessment of developing post-transplant complications such as AR and CLAD and may ultimately help identify new therapeutic targets [78, 86, 87].
Exosomes
Biology
Exosomes are small extracellular membranous vesicles that are released from cells upon fusion of an intracellular compartment, the multivesicular body, with the plasma membrane of the cell. They are critical in cell-to-cell communication and have been implicated in the development of several lung diseases such as COPD and pulmonary fibrosis, but may also become useful vectors for drugs [88–90]. Several clinical trials are investigating exosomes in lung diseases, especially in early lung cancer diagnosis and immune modulation of severe pneumonitis [91, 92].
Practical considerations
Exosomes can be isolated from serum or BAL fluid. The average concentration of exosomes isolated from BAL or blood ranges from 0.3 to 1 ng·µL−1. It is generally the protein and miRNA content of exosomes that is analysed. Exosomes should be isolated as soon as possible after sample collection to avoid degradation, and then stored at 4°C or frozen, depending on the downstream application. Many different techniques, including ultracentrifugation and size-exclusion chromatography, are available to isolate and investigate exosomes [93]. These are specialist techniques currently only available in dedicated research laboratories.
Exosomes and rejection
Gunasekaran et al. [94] showed that exosomes isolated from sera and BAL in patients with AR, but not those isolated from stable patients, expressed on their surface both donor HLA and collagen V. It has previously been demonstrated that immune responses directed against mismatched donor HLA and lung-associated self-antigens, such as collagen V, contributed to the pathogenesis of PGD and CLAD [95, 96]. These exosomes were isolated in sera from patients 3 months before AR, suggesting they may be a potential early biomarker. A recently published animal-based study drew similar conclusions [97].
Exosomes and infection
Gunasekaran et al. [98] described that LTRs suffering from respiratory viral infections had higher levels of exosomes containing self-antigens, Kα1 tubulin and collagen V compared to stable controls. Mice immunised with these exosomes developed fibrosis and small airway occlusion whereas controls did not. The same group showed that LTRs suffering from coronavirus disease 2019 had specific circulating exosomes that provoked inflammation in mice [99].
Exosomes and CLAD
In a study of 10 LTRs with CLAD, Gunasekaran et al. [94] demonstrated the presence in BAL of exosomes expressing donor HLA and lung self-antigens up to 6 months before the CLAD diagnosis. These results were corroborated by the work of Sharma et al. [100], who reached similar conclusions and detected exosomes containing increased levels of Kα1 tubulin and collagen V in plasma collected 6 and 12 months prior to the diagnosis of CLAD.
Exosomes – current limitations
Exosomes could play a role in triggering the immune responses to allo- and self-antigens that ultimately lead to CLAD and at the same time be potential early biomarkers and therapeutic targets for AR and CLAD [101, 102]. Exosome analysis in the field of LT is in its infancy, with only small preliminary studies published to date, and they remain some way from any routine clinical application.
Future biomarkers under investigation
Exhaled breath condensate analysis
Exhaled breath condensate (EBC) analysis is based on changes in the volatile organic compound (VOC) profile obtained from a patient's breath. Endogenous VOCs are gaseous biomarkers whose composition in EBC is altered by stress and inflammation. Changes in VOCs might therefore aid early, noninvasive diagnosis of graft dysfunction [103–105]. Wijbenga et al. [105] published promising data in LT, using EBC for the early detection of CLAD. Despite some intriguing results in other lung diseases, such as COPD, cystic fibrosis and lung cancer, to the best of our knowledge there is no application of EBC in routine clinical practice [106–108].
Lymphocyte subset assays
Numerous publications have analysed different immune cell profiles and their connection to lung allograft dysfunction [109–111]. Brosseau et al. [110] demonstrated that LTRs with a frequency of CD9+ B-cells below 6.6% in peripheral blood had a significantly higher incidence of BOS. Durand et al. [112] showed that patients with a proportion of CD4+CD25highFoxP3+ T-cells higher than 2.4% at 1 and 6 months after LT had a two-fold higher risk of developing BOS. Ius et al. [113] reported that, at the third week after LT, increasing frequencies of CD4+CD25highCD127low T-cells in peripheral blood were associated with better CLAD-free survival, as well as better graft survival. Another study, by Brugière et al. [114], suggested that an increase in CD4+CD57+ Ig-like transcript 2+ T-cells over the first year after LT predicted the incidence of CLAD (HR 1.25, 95% CI 1.09–1.44). Cell subsets have been studied by flow cytometry or immunostaining of tissue sections. However, these approaches face a major limitation: an intrinsic bias introduced by relying on prespecified surface markers. This might be addressed by analysing entire sets of transcripts using single-cell RNA sequencing [111]. Given the cost of these methods and their currently low clinical applicability, none of these approaches is yet ready for routine clinical practice, despite some promising results.
Telomere length analysis
Telomeres consist of nucleotide repeats that protect the ends of chromosomes during cell replication. They shorten naturally with age, but some pathogenic variants can cause excessive shortening, leading to premature cellular senescence [115]. In LT, telomere length can be studied in the graft donor and in recipient blood. A study by Newton et al. [115] showed that an LTR leukocyte telomere length below the 10th percentile of normal was associated with a higher incidence of PGD grade 3 and a shorter time to CLAD onset. Shorter telomere length has also been associated with a higher incidence of CLAD, but no relationship between telomere length and AR has been observed [116]. Telomere length can be measured by quantitative PCR on genomic DNA isolated from blood leukocytes or directly from tissue [116, 117]. In the future, age-adjusted recipient leukocyte telomere length (LTL) might help guide the intensity of an immunosuppressant regimen, as a shorter LTL may be associated with more adverse drug effects [118].
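One common way to express telomere length from qPCR is as a telomere-to-single-copy-gene (T/S) ratio relative to a reference sample; the sketch below shows that calculation with invented Ct values, and the cited studies may have used a different protocol or normalisation.

```python
# Hedged sketch: relative telomere length from qPCR expressed as a T/S ratio,
# i.e. telomere repeat signal normalised to a single-copy gene and to a
# reference DNA sample (a 2^-ddCt style calculation). Ct values are invented;
# the cited studies may have used a different protocol or normalisation.

def ts_ratio(ct_telomere: float, ct_single_copy: float,
             ref_ct_telomere: float, ref_ct_single_copy: float) -> float:
    delta_sample = ct_telomere - ct_single_copy
    delta_reference = ref_ct_telomere - ref_ct_single_copy
    return 2.0 ** -(delta_sample - delta_reference)

# A sample needing one extra cycle to amplify telomeric DNA (relative to the
# reference) has roughly half the relative telomere length.
print(ts_ratio(ct_telomere=21.0, ct_single_copy=25.0,
               ref_ct_telomere=20.0, ref_ct_single_copy=25.0))  # ~0.5
```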
Conclusion
Current clinical monitoring of LTRs lacks sensitive and specific diagnostic tools that can reliably distinguish the different types of lung allograft injury that can occur. Considerable interest has emerged in the development and validation of new less invasive molecular biomarkers, with the aim of smarter monitoring of graft function, personalising treatment, protecting long-term functional capacity and improving overall survival. In this review, we have explored the current data on a range of molecular diagnostics in LT, including tools that are already commercially available and those which are still in the early stages of evaluation or development. The published data show that significant progress has been made towards understanding the potential use of molecular biomarkers in LT recipients, while also demonstrating their limitations. Measurement of dd-cfDNA and use of tissue transcriptomics appear to be the closest to routine clinical application.
However, all the molecular biomarkers reviewed still suffer from shortcomings in their sensitivity, specificity and clinical utility in distinguishing between different forms of lung allograft injury. These limitations may be further underestimated due to potential publication bias, as negative studies are less likely to be published. In table 1, we summarise the limitations of the current data available across the reviewed molecular markers.
Before these techniques can be adopted into routine clinical use, it is imperative that more robust data are gathered from larger numbers of LTRs in multicentre studies. Future studies should aim to adhere to the quality criteria established by the International Society for Heart and Lung Transplantation regarding the number and quality of samples when comparing these new biomarkers with classical techniques [7, 8] (see “Points for clinical practice and questions for future research”).
Since it seems unlikely that any single molecular approach in LT will be transformative in clinical utility, further studies should also explore use of combinations of different diagnostics. The clinical utility of such combinations has already been demonstrated in kidney transplantation.
At present, there is also a dearth of evidence exploring the utility of molecular diagnostics in optimising and personalising immunosuppression use in LT, which is a major unmet clinical need. Finally, the cost-effectiveness of molecular diagnostics, strongly suggested by studies in kidney transplantation, still needs to be established in the field of LT.
At present, molecular diagnostics cannot replace the traditional radiological, physiological and histological investigations used in routine clinical practice for LTRs. In the near term, selected molecular diagnostics may become useful adjuncts to these existing diagnostic approaches.
Appropriately designed studies to prospectively answer clinically relevant questions are needed to advance this field (see “Points for clinical practice and questions for future research”).
Points for clinical practice and questions for future research
Current clinical monitoring of LTRs lacks sensitive and specific diagnostic biomarkers enabling early diagnosis and reliable differentiation of the various causes of lung allograft injury.
Several new approaches have been developed. The techniques closest to routine application are measurement of dd-cfDNA, followed by tissue transcriptomics.
At present, these molecular biomarkers are at best complementary to traditional clinical and more invasive monitoring approaches.
Further studies should include the following approaches and considerations:
– Randomised controlled trials that compare current standard of care with care pathways augmented by one or more molecular diagnostics with a clearly defined primary outcome such as rejection rates, use of invasive testing and long-term graft function.
– Studies using real-world data collected via clinical databases, registries or insurance data prospectively designed to generate real-world evidence.
– Studies should be multicentre, ideally international, and longitudinal rather than cross-sectional, and should be sufficiently powered in terms of the number of LTRs included to generate meaningful results.
– Performance of new molecular diagnostics, either alone or in combination, should be compared against an agreed panel of currently used traditional diagnostic approaches to ensure comparison is made with the current “gold standard”.
– As well as graft-focused outcomes, studies aimed at immunosuppression personalisation and minimisation should consider secondary outcomes such as renal function, infections, diabetes and patient-reported outcomes of immunosuppression side-effects.
– Where possible, studies should consider the cost-effectiveness of molecular diagnostics alongside an assessment of the clinical effectiveness.
Supplementary material
Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.
Supplementary material ERR-0125-2023.SUPPLEMENT
Footnotes
Provenance: Submitted article, peer reviewed.
Conflict of interest: P. Pradère, A. Zajacova, and J. Le Pavec have no conflict of interest to declare. S. Bos has received lecture fees from Therakos (Mallinckrodt) and Jazz, and conference registration/travel support from GlaxoSmithKline and Takeda, outside the submitted work. A. Fisher has received research grant income to his institution from Chiesi Pharmaceuticals, Pfizer, Therakos (Mallinckrodt) and GlaxoSmithKline. He has received consultancy fees via his institution from Sanofi and Altavant and speaker fees from Therakos (Mallinckrodt). All of the above are for activities outside the submitted work.
Support statement: P. Pradère is supported by the Groupe Hospitalier Paris Saint Joseph as a visiting research fellow to Newcastle University. S. Bos is supported by the Paul Corris International Clinical Research Training Scholarship at Newcastle University. A. Fisher is supported by the National Institute for Health and Care Research (NIHR) Blood and Transplant Research Unit in Organ Donation and Transplantation (NIHR203332), a partnership between NHS Blood and Transplant, University of Cambridge and Newcastle University. The views expressed are those of the author(s) and not necessarily those of the NIHR, NHS Blood and Transplant or the Department of Health and Social Care.
- Received June 21, 2023.
- Accepted October 16, 2023.
- Copyright ©The authors 2023
This version is distributed under the terms of the Creative Commons Attribution Non-Commercial Licence 4.0. For commercial reproduction rights and permissions contact permissions{at}ersnet.org