IEEE Transactions on Medical Imaging


Archived papers: 514
Location-Dependent Spatiotemporal Antialiasing in Photoacoustic Computed Tomography
Peng Hu, Lei Li, Lihong V. Wang
Keywords: Image reconstruction; spatiotemporal phenomena; transducers; optical filters; cutoff frequency; acoustics; optical imaging; acoustic tomography; antialiasing; biological tissues; biomedical optical imaging; biomedical ultrasonics; computerised tomography; image resolution; medical image processing; Nyquist criterion; photoacoustic effect; ultrasonic transducer arrays; ultrasonic transducers; ultrasonic waves; image domain; location dependency; location-dependent spatiotemporal antialiasing; optical energy deposition; PACT; photoacoustic computed tomography; optical absorption contrast; reconstructed images; scanning equivalent; spatial aliasing; spatial distribution; spatial Nyquist criterion; spatiotemporal analysis; transducer elements; Tomography, X-Ray Computed; Artifacts; Contrast Media; Spatio-Temporal Analysis; Spectrum Analysis
Abstract: Photoacoustic computed tomography (PACT) images optical absorption contrast by detecting ultrasonic waves induced by optical energy deposition in materials such as biological tissues. An ultrasonic transducer array or its scanning equivalent is used to detect the ultrasonic waves. The spatial distribution of the transducer elements must satisfy the spatial Nyquist criterion; otherwise, spatial aliasing occurs and causes artifacts in reconstructed images. The spatial Nyquist criterion imposes different requirements on the transducer elements' distribution at different locations in the image domain, a dependence that has not been studied previously. In this research, we characterize this location dependency through spatiotemporal analysis and propose a location-dependent spatiotemporal antialiasing method. Applying this method to PACT in full-ring array geometry, we effectively mitigate aliasing artifacts with minimal effects on image resolution in both numerical simulations and in vivo experiments.
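The key point is that the spatial Nyquist criterion tightens as an image point moves away from the array center. As a rough illustration only (not the authors' implementation), the sketch below lowpass-filters one channel with a cutoff that follows the commonly stated full-ring relation f_c(r) ≈ Nc/(4πr); the function name, sampling rate, and array parameters are illustrative assumptions.

```python
import numpy as np

def location_dependent_lowpass(channel_data, r, fs, N=512, c=1500.0):
    """Lowpass one transducer channel with a cutoff tied to source radius r.

    A minimal sketch, assuming the full-ring result that spatial aliasing at
    radius r (m) is avoided for temporal frequencies below roughly
    f_c(r) = N * c / (4 * pi * r), with N elements and speed of sound c (m/s).
    """
    f_cut = N * c / (4.0 * np.pi * max(r, 1e-6))        # location-dependent cutoff (Hz)
    spectrum = np.fft.rfft(channel_data)
    freqs = np.fft.rfftfreq(len(channel_data), d=1.0 / fs)
    spectrum[freqs > f_cut] = 0.0                       # hard antialiasing filter
    return np.fft.irfft(spectrum, n=len(channel_data))

# Example: filter a synthetic channel for a pixel 30 mm from the ring center.
signal = np.random.randn(2048)
filtered = location_dependent_lowpass(signal, r=0.03, fs=40e6)
```

Pixels near the center then keep most of their bandwidth (hence resolution), while far-off pixels are filtered more aggressively, which matches the paper's reported trade-off.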
ENSURE: A General Approach for Unsupervised Training of Deep Image Reconstruction Algorithms
Hemant Kumar Aggarwal, Aniket Pramanik, Maneesh John, Mathews Jacob
Keywords: Measurement; loss measurement; training; image reconstruction; noise measurement; magnetic resonance imaging; weight measurement; compressed sensing; deep learning (artificial intelligence); image denoising; image sampling; inverse problems; mean square error methods; unsupervised learning; fully sampled ground-truth data; deep image reconstruction algorithms; deep networks; ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework; ENSURE loss function; GSURE loss functions; model-based algorithms; MR image recovery; noise-free images; reconstruction quality; unbiased estimate; unsupervised training; SURE; MRI; Algorithms; Probability; Image Processing, Computer-Assisted
Abstract: Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks is often unavailable in several applications, restricting the applicability of such methods. We introduce a novel metric termed the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework generalizes the classical SURE and GSURE formulations to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss functions over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate of the true mean-square error, making it a better alternative to GSURE, which is unbiased only for the projected error. Our experiments show that networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
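For intuition, the sketch below implements the classical single-operator SURE denoising loss with a Monte Carlo divergence estimate, the objective that ENSURE generalizes by averaging GSURE over randomly chosen measurement operators; `net`, `sigma`, and `eps` are illustrative names, not the paper's API.

```python
import torch

def mc_sure_loss(net, y, sigma, eps=1e-3):
    """Classical SURE loss with a Monte Carlo divergence estimate.

    A sketch under assumptions: it trains a denoiser net on noisy data
    y = x + n, n ~ N(0, sigma^2), without access to the clean image x.
    """
    f_y = net(y)
    b = torch.randn_like(y)                       # random probe vector
    # div f(y) ~= b^T (f(y + eps*b) - f(y)) / eps  (Monte Carlo estimate)
    div = (b * (net(y + eps * b) - f_y)).sum() / eps
    n = y.numel()
    # data-fit term - noise variance + 2*sigma^2 * normalized divergence
    return ((y - f_y) ** 2).sum() / n - sigma ** 2 + (2 * sigma ** 2 / n) * div
```

In expectation this quantity matches the true mean-square error against the unseen clean image, which is what makes unsupervised training possible.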
TMM-Nets: Transferred Multi- to Mono-Modal Generation for Lupus Retinopathy Diagnosis
Ruhan Liu, Tianqin Wang, Huating Li, Ping Zhang, Jing Li, Xiaokang Yang, Dinggang Shen, Bin Sheng
Keywords: Lesions; transfer learning; retinopathy; image synthesis; training; data models; biomedical imaging; biomedical optical imaging; data augmentation; diseases; eye; feature extraction; learning (artificial intelligence); medical image processing; patient care; patient diagnosis; diagnosis-guided multi-to-mono modal generation networks; diagnostic data structurization; lesion-aware multiscale attention mechanism; lupus retinopathy diagnosis; mono-modal image generation; multimodal data; rare diseases; single modality; TMM-Nets; lupus retinopathy; adversarial training; UWF-FFA; UWF-FP; unmatched multi-modal data; Humans; Rare Diseases; Machine Learning; Diabetic Retinopathy; Lupus Erythematosus, Systemic
Abstract: Rare diseases, which are severely underrepresented in basic and clinical research, can particularly benefit from machine learning techniques. However, current learning-based approaches usually focus on either mono-modal image data or matched multi-modal data, whereas the diagnosis of rare diseases necessitates aggregating unstructured and unmatched multi-modal image data due to their rare and diverse nature. In this study, we therefore propose diagnosis-guided multi-to-mono modal generation networks (TMM-Nets) along with training and testing procedures. TMM-Nets can transfer data from multiple sources to a single modality for diagnostic data structurization. To demonstrate their potential in the context of rare diseases, TMM-Nets were deployed to diagnose lupus retinopathy (LR-SLE), leveraging unmatched regular and ultra-wide-field fundus images for transfer learning. The TMM-Nets encoded the transfer learning from diabetic retinopathy to LR-SLE based on the similarity of the fundus lesions. In addition, a lesion-aware multi-scale attention mechanism was developed for clinical alerts, enabling TMM-Nets not only to inform patient care, but also to provide insights consistent with those of clinicians. An adversarial strategy was also developed to refine multi-to-mono modal image generation based on diagnostic results and the data distribution, enhancing the data augmentation performance. Compared to the baseline model, the TMM-Nets showed 35.19% and 33.56% F1 score improvements on the test and external validation sets, respectively. In addition, the TMM-Nets can be used to develop diagnostic models for other rare diseases.
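The adversarial refinement can be pictured as a generator objective penalized both for failing to fool a discriminator and for changing the diagnosis. The sketch below is a hypothetical illustration of that general idea only, not the TMM-Nets architecture; `G`, `D`, `clf`, and the loss weights are assumed names.

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, clf, src_img, target_label, adv_weight=1.0, dx_weight=0.5):
    """One diagnosis-guided generator loss, sketched under assumptions.

    G translates a source-modality image to the target modality, D is an
    image discriminator, and clf is a frozen diagnostic classifier.
    """
    fake = G(src_img)                                    # translate to target modality
    logits = D(fake)
    adv_loss = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))                 # fool the discriminator
    dx_loss = F.cross_entropy(clf(fake), target_label)   # keep the diagnosis consistent
    return adv_weight * adv_loss + dx_weight * dx_loss
```

Coupling the two terms is what makes the synthesized images useful for augmentation: they must look like the target modality and still carry the diagnostic signal.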
Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks
Liang Han, Hang Su, Zhaozheng Yin
Keywords: Image restoration; microscopy; imaging; image segmentation; deep learning; optical microscopy; neural networks; biomedical optical imaging; deconvolution; deep learning (artificial intelligence); medical image processing; approximated models; deep neural network; detection tasks; cell segmentation; image restoration procedure; imaging principle; inverse imaging procedure; phase contrast microscopy imaging model; noninvasive imaging technique; optical principle; phase contrast microscope; phase contrast microscopy images; restored images; simplified computational imaging models; imaging process; Microscopy, Phase-Contrast; Artifacts; Neural Networks, Computer; Staining and Labeling
Abstract: Phase contrast microscopy, as a noninvasive imaging technique, has been widely used to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle of the specifically designed microscope, phase contrast microscopy images contain artifacts such as halo and shade-off, which hinder cell segmentation and detection tasks. Some previous works developed simplified computational imaging models for phase contrast microscopes using linear approximations and convolutions. These approximated models do not exactly reflect the imaging principle of the phase contrast microscope, so image restoration by solving the corresponding deconvolution problem is imperfect. In this paper, we revisit the optical principle of the phase contrast microscope to precisely formulate its imaging model without any approximation. Based on this model, we propose an image restoration procedure that reverses the imaging model with a deep neural network, instead of mathematically deriving the inverse operator of the model, which is technically intractable. Extensive experiments demonstrate the superiority of the newly derived phase contrast microscopy imaging model and the power of the deep neural network in modeling the inverse imaging procedure. Moreover, the restored images allow high-quality cell segmentation to be achieved with simple thresholding methods. Implementations of this work are publicly available at https://github.com/LiangHann/Phase-Contrast-Microscopy-Image-Restoration.
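To illustrate the final claim, the sketch below segments a restored image with a global Otsu threshold plus connected-component labeling, assuming bright cells on a dark background; it is a generic recipe, not code from the linked repository.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def segment_restored(restored):
    """Segment cells from a restored phase contrast image by thresholding.

    Once the halo and shade-off artifacts have been removed, a single global
    threshold separates cells from background.
    """
    mask = restored > threshold_otsu(restored)  # global Otsu threshold
    return label(mask)                          # connected components = cell instances

# Example on a toy image (two bright blobs on a dark background).
img = np.zeros((64, 64)); img[10:20, 10:20] = 1.0; img[40:50, 40:50] = 0.8
instances = segment_restored(img)
```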
MSMFN: An Ultrasound Based Multi-Step Modality Fusion Network for Identifying the Histologic Subtypes of Metastatic Cervical Lymphadenopathy
Zheling Meng, Yangyang Zhu, Wenjing Pang, Jie Tian, Fang Nie, Kun Wang
Keywords: Ultrasonic imaging; task analysis; lesions; clinical diagnosis; feature extraction; neck; lymph nodes; biological tissues; biomedical ultrasonics; cancer; deep learning (artificial intelligence); Doppler measurement; image classification; medical image processing; tumours; adenocarcinoma subtypes; B-mode ultrasound; dynamic contrast-enhanced ultrasound; encoded clinical information; high-level BUS semantic feature maps; histologic subtypes; metastatic cervical lymphadenopathy; modality heterogeneity features; modality interaction; modality-specific characteristics; multimodal ultrasound fusion framework; primary lesion; self-supervised feature orthogonalization loss; static imaging feature vector; Multi-step Modality Fusion Network (MSMFN); multi-modal fusion; Humans; Ultrasonography; Elasticity Imaging Techniques; Adenocarcinoma; Lymphadenopathy; Semantics
Abstract: Identifying the squamous cell carcinoma and adenocarcinoma subtypes of metastatic cervical lymphadenopathy (CLA) is critical for localizing the primary lesion and initiating timely therapy. B-mode ultrasound (BUS), color Doppler flow imaging (CDFI), ultrasound elastography (UE), and dynamic contrast-enhanced ultrasound provide effective tools for identification, but synthesizing the information across modalities is a challenge for clinicians. Rationally fusing these modalities with clinical information via deep learning to personalize the classification of metastatic CLA therefore requires new exploration. In this paper, we propose the Multi-step Modality Fusion Network (MSMFN) for multi-modal ultrasound fusion to identify histological subtypes of metastatic CLA. MSMFN mines the unique features of each modality and fuses them in a hierarchical three-step process. First, under the guidance of high-level BUS semantic feature maps, information in CDFI and UE is extracted by modality interaction, yielding a static imaging feature vector. Second, a self-supervised feature orthogonalization loss is introduced to help learn modality heterogeneity features while maintaining maximal task-consistent category distinguishability across modalities. Finally, six encoded clinical variables are utilized to avoid prediction bias and further improve prediction. Our three-fold cross-validation experiments demonstrate that our method surpasses clinicians and other multi-modal fusion methods, with an accuracy of 80.06%, a true-positive rate of 81.81%, and a true-negative rate of 80.00%. Our network provides a multi-modal ultrasound fusion framework that considers prior clinical knowledge and modality-specific characteristics. Our code will be available at https://github.com/RichardSunnyMeng/MSMFN.
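One plausible form of such an orthogonalization loss (not necessarily the exact MSMFN term) penalizes the squared cosine similarity between per-sample feature vectors from two modality branches, as sketched below; feature shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def orthogonalization_loss(feat_a, feat_b):
    """Push two modality-specific feature vectors toward orthogonality.

    feat_a and feat_b are (batch, dim) features from two modality branches;
    the loss is zero exactly when each pair of features is orthogonal.
    """
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    cos = (a * b).sum(dim=1)          # per-sample cosine similarity
    return (cos ** 2).mean()

# Example: features from a B-mode branch vs. a color Doppler branch.
loss = orthogonalization_loss(torch.randn(8, 128), torch.randn(8, 128))
```

Driving the modality features apart in this way encourages each branch to encode information the others do not, which is the stated goal of learning "modality heterogeneity features".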
Ensemble Inversion for Brain Tumor Growth Models With Mass Effect
Shashank Subramanian, Ali Ghafouri, Klaudius Matthias Scheufele, Naveen Himthani, Christos Davatzikos, George Biros
Keywords: Tumors; brain modeling; biological system modeling; mathematical models; calibration; numerical models; biomedical MRI; brain; cancer; medical image processing; partial differential equations; brain parenchyma; brain tumor growth models; calibrated biophysical model; mass effect; ensemble inversion scheme; glioma; healthy brain anatomy; model calibration; normal subject brain templates; biophysics-based features; physics-based biomarkers; precancerous brain anatomy; single multiparametric MRI scan; single-scan calibration problem; tumor biophysics; tumor initiation site; tumor proliferation; tumor growth model personalization; inverse problem; glioblastoma; Humans; Retrospective Studies; Magnetic Resonance Imaging; Brain Neoplasms; Brain; Glioma; Glioblastoma
Abstract: We propose a method for extracting physics-based biomarkers from a single multiparametric Magnetic Resonance Imaging (mpMRI) scan bearing a glioma. We account for mass effect, the deformation of brain parenchyma due to the growing tumor, which is itself an important radiographic feature whose automatic quantification remains an open problem. In particular, we calibrate a partial differential equation (PDE) tumor growth model that captures mass effect, parameterized by a single scalar, together with tumor proliferation and migration, while localizing the tumor initiation site. The single-scan calibration problem is severely ill-posed because the precancerous, healthy brain anatomy is unknown. To address the ill-posedness, we introduce an ensemble inversion scheme that uses a number of normal subject brain templates as proxies for the healthy precancer subject anatomy. We verify our solver on a synthetic dataset and perform a retrospective analysis on a clinical dataset of 216 glioblastoma (GBM) patients. We analyze the reconstructions using our calibrated biophysical model and demonstrate that our solver provides both global and local quantitative measures of tumor biophysics and mass effect. We further highlight the improved model calibration obtained by including mass effect in tumor growth models: doing so leads to a 10% increase in average Dice coefficients for patients with significant mass effect. We also evaluate our model by introducing novel biophysics-based features and using them for survival analysis. Our preliminary analysis suggests that including such features can improve patient stratification and survival prediction.
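The reaction-diffusion core common to PDE tumor growth models of this kind is the Fisher-Kolmogorov equation, dc/dt = D∇²c + ρc(1−c), with diffusivity D (migration) and growth rate ρ (proliferation). The sketch below advances it with an explicit Euler step on a 2-D grid; it deliberately omits the mass-effect coupling (tissue deformation) that is central to the paper's full model.

```python
import numpy as np

def fisher_kolmogorov_step(c, D, rho, dt, dx):
    """One explicit Euler step of dc/dt = D * laplacian(c) + rho * c * (1 - c).

    c is the tumor cell concentration on a 2-D grid (periodic boundaries for
    simplicity). Stability requires dt * D / dx**2 to be small.
    """
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx ** 2
    return np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)

# Seed a tumor at the grid center and grow it for a few steps.
c = np.zeros((128, 128)); c[64, 64] = 1.0
for _ in range(100):
    c = fisher_kolmogorov_step(c, D=0.1, rho=1.0, dt=0.01, dx=1.0)
```

Calibration then amounts to an inverse problem: find D, ρ, the mass-effect scalar, and the seed location so that the simulated tumor matches the segmentation from the single mpMRI scan.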
ICAM-Reg: Interpretable Classification and Regression With Feature Attribution for Mapping Neurological Phenotypes in Individual Scans
Cher Bass, Mariana da Silva, Carole Sudre, Logan Z. J. Williams, Helena S. Sousa, Petru-Daniel Tudosiu, Fidel Alfaro-Almagro, Sean P. Fitzgibbon, Matthew F. Glasser, Stephen M. Smith, Emma C. Robinson
Keywords: Diseases; feature extraction; biomedical imaging; Alzheimer's disease; imaging; training; neuroimaging; biomedical MRI; brain; cognition; deep learning (artificial intelligence); image classification; image registration; medical computing; medical image processing; neurophysiology; pattern classification; regression analysis; background confounds; brain age prediction; brain imaging; developing Human Connectome Project; feature attribution; generative adversarial network; generated FA maps; ICAM-Reg; interpretability; individual scans; interpretable classification; mapping neurological phenotypes; Mini-Mental State Examination score prediction; population-based analyses; regression module; VAE-GAN; variational autoencoder; deep generative models; image-to-image translation; Humans; Neuroimaging; Brain; Radionuclide Imaging; Connectome
Abstract: An important goal of medical imaging is to precisely detect patterns of disease specific to individual scans; however, this is challenged in brain imaging by the degree of heterogeneity of shape and appearance. Traditional methods based on image registration historically fail to detect variable features of disease, as they rely on population-based analyses suited primarily to studying group-average effects. In this paper we therefore take advantage of recent developments in generative deep learning to develop a method for simultaneous classification, or regression, and feature attribution (FA). Specifically, we explore the use of a VAE-GAN (variational autoencoder - generative adversarial network) for translation, called ICAM, to explicitly disentangle class-relevant features from background confounds, for improved interpretability and regression of neurological phenotypes. We validate our method on Mini-Mental State Examination (MMSE) cognitive test score prediction for the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort, as well as brain age prediction for both neurodevelopment and neurodegeneration, using the developing Human Connectome Project (dHCP) and UK Biobank datasets. We show that the generated FA maps can be used to explain outlier predictions and demonstrate that the inclusion of a regression module improves the disentanglement of the latent space. Our code is freely available on GitHub at https://github.com/CherBass/ICAM.
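Generically, an FA map from a translation model is simply the difference between the input and its class-translated counterpart, so only class-relevant regions light up. The sketch below assumes a hypothetical `model(x, target_class)` interface, not the released ICAM API.

```python
import torch

@torch.no_grad()
def feature_attribution_map(model, x, target_class):
    """Compute a feature attribution (FA) map by class translation.

    The model re-renders the input as the target class (e.g., a different
    diagnosis or brain age); subtracting the input leaves a signed map of
    the class-driven changes, while background anatomy cancels out.
    """
    translated = model(x, target_class)
    return translated - x
```

Because confound-related appearance is shared between input and translation, it cancels in the difference, which is why explicit disentanglement of class-relevant and background factors improves the maps.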
Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework
Wei Lou, Haofeng Li, Guanbin Li, Xiaoguang Han, Xiang Wan
Keywords: Image segmentation; training; labeling; annotations; histopathology; generative adversarial networks; big data; deep learning (artificial intelligence); medical image processing; supervised learning; unsupervised learning; annotated samples; augmented samples; consistency-based patch selection; deep neural networks; image patches; label-efficient nuclei segmentation framework; nuclei instance segmentation; pathology images; semi-supervised learning; single-image GAN; nuclei segmentation; sample selection; label-efficient learning; Cell Nucleus; Neural Networks, Computer; Supervised Machine Learning
Abstract: Recently, deep neural networks, which require a large amount of annotated samples, have been widely applied to nuclei instance segmentation of H&E stained pathology images. However, it is inefficient and unnecessary to label all pixels for a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial for training. We then introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our framework trains an existing segmentation model with the above augmented samples. Experimental results show that our method can match the performance of a fully supervised baseline while annotating less than 5% of the pixels on some benchmarks.
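A simple instantiation of consistency-based selection (not the paper's exact criterion) is to rank unlabeled patches by how much a model's predictions vary under random augmentation and send the least consistent patches to the annotator, as sketched below.

```python
import torch

@torch.no_grad()
def rank_patches_by_inconsistency(model, patches, n_aug=8):
    """Score unlabeled patches by prediction variance under augmentation.

    patches is a list of (C, H, W) tensors; the model is assumed to return a
    per-pixel foreground logit map. Higher variance = less consistent =
    presumed more informative to annotate.
    """
    scores = []
    for patch in patches:
        preds = []
        for _ in range(n_aug):
            flip = bool(torch.rand(1) < 0.5)
            aug = torch.flip(patch, dims=[-1]) if flip else patch
            p = torch.sigmoid(model(aug.unsqueeze(0)))
            if flip:
                p = torch.flip(p, dims=[-1])   # undo the flip to align predictions
            preds.append(p)
        scores.append(torch.stack(preds).var(dim=0).mean().item())
    # Indices sorted from least to most consistent; annotate the front first.
    return sorted(range(len(patches)), key=lambda i: -scores[i])
```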
Semantic Decomposition Network With Contrastive and Structural Constraints for Dental Plaque Segmentation
Jian Shi, Baoli Sun, Xinchen Ye, Zhihui Wang, Xiaolong Luo, Jin Liu, Heli Gao, Haojie Li
Keywords: Dentistry; image segmentation; teeth; semantics; task analysis; medical diagnostic imaging; shape; feature extraction; image representation; learning (artificial intelligence); medical image processing; object detection; patient diagnosis; dental plaque segmentation; category-specific features; Dental Plaque Segmentation dataset; semantic decomposition network; semantic-blur regions; semantic decomposition; contrastive constraint; structural constraint; Humans; Dental Plaque; Staining and Labeling
Abstract: Segmenting dental plaque from images of medical reagent staining provides valuable information for diagnosis and for determining a follow-up treatment plan. However, accurate dental plaque segmentation is a challenging task: it requires identifying teeth and dental plaque subject to semantic-blur regions (i.e., confused boundaries in border regions between teeth and dental plaque) and complex variations of instance shapes, which are not fully addressed by existing methods. We therefore propose a semantic decomposition network (SDNet) that introduces two single-task branches to separately address the segmentation of teeth and dental plaque and designs additional constraints to learn category-specific features for each branch, thus facilitating semantic decomposition and improving dental plaque segmentation. Specifically, SDNet learns two separate segmentation branches for teeth and dental plaque in a divide-and-conquer manner to decouple the entangled relation between them. Each branch that specializes in one category tends to yield accurate segmentation. To help the two branches focus on category-specific features, two constraint modules are further proposed: 1) a contrastive constraint module (CCM) that learns discriminative feature representations by maximizing the distance between different category representations, so as to reduce the negative impact of semantic-blur regions on feature extraction; and 2) a structural constraint module (SCM) that provides complete structural information for dental plaque of various shapes through the supervision of a boundary-aware geometric constraint. In addition, we construct a large-scale open-source Stained Dental Plaque Segmentation dataset (SDPSeg), which provides high-quality annotations for teeth and dental plaque. Experimental results on SDPSeg show that SDNet achieves state-of-the-art performance.
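One way to realize a CCM-style contrastive constraint, sketched here under assumptions, is to hinge the cosine similarity between masked-average category prototypes; the prototype construction and margin below are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_constraint(features, teeth_mask, plaque_mask, margin=0.5):
    """Pull apart the teeth and plaque category prototypes.

    features: (B, C, H, W) feature map; masks: (B, 1, H, W) in {0, 1}.
    Each category prototype is the mask-weighted average feature, and the
    loss penalizes prototype cosine similarity above the margin.
    """
    def prototype(mask):
        w = mask / (mask.sum(dim=(2, 3), keepdim=True) + 1e-6)
        return (features * w).sum(dim=(2, 3))           # (B, C) category prototype

    sim = F.cosine_similarity(prototype(teeth_mask), prototype(plaque_mask), dim=1)
    return F.relu(sim - margin).mean()                  # push similarity below margin
```

Penalizing prototype similarity forces the branches to respond differently to pixels in the blurry tooth/plaque border regions, which is where vanilla segmentation features tend to collapse.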
MR Image Denoising and Super-Resolution Using Regularized Reverse Diffusion
Hyungjin Chung, Eun Sun Lee, Jong Chul Ye
Keywords: Noise reduction; noise measurement; magnetic resonance imaging; mathematical models; training; diffusion processes; numerical models; biomedical MRI; deep learning (artificial intelligence); image denoising; image resolution; least mean squares methods; liver; mean square error methods; medical image processing; signal denoising; complex noise distributions; coronal knee scans; MMSE denoisers; MR image denoising; out-of-distribution data; regularized reverse diffusion; score-based reverse diffusion sampling; super-resolution; parametric noise models; in vivo liver MRI data; diffusion model; stochastic contraction; denoising; MRI; Humans; Magnetic Resonance Imaging; Neural Networks, Computer; Artifacts
Abstract: Patient scans from MRI often suffer from noise, which hampers the diagnostic capability of such images. As a way to mitigate such artifacts, denoising is widely studied, both within the medical imaging community and beyond it as a general subject. However, recent deep neural network-based approaches mostly rely on minimum mean squared error (MMSE) estimates, which tend to produce blurred output. Moreover, such models suffer when deployed in real-world situations involving out-of-distribution data and complex noise distributions that deviate from the usual parametric noise models. In this work, we propose a new denoising method based on score-based reverse diffusion sampling, which overcomes these drawbacks. Our network, trained only on coronal knee scans, excels even on out-of-distribution in vivo liver MRI data contaminated with a complex mixture of noise. Moreover, we propose a method to enhance the resolution of the denoised image with the same network. With extensive experiments, we show that our method establishes state-of-the-art performance while having desirable properties that prior MMSE denoisers lacked: the extent of denoising can be chosen flexibly, and uncertainty can be quantified.
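A single predictor step of score-based reverse diffusion for a variance-exploding SDE looks as follows; this is the generic Euler-Maruyama step from the score-based generative modeling literature, not the paper's regularized variant, and the noise-level arguments are illustrative.

```python
import torch

@torch.no_grad()
def ve_reverse_step(score_net, x, sigma_t, sigma_prev):
    """One Euler-Maruyama step of variance-exploding reverse diffusion.

    score_net(x, sigma) approximates the score of the noisy data distribution.
    The noise schedule must be traversed in reverse, so sigma_t > sigma_prev
    and the variance shrinkage `step` is positive.
    """
    step = sigma_t ** 2 - sigma_prev ** 2
    x = x + step * score_net(x, sigma_t)        # drift toward the data manifold
    return x + (step ** 0.5) * torch.randn_like(x)  # re-inject matched noise
```

Denoising then starts the reverse chain from the noisy scan rather than from pure noise, and stopping the chain earlier or later is what gives the flexible choice of denoising extent; repeated stochastic runs give the uncertainty estimate.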