Search engine for discovering works of Art, research articles, and books related to Art and Culture

CT Metal Artifact Reduction based on Virtual Generated Artifacts Using Modified pix2pix

View through CrossRef
Abstract
Background: Metal artifacts introduce challenges in image-guided diagnosis and accurate dose calculation.
This study aims to reduce metal artifacts caused by the spinal brace by using virtually generated artifacts and convolutional neural networks, and to compare the performance of this approach with two other methods, namely linear interpolation metal artifact reduction (LIMAR) and normalized metal artifact reduction (NMAR).
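For orientation, LIMAR operates in the projection domain: detector readings shadowed by metal are treated as missing and refilled by linear interpolation within each projection before filtered back projection. The sketch below illustrates that idea only; the toy phantom, angle sampling, and scikit-image transforms are assumptions for illustration, not the study's implementation.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # illustrative angle sampling

def limar(image, metal_mask):
    # Project image and mask, blank the metal trace, interpolate, reconstruct.
    sino = radon(image, theta=theta)
    trace = radon(metal_mask.astype(float), theta=theta) > 0
    corrected = sino.copy()
    for j in range(sino.shape[1]):                # one projection angle at a time
        col, bad = corrected[:, j], trace[:, j]
        if bad.any() and (~bad).any():
            idx = np.arange(col.size)
            col[bad] = np.interp(idx[bad], idx[~bad], col[~bad])
    return iradon(corrected, theta=theta)

# Toy example: a square "metal" insert in the Shepp-Logan phantom.
img = shepp_logan_phantom()
mask = np.zeros_like(img, dtype=bool)
mask[190:200, 190:200] = True
img_metal = img.copy()
img_metal[mask] = 5.0                             # artificially bright metal
restored = limar(img_metal, mask)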
Method: A total of 3,600 CT slices from 60 patients with vertebral metastases were selected.
The spinal cord center was marked in each image, metal masks were added on both sides of the marker to generate artifact-insert CT images, and the CT values of the metal parts were copied into the original CT images to obtain reference CT images.
These images were divided into training (3,000 slices) and test (600 slices) sets.
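A minimal sketch of the mask and reference construction described above, assuming a 2-D CT slice in HU and a manually marked spinal cord center given as (row, column); the offset, radius, and metal HU value are invented illustrative numbers, and the matching artifact-insert slice would additionally require a projection-domain artifact simulation that the abstract does not detail.

import numpy as np

def add_metal_reference(ct_slice, centre, offset_px=60, radius_px=6, metal_hu=3000.0):
    # Place one circular "metal" disc on each side of the marked spinal cord centre
    # and copy the metal HU value into the original slice to form the reference image.
    rows, cols = np.indices(ct_slice.shape)
    mask = np.zeros(ct_slice.shape, dtype=bool)
    for dc in (-offset_px, offset_px):
        mask |= (rows - centre[0]) ** 2 + (cols - (centre[1] + dc)) ** 2 <= radius_px ** 2
    reference = ct_slice.copy()
    reference[mask] = metal_hu
    return reference, mask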
Modified U-Net and pix2pix architectures were applied to learn the mapping from the artifact-insert images to the reference images.
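As a rough sketch of what a pix2pix-style objective looks like, the PyTorch fragment below trains a conditional GAN with an adversarial term plus an L1 term; the tiny generator and discriminator are placeholders rather than the paper's modified U-Net/pix2pix networks, and lambda_l1 = 100 is the default from the original pix2pix paper, not a value reported here.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):            # stand-in for the modified U-Net generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):        # stand-in for a PatchGAN-style critic
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))   # patch-wise real/fake logits
    def forward(self, artifact, candidate):
        return self.net(torch.cat([artifact, candidate], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

artifact = torch.randn(2, 1, 64, 64)       # artifact-insert slices (dummy tensors)
reference = torch.randn(2, 1, 64, 64)      # matching reference slices (dummy tensors)

# Discriminator step: real pairs should score high, generated pairs low.
fake = G(artifact).detach()
pred_real, pred_fake = D(artifact, reference), D(artifact, fake)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the reference (L1).
fake = G(artifact)
pred_fake = D(artifact, fake)
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + lambda_l1 * l1(fake, reference)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()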
The mean absolute error (MAE), mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were calculated between the reference CT images and the CT images predicted by LIMAR, NMAR, U-Net, and pix2pix.
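The four metrics are standard image-quality measures; a minimal sketch of how they might be computed on one pair of slices follows, where the assumed HU data range of 4096 and the synthetic arrays are illustrative choices rather than values from the paper (SSIM uses scikit-image's implementation).

import numpy as np
from skimage.metrics import structural_similarity

def mar_metrics(reference, predicted, data_range=4096.0):
    # MAE and MSE in HU, PSNR in dB relative to the assumed data range, plus SSIM.
    diff = predicted.astype(np.float64) - reference.astype(np.float64)
    mae = np.mean(np.abs(diff))
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    ssim = structural_similarity(reference.astype(np.float64),
                                 predicted.astype(np.float64),
                                 data_range=data_range)
    return mae, mse, psnr, ssim

# Synthetic example only; the study's images are not reproduced here.
ref = np.random.uniform(-1000.0, 1000.0, (512, 512))
pred = ref + np.random.normal(0.0, 5.0, ref.shape)
print(mar_metrics(ref, pred))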
The CT values of organs from different images were compared.
Radiotherapy treatment plans for vertebral metastases were designed, and dose calculation was performed.
The dose distribution in different types of images was also compared.
Results: The MAE values between the reference images and those generated by LIMAR, NMAR, U-Net, and pix2pix were 15.02, 16.16, 6.12, and 6.48 HU, respectively, and the corresponding PSNR values were 15.37, 152.70, 158.93, and 65.14 dB, respectively.
Pix2pix restored more texture than U-Net according to the visual comparison.
The average CT values of the liver, spleen, and left and right kidneys in the artifact-insert images were all significantly higher than those in the reference images (p<0.05).
The average CT values of the organs in images processed by the four methods showed no significant differences from those of the organs in the reference images.
The mean dose of planned target volume in the artifact-insert images was significantly lower than that in the reference CT images.
The average γ passing rate (1%, 1 mm) of the artifact-insert images was significantly lower than that of the reference images (95.9±1.4% vs. 99.2±1.4%, p<0.05).
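For reference, the γ passing rate compares two dose grids point by point: a reference point passes if some nearby evaluated point satisfies the combined dose-difference/distance criterion (here 1% of the maximum dose and 1 mm). The brute-force sketch below assumes a common 2-D grid with 1 mm spacing and omits the low-dose threshold that clinical gamma analyses usually apply; the grid spacing and example arrays are assumptions.

import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm=1.0, dose_crit=0.01, dist_crit_mm=1.0):
    # Global gamma: the dose criterion is a fraction of the maximum reference dose.
    delta_d = dose_crit * dose_ref.max()
    search = int(np.ceil(2 * dist_crit_mm / spacing_mm))
    ny, nx = dose_ref.shape
    passed = 0
    for i in range(ny):
        for j in range(nx):
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di ** 2 + dj ** 2) * spacing_mm ** 2
                        diff2 = (dose_eval[ii, jj] - dose_ref[i, j]) ** 2
                        best = min(best, dist2 / dist_crit_mm ** 2 + diff2 / delta_d ** 2)
            passed += best <= 1.0
    return 100.0 * passed / (ny * nx)

# Tiny synthetic example (not the study's dose grids).
ref_dose = np.random.uniform(0.0, 2.0, (40, 40))
eval_dose = ref_dose + np.random.normal(0.0, 0.005, ref_dose.shape)
print(gamma_pass_rate(ref_dose, eval_dose))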
Conclusions: According to the simulated artifact-insert images of the spinal brace, the U-Net and pix2pix deep learning networks can markedly reduce metal artifacts and improve critical structure visualization compared with LIMAR and NMAR.
Pix2pix can restore more texture with the help of a discriminator.
Metal artifacts increase the dose calculation uncertainty in radiotherapy.
The doses calculated from the images produced by U-Net and pix2pix were identical to those calculated from the reference images.

Related Results

Generation Of Dense Urban Features Using Conditional GAN
Abstract This paper discusses the use of conditional Generative Adversarial Networks (GANs) to generate dense urban features in satellite images and evaluate their effectiv...
Dosimetric impact of metal artifact reduction for spinal implants in stereotactic body radiotherapy
Abstract Background Metal artifacts due to spinal implants can affect the accuracy of dose calculation for radiotherapy. However, the dosimetric impact of metal artifact r...
VR 101
Today we call many things “virtual.” Virtual corporations connect teams of workers located across the country. In leisure time, people form clubs based on shared interests in polit...
Defining "Virtual Community"
Defining "Virtual Community"
The rise of the Internet has spawned the prolific use of the adjective “virtual.” Both the popular press and scholarly researchers have written about virtual work, virtual teams, v...
Defining "Virtual Community"
Defining "Virtual Community"
The rise of the Internet has spawned the prolific use of the adjective “virtual.” Both the popular press and scholarly researchers have written about virtual work, virtual teams, v...
Manajemen Komunikasi Event Organizer Virtual
Abstract. This research is motivated by the continuity of event organizers in holding shows that cannot be done properly due to pandemic conditions and as a result they choose to be...
