HADT : Image Super-Resolution Restoration Using Hybrid Attention-Dense Connected Transformer Networks
Abstract
Image super-resolution (SR) plays a vital role in vision tasks, where Transformer-based methods outperform conventional convolutional neural networks. Existing work usually relies on residual connections to improve performance, but this type of connection provides limited information transfer within a block. In addition, to improve feature extraction, existing work usually restricts the self-attention computation to a single window, which means that Transformer-based networks can only exploit feature information within a limited spatial range. To address these challenges, this paper proposes novel Hybrid Attention-Dense Connected Transformer Networks (HADT) to better utilise potential feature information. HADT is constructed by stacking Attentional Transformer Blocks (ATB), each of which contains an Effective Dense Transformer Block (EDTB) and a Hybrid Attention Block (HAB). EDTB combines dense connectivity with the Swin Transformer to enhance feature transfer and improve the model's representational power, while HAB performs cross-window information interaction and joint feature modelling for better visual quality. Experiments show that our method is effective on SR tasks with magnification factors of 2, 3, and 4. For example, on the Urban100 dataset with a magnification factor of 4, our method achieves a PSNR value 0.15 dB higher than the previous method and reconstructs more detailed textures.
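The abstract contrasts residual connections (each block sees only the previous block's output) with the dense connectivity used in EDTB (each block sees the outputs of all earlier blocks). The sketch below is a minimal, hypothetical illustration of that wiring pattern only; the block internals, names, and sizes are placeholders, not the paper's architecture.

```python
# Hypothetical sketch of the dense-connectivity pattern the abstract
# attributes to EDTB: block i receives the concatenation of the input
# and ALL previous block outputs, not just the preceding one.
# transformer_block is a toy stand-in, not a real Swin Transformer layer.

def transformer_block(features):
    """Stand-in for a Swin-Transformer-style block: a trivial
    per-element transform so the wiring below is runnable."""
    return [f + 1 for f in features]

def dense_stack(x, num_blocks=3):
    """Pass x through num_blocks blocks with dense connections:
    each block sees the concatenation of x and every earlier
    block's output, so its input width grows with depth."""
    collected = [x]                                      # all feature maps so far
    for _ in range(num_blocks):
        fused = [v for feat in collected for v in feat]  # concatenate
        collected.append(transformer_block(fused))
    return collected[-1]

# The input to block i doubles in width here: block 0 sees len(x)
# features, block 1 sees 2*len(x), block 2 sees 4*len(x), and so on.
print(len(dense_stack([0.0, 0.0], num_blocks=3)))  # → 8
```

With a residual connection each block's input width would stay constant; the growing `fused` list is what lets later blocks reuse early features directly, which is the information-transfer benefit the abstract claims over residual linking.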
Related Results
Automatic Load Sharing of Transformer
The transformer plays a major role in the power system. It works 24 hours a day and supplies power to the load. When the transformer is overloaded, its windings overheat, which lea...
Quantitative nanoscale imaging of synaptic protein organization
The arrival of super-resolution techniques has driven researchers to explore biological areas that were unreachable before. Such techniques not only allowed improvements in spat...
Enhancing Real-Time Video Processing With Artificial Intelligence: Overcoming Resolution Loss, Motion Artifacts, And Temporal Inconsistencies
Purpose: Traditional video processing techniques often struggle with critical challenges such as low resolution, motion artifacts, and temporal inconsistencies, especially in real-...
SVTSR: Image Super-Resolution Using Scattering Vision Transformer
Vision transformers have garnered substantial attention and attained impressive performance in image super-resolution tasks. Nevertheless, these networks face chal...
Analysis of the Effect of Operational Age on Distribution Transformer Capacity Degradation at PT PLN (Persero)
One cause of transformer interruptions is loading that exceeds the transformer's capacity. A state of continuous overload will affect the age of the transformer and r...
Transfer learning for Antarctic bed topography super-resolution
High-fidelity maps of Antarctica's subglacial bed topography constitute a critical input into a range of cryospheric models. For instance, ice flow models, which inform high-stakes...

