---
pipeline_tag: image-to-image
tags:
- HiT-SR
- image super-resolution
- transformer
- efficient transformer
---
<h1>
 HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution
 </h1>

<h3><a href="https://github.com/XiangZ-0/HiT-SR">[Github]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EVsrOr1-PFFMsXxiRHEmKeoBSH6DPkTuN2GRmEYsl9bvDQ?e=f9wGUO">[Paper]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EYmRy-QOjPdFsMRT_ElKQqABYzoIIfDtkt9hofZ5YY_GjQ?e=2Iapqf">[Supp]</a> | <a href="https://www.youtube.com/watch?v=9rO0pjmmjZg">[Video]</a> | <a href="https://1drv.ms/f/c/de821e161e64ce08/EuE6xW-sN-hFgkIa6J-Y8gkB9b4vDQZQ01r1ZP1lmzM0vQ?e=aIRfCQ">[Visual Results]</a> </h3>

HiT-SR is a general strategy for improving transformer-based SR methods. We apply HiT-SR to [SwinIR-Light](https://github.com/JingyunLiang/SwinIR), [SwinIR-NG](https://github.com/rami0205/NGramSwin), and [SRFormer-Light](https://github.com/HVision-NKU/SRFormer), yielding our HiT-SIR, HiT-SNG, and HiT-SRF models. Compared with the original architectures, the improved models achieve better SR performance while reducing computational cost.

Paper: https://huggingface.co/papers/2407.05878

🤗 Please refer to https://huggingface.co/XiangZ/hit-sr for usage.
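
As a quick starting point, the sketch below shows one way to fetch a pretrained checkpoint from the Hub; the checkpoint filename and the `HiT_SRF` model class are placeholders (the model definitions live in the GitHub repository), so please follow the official usage instructions linked above for the exact names.

```python
# Minimal download-and-load sketch. The filename "HiT-SRF-4x.pth" and the
# HiT_SRF class are hypothetical placeholders; see the linked model card
# and GitHub repo for the actual file names and model code.
import torch
from huggingface_hub import hf_hub_download

# Download a pretrained checkpoint from the Hugging Face Hub.
ckpt_path = hf_hub_download(repo_id="XiangZ/hit-sr", filename="HiT-SRF-4x.pth")
state_dict = torch.load(ckpt_path, map_location="cpu")

# The model definition comes from the HiT-SR GitHub code, e.g.:
# from hit_sr import HiT_SRF          # placeholder import
# model = HiT_SRF(upscale=4)
# model.load_state_dict(state_dict)
# model.eval()
```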