Kernel-mask Knowledge Distillation (KKD)
- (2022/05/20) KKDnet
In this paper, we propose Kernel-mask Knowledge Distillation (KKD), a novel distillation method for efficient and accurate arbitrary-shaped text detection.
The overall network architecture diagram will be uploaded after publication.
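As a rough, illustrative sketch only (not the repository's actual implementation), the idea of kernel-mask distillation can be expressed as a loss that pushes the student's predicted kernel masks toward the teacher's soft kernel masks. The function name, the temperature parameter, and the Dice term below are assumptions made for illustration:

import torch
import torch.nn.functional as F

def kernel_mask_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # Hypothetical KKD-style loss: the student's kernel-mask logits are trained
    # to match the teacher's softened kernel-mask predictions.
    # Assumed shapes: (N, 1, H, W) for a single kernel map.
    with torch.no_grad():
        teacher_prob = torch.sigmoid(teacher_logits / temperature)  # soft targets, no gradient
    student_prob = torch.sigmoid(student_logits / temperature)

    # Pixel-wise binary cross-entropy against the teacher's soft masks.
    bce = F.binary_cross_entropy(student_prob, teacher_prob)

    # Dice term to emphasize mask overlap rather than per-pixel agreement.
    inter = (student_prob * teacher_prob).sum()
    dice = 1.0 - 2.0 * inter / (student_prob.sum() + teacher_prob.sum() + 1e-6)

    return bce + dice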
First, clone the repository locally:
git clone https://github.com/giganticpower/KKDnet.git
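Then enter the project directory:
cd KKDnet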
Requirements:
PyTorch 1.1.0+
torchvision 0.3.0+

Install the remaining dependencies:
pip install -r requirement.txt
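To quickly check that the installed versions satisfy the requirements above, you can run:
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"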
Compile the post-processing code:
sh ./compile.sh
Please refer to dataset/README.md for dataset preparation.
First stage: train the teacher network
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py ${CONFIG_FILE}
For example:
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py config/kkd/r50_ctw.py
Second stage: knowledge distillation (training the student network)
CUDA_VISIBLE_DEVICES=0,1,2,3 python knowledge_distillation_train.py
NOTE: in knowledge_distillation_train.py, you should choose the dataloader at lines 162-184 of the code.
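For orientation only, the second stage conceptually keeps the stage-one teacher frozen and trains the student with its usual detection loss plus the distillation term. The sketch below is an assumption, not the actual contents of knowledge_distillation_train.py; build_teacher, build_student, detection_loss, train_loader, and the checkpoint path are placeholder names, and kernel_mask_distillation_loss refers to the earlier sketch:

import torch

teacher = build_teacher()                                   # placeholder: stage-one teacher model
teacher.load_state_dict(torch.load('path/to/teacher_checkpoint.pth.tar'))  # placeholder path
teacher.eval()                                              # teacher stays frozen during distillation

student = build_student()                                   # placeholder: lightweight student model
optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)

for images, targets in train_loader:                        # the dataloader chosen at lines 162-184
    with torch.no_grad():
        teacher_kernels = teacher(images)                   # soft kernel masks from the frozen teacher
    student_kernels = student(images)

    # Combine the student's normal detection loss with the distillation term.
    loss = detection_loss(student_kernels, targets) + \
           kernel_mask_distillation_loss(student_kernels, teacher_kernels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()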
Single-checkpoint testing:
python test.py ${CONFIG_FILE} ${CHECKPOINT_FILE}
For example:
python test.py config/kkd/r18_ctw.py checkpoints/checkpoint.pth.tar
Batch checkpoint testing:
python batch_eval.py ${CONFIG_FILE}
For example:
python batch_eval.py config/kkd/r18_ctw.py
Notes:
1. For batch checkpoint testing, change the checkpoint path in batch_eval.py (see the sketch below).
2. Change the evaluation dataset at lines 151-160 of test.py. (This is a quick prediction entry we designed for convenience.)
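For reference only, the effect of batch evaluation can also be reproduced with a short loop that calls test.py once per checkpoint; the checkpoint directory and glob pattern below are assumptions, not what batch_eval.py actually does:

import glob
import subprocess

# Hypothetical sketch: evaluate every checkpoint in a directory with test.py.
config = 'config/kkd/r18_ctw.py'
for ckpt in sorted(glob.glob('checkpoints/*.pth.tar')):
    subprocess.run(['python', 'test.py', config, ckpt], check=True)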
If this work helps your research, please cite the related works in your publications:
Stay tuned
This project is developed and maintained by Fuzhou University.