
Evaluation results not matching as per the paper #24

Open
proxymallick opened this issue Aug 9, 2023 · 4 comments

proxymallick commented Aug 9, 2023

Hi,
I have a quick question related to the results shown in Table 1 and Table 2 of the paper.

  1. I trained the model without any changes, but on a single-GPU machine, for exactly the number of iterations mentioned in the log file, and I am not getting results close to the claimed ones. Do you think the performance drop is because of switching from a multi-GPU to a single-GPU run?
  2. For your information, here are my results alongside those from the log file you provided on this GitHub README page.
     My results when I run it for exactly 32k iterations:
| mAP | WI | AOSE | AP@K | P@K | R@K | AP@U | P@U | R@U |
|------|------|------|------|------|------|------|------|------|
| 76.79 | 0.00 | 0.00 | 76.79 | 18.72 | 93.44 | 77.03 | 15.92 | 92.86 |

Your Result

| mAP | WI | AOSE | AP@K | P@K | R@K | AP@U | P@U | R@U |
|------|------|------|------|------|------|------|------|------|
| 80.02 | 0.00 | 0.00 | 80.02 | 32.70 | 91.74 | 76.66 | 33.46 | 88.64 |

This is what I get when I run:

```
python tools/train_net.py --num-gpus 1 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml
```
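In case it is relevant: since one GPU changes the effective batch size, I also wondered whether the schedule needs rescaling. A rough sketch of what I mean, following detectron2's usual linear-scaling convention and assuming train_net.py forwards trailing KEY VALUE pairs to the config like detectron2's reference script (the values below are illustrative guesses, not taken from your config):

```
# Illustrative only: if IMS_PER_BATCH must shrink to fit on one GPU,
# scale BASE_LR down by the same factor (linear-scaling rule). The real
# values in faster_rcnn_R_50_FPN_3x_opendet.yaml may differ.
python tools/train_net.py --num-gpus 1 \
    --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
    SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
```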

  3. Also, what seed did you use? I see that cfg.SEED is set to -1 to achieve non-deterministic behaviour, so each run of detectron2 uses a randomly generated seed (a sketch of how I would pin it is below).
     [Screenshot from 2023-08-11 08-55-21]
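For reproducibility comparisons, this is how I would pin the seed, using the same kind of trailing KEY VALUE override as above (42 is an arbitrary choice):

```
# Illustrative only: fix the config's SEED (default -1 means a fresh
# random seed on every run) so repeated runs are comparable.
python tools/train_net.py --num-gpus 1 \
    --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
    SEED 42
```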

Can you please help me out? Thank you
Regards
Prakash

csuhan (Owner) commented Oct 3, 2023

Hi @proxymallick, I did not test our method with one GPU, but I think the number of GPUs may affect the final results. Since your mAP is ~3 points lower than ours, you could try increasing the training iterations to match our closed-set mAP. I do not think the SEED is a critical factor.
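For example, a longer schedule can be set with the same command-line overrides; the values below are only illustrative, not a tested recommendation:

```
# Illustrative only: extend training roughly 1.5x beyond the reported
# 32k iterations and shift the LR decay steps accordingly.
python tools/train_net.py --num-gpus 1 \
    --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
    SOLVER.MAX_ITER 48000 SOLVER.STEPS "(36000, 44000)"
```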

HanJW2000 commented

Hi @proxymallick, were you able to solve this? I trained on 4 GPUs and also could not reproduce the final results from the paper. My mAP = 79.63.

proxymallick (Author) commented

Hi @csuhan, thank you very much for the reply. Yes, when I used 4 or 8 GPUs I could get very close to the reported number, reaching an mAP of 79.6; however, I could not reach 80.02. Thanks.
Also, just to let you know, getting your code to run was quite hard; I had to change a few lines and library versions to get it running.

proxymallick (Author) commented Dec 13, 2023

@Drios-strawberry Yes, I am getting similar results on 8 GPUs, i.e., mAP = 79.63.
