Wrong bounding boxes

#13
by aleablu - opened

Hi all, thank you team for the amazing work. However, I noticed that if I run the Object Detection task on the demo space for florence-2-large I get correct results, but if I download the sample_inference notebook and run it on the same image as the online demo I get completely different results, with wrong bounding boxes that bear no relation to the ones produced by the online demo. For example:
TASK: <OD>

  • output from hf demo: {'bboxes': [[53.54999923706055, 0.675000011920929, 824.8499755859375, 1347.9749755859375], [61.64999771118164, 108.67500305175781, 815.8499755859375, 1110.375], [189.4499969482422, 1042.875, 709.6499633789062, 1347.9749755859375]], 'labels': ['man', 'shirt', 'trousers']}

  • output from sample_inference notebook run locally on my machine (it has an A40 gpu): {'bboxes': [[763.6499633789062, 922.7250366210938, 50.849998474121094, 1222.425048828125], [654.75, 1100.925048828125, 229.9499969482422, 276.07501220703125], [535.9500122070312, 945.6749877929688, 563.8499755859375, 1222.425048828125]], 'labels': ['man', 'shirt', 'trousers']}

The image is the same, and it also happens with the sample car image provided in the notebook. How is this possible?!

I do not modify the notebook's code at all.
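For reference, the code I'm running is essentially the object-detection snippet from the model card / notebook (simplified here, with the image path as a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True
)

task = "<OD>"
image = Image.open("sample.jpg")  # placeholder path; same image as in the demo

inputs = processor(text=task, images=image, return_tensors="pt").to(device)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
    do_sample=False,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# post_process_generation converts the model's location tokens into pixel
# coordinates, so it needs the original image size.
result = processor.post_process_generation(
    generated_text, task=task, image_size=(image.width, image.height)
)
print(result[task])
```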

I'm getting the same problem: the bounding boxes are seriously wrong, even with simple images and the sample code.

Microsoft org

Hi, can you check whether your transformers and torch versions are consistent with the demo?
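For example, something like this prints the installed versions so they can be compared against the demo Space's environment:

```python
import torch
import transformers

# Print the locally installed versions to compare them against
# the versions used by the demo Space.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
```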

The bounding boxes from these VLMs seem to be quite cryptic. I got a hint: try the supervision library; the bounding boxes are scaled to the model's coordinate space, and you need to rescale them back to the image.

I solved the issue for PaliGemma and am still trying to solve it for Florence-2.
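As a rough illustration of what I mean by rescaling (a sketch only; the sizes below are made up, and supervision has helpers that do the same conversion):

```python
def rescale_bboxes(bboxes, from_wh, to_wh):
    """Scale [x1, y1, x2, y2] boxes from one resolution (from_wh) to another (to_wh)."""
    sx = to_wh[0] / from_wh[0]
    sy = to_wh[1] / from_wh[1]
    return [[x1 * sx, y1 * sy, x2 * sx, y2 * sy] for x1, y1, x2, y2 in bboxes]

# Made-up example: a box produced in a 512x512 processing space
# mapped back onto an original 1000x1350 image.
print(rescale_bboxes([[100.0, 50.0, 400.0, 300.0]], from_wh=(512, 512), to_wh=(1000, 1350)))
```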

Hi, can you check whether your transformers and torch versions are consistent with the demo?

@haipingwu so which versions of torch and transformers are recommended?
