Training a Detr object detection model using Hugging Face transformers and datasets

The Hugging Face transformers library has expanded beyond its original focus on Natural Language Processing tasks to include a growing number of models covering a range of computer vision tasks. This blog post will look at how we can train an object detection model using the Hugging Face transformers and datasets libraries.

What is object detection?

Object detection is the task of predicting the location (usually as bounding boxes) and class of objects contained within an image.

Object detection can be helpful in applications where you want to know not only whether a thing is in an image but also where it is (and how many of it there are). Various approaches have been developed over the years for this task, often relying on complex hand-crafted features.

As with other areas of computer vision, there has been an increasing adoption of transformer-based solutions to this task. One model using transformers is the Detr architecture.

What is Detr?

Detr (DEtection TRansformer) is a model architecture introduced in the paper End-to-End Object Detection with Transformers. We won't dig into the architecture in massive detail since this post focuses on the practical use of the model. One thing that is important to note here is that DETR still uses a CNN backbone. More recently, other models such as YOLOS use a transformer backbone too. Currently, however, these fully transformer-based approaches still show a performance gap compared to more traditional techniques (because this is deep learning, 'traditional' refers to stuff from last year, of course).

Using Hugging Face for object detection

There are existing examples for using the Hugging Face transformers library and datasets with the Trainer class to do image classification. There are also example notebooks showing how to fine-tune a Detr model on custom data. However, I didn't find examples that use the datasets library and the Trainer class to manage training for object detection. That combination is what this blog post covers.

Why the datasets library?

You may ask why it is helpful to provide an example of using the datasets library for training an object detection model, i.e. why not use PyTorch data loading directly, for which there are already many object detection training examples?

There are a few reasons why trying to use datasets for this can be helpful. A significant one for me is the close integration between the datasets library and the Hugging Face Hub. Loading a dataset from the Hub often involves just two lines of code (including the import).
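For example, loading a dataset is often as simple as the following (the dataset ID here is a placeholder):

# placeholder example; substitute the ID of a dataset on the Hub
# from datasets import load_dataset
# dataset = load_dataset("some-user/some-dataset")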

Quickly loading a dataset and then using the same library to prepare the dataset for training an object detection model removes some friction. This becomes especially helpful when you are iterating on the process of creating training data, training a model, and creating more training data. In this iterative process, the hub can be used for storing models and datasets at each stage. Having a clear provenance of these changes (without relying on additional tools) is also a benefit of this workflow. This is the kind of pipeline hugit is intended to support (in this case, for image classification models).

Scope of this blog post

At the moment, this is mainly intended to give a quick overview of the steps involved. It isn't intended to be a proper tutorial. If I have time later, I may flesh this out (particularly if other projects I'm working on that use object detection progress further).

Enough talk, let's get started. First, we install the required libraries.

%%capture
!pip install datasets transformers timm wandb rich[jupyter]

I'm a big fan of the rich library, so I almost always have this extension loaded.

%load_ext rich

The next couple of lines get us authenticated with the Hugging Face Hub.

!git config --global credential.helper store
from huggingface_hub import notebook_login
notebook_login()
Login successful
Your token has been saved to /root/.huggingface/token

We'll use Weights and Biases for tracking our model training.

import wandb
wandb.login()
%env WANDB_PROJECT=chapbooks
%env WANDB_ENTITY=davanstrien

Loading the dataset

In this blog post, we'll use a dataset being added to the Hugging Face Hub as part of the BigLAM hackathon. This dataset has configurations for both object detection and image classification, so we'll need to specify which one we want. Since the dataset doesn't define train/test/valid splits for us, we'll grab the training split. I won't provide a full description of the dataset here since it is still in the process of being documented. The tl;dr summary is that it contains images of digitized books with bounding boxes around the illustrations.

from datasets import load_dataset

dataset = load_dataset(
    "biglam/nls_chapbook_illustrations", "illustration-detection", split="train"
)
Reusing dataset nls_chapbook_illustrations (/Users/dvanstrien/.cache/huggingface/datasets/biglam___nls_chapbook_illustrations/illustration-detection/1.0.0/75f355eb0ba564ef120939a78730eb187a4d3eb682e987ed1f682a5bea5466eb)

Let's take a look at one example from this dataset to get a sense of how the data looks.

dataset[0]
{
    'image_id': 4,
    'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080 at 0x7FDD6504FAD0>,
    'width': 600,
    'height': 1080,
    'url': None,
    'date_captured': '',
    'objects': [
        {
            'category_id': 0,
            'image_id': '4',
            'id': 1,
            'area': 110901,
            'bbox': [34.529998779296875, 556.8300170898438, 401.44000244140625, 276.260009765625],
            'segmentation': [
                [
                    34.529998779296875,
                    556.8300170898438,
                    435.9700012207031,
                    556.8300170898438,
                    435.9700012207031,
                    833.0900268554688,
                    34.529998779296875,
                    833.0900268554688
                ]
            ],
            'iscrowd': False
        }
    ]
}

You will see we have some metadata for the image, the image itself, and an objects field that contains the annotations themselves. Looking just at one of the annotations:

{
    "category_id": 0,
    "image_id": "4",
    "id": 1,
    "area": 110901,
    "bbox": [
        34.529998779296875,
        556.8300170898438,
        401.44000244140625,
        276.260009765625,
    ],
    "segmentation": [
        [
            34.529998779296875,
            556.8300170898438,
            435.9700012207031,
            556.8300170898438,
            435.9700012207031,
            833.0900268554688,
            34.529998779296875,
            833.0900268554688,
        ]
    ],
    "iscrowd": False,
}

We see here that we again have some metadata for each annotation. We also have a category_id and a bbox. Some of these fields should look familiar if you've worked with the COCO format before. This will become relevant later, so don't worry if they aren't familiar to you.
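As a quick aside, the bbox field follows the COCO convention of [x_min, y_min, width, height]. We can sanity-check this against the segmentation polygon in the example above (a rough check using the values shown, rounded slightly):

# bbox in COCO format: [x_min, y_min, width, height] (values from the example above)
x_min, y_min, width, height = 34.53, 556.83, 401.44, 276.26
print(x_min + width, y_min + height)  # ~435.97, ~833.09, the polygon's bottom-right corner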

One issue we can run into when training object detection models is stray bounding boxes, i.e. boxes that stretch beyond the edge of the image (here showing up as negative coordinates). We can check for and remove these quite easily. This is some ugly code and there is probably a better way, but it's a quick check, so I'll forgive myself.

from tqdm.auto import tqdm

# collect the indices of any examples that contain a bounding box with negative coordinates
remove_idx = []
for idx, row in tqdm(enumerate(dataset)):
    objects_ = row["objects"]
    for ob in objects_:
        bbox = ob["bbox"]
        negative = [box for box in bbox if box < 0]
        if negative:
            remove_idx.append(idx)
len(remove_idx)
1
keep = [i for i in range(len(dataset)) if i not in remove_idx]
len(keep)
7257

The above code gives us a list of indexes to keep, so we can use the select method to grab those rows.

dataset = dataset.select(keep)
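As an aside, a more idiomatic way to do the same thing would probably be the datasets filter method; a rough sketch (not what I ran above):

# sketch of a filter-based alternative (not run here)
# dataset = dataset.filter(
#     lambda row: all(min(ob["bbox"]) >= 0 for ob in row["objects"])
# )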

We also create a test split. If we were doing this properly, we'd likely want to be a bit more thoughtful about how we do this split.

dataset = dataset.train_test_split(0.1)

Preparing the data

This section of the blog post focuses on getting data ready for an object detection model such as DETR via the datasets library. It is, therefore, also the section that differs most from the other examples, which train models using PyTorch data loaders.

The Feature Extractor

If you have used Hugging Face for natural language tasks, you are probably familiar with using a Tokenizer_for_blah_model when pre-processing text. Often, if you are using a pre-trained model, you will call AutoTokenizer.from_pretrained, passing in the ID of the model you want to fine-tune. This tokenizer then ensures that the tokenization matches the approach used for the pre-trained model.
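For example, for an NLP model you might write something like the following (shown for comparison only, not run here):

# typical tokenizer loading for an NLP model
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")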

The Feature Extractor performs a similar task. Let's look at this more closely. We'll use a pre-trained model for this example and fine-tune it. I also include commented-out code, which shows how you could use the same process with any CNN backbone. This may be useful if you have particular requirements about what backbone to use or if you have a CNN backbone that is already fine-tuned on your domain.

from transformers import DetrFeatureExtractor

model_checkpoint = "facebook/detr-resnet-50"
feature_extractor = DetrFeatureExtractor.from_pretrained(model_checkpoint)

If you wanted to use a different CNN backbone as your starting point, you would instead define a config and a default feature extractor.

# from transformers import DetrConfig
# from transformers import DetrFeatureExtractor
# config = DetrConfig()  # configure the model architecture (e.g. the backbone) here
# feature_extractor = DetrFeatureExtractor()

What does the feature extractor do?

To check what the feature extractor does, we can make use of rich's handy inspect function.

from rich import inspect

inspect(feature_extractor, methods=True, dunder=True)
╭─ DetrFeatureExtractor {   "do_normalize": true,   "do_resize": true,   "feature_extractor_type": "DetrFeatureEx─╮
 def (images: Union[PIL.Image.Image, numpy.ndarray, ForwardRef('torch.Tensor'), List[PIL.Image.Image],           
 List[numpy.ndarray], List[ForwardRef('torch.Tensor')]], annotations: Union[List[Dict], List[List[Dict]]] =      
 None, return_segmentation_masks: Union[bool, NoneType] = False, masks_path: Union[pathlib.Path, NoneType] =     
 None, pad_and_return_pixel_mask: Union[bool, NoneType] = True, return_tensors: Union[str,                       
 transformers.utils.generic.TensorType, NoneType] = None, **kwargs) ->                                           
 transformers.feature_extraction_utils.BatchFeature:                                                             
                                                                                                                 
 Constructs a DETR feature extractor.                                                                            
                                                                                                                 
                _auto_class = None                                                                               
                   __dict__ = {                                                                                  
                                  '_processor_class': None,                                                      
                                  'feature_extractor_type': 'DetrFeatureExtractor',                              
                                  'format': 'coco_detection',                                                    
                                  'do_resize': True,                                                             
                                  'size': 800,                                                                   
                                  'max_size': 1333,                                                              
                                  'do_normalize': True,                                                          
                                  'image_mean': [0.485, 0.456, 0.406],                                           
                                  'image_std': [0.229, 0.224, 0.225]                                             
                              }                                                                                  
               do_normalize = True                                                                               
                  do_resize = True                                                                               
                    __doc__ = '\n    Constructs a DETR feature extractor.\n\n    This feature extractor inherits 
                              from [`FeatureExtractionMixin`] which contains most of the main methods. Users\n   
                              should refer to this superclass for more information regarding those               
                              methods.\n\n\n    Args:\n        format (`str`, *optional*, defaults to            
                              `"coco_detection"`):\n            Data format of the annotations. One of           
                              "coco_detection" or "coco_panoptic".\n        do_resize (`bool`, *optional*,       
                              defaults to `True`):\n            Whether to resize the input to a certain         
                              `size`.\n        size (`int`, *optional*, defaults to 800):\n            Resize    
                              the input to the given size. Only has an effect if `do_resize` is set to `True`.   
                              If size is a\n            sequence like `(width, height)`, output size will be     
                              matched to this. If size is an int, smaller edge of\n            the image will be 
                              matched to this number. i.e, if `height > width`, then image will be rescaled to   
                              `(size *\n            height / width, size)`.\n        max_size (`int`,            
                              *optional*, defaults to `1333`):\n            The largest size an image dimension  
                              can have (otherwise it\'s capped). Only has an effect if `do_resize` is\n          
                              set to `True`.\n        do_normalize (`bool`, *optional*, defaults to `True`):\n   
                              Whether or not to normalize the input with mean and standard deviation.\n          
                              image_mean (`int`, *optional*, defaults to `[0.485, 0.456, 0.406]`):\n             
                              The sequence of means for each channel, to be used when normalizing images.        
                              Defaults to the ImageNet mean.\n        image_std (`int`, *optional*, defaults to  
                              `[0.229, 0.224, 0.225]`):\n            The sequence of standard deviations for     
                              each channel, to be used when normalizing images. Defaults to the\n                
                              ImageNet std.\n    '                                                               
     feature_extractor_type = 'DetrFeatureExtractor'                                                             
                     format = 'coco_detection'                                                                   
                 image_mean = [0.485, 0.456, 0.406]                                                              
                  image_std = [0.229, 0.224, 0.225]                                                              
                   max_size = 1333                                                                               
          model_input_names = ['pixel_values', 'pixel_mask']                                                     
                 __module__ = 'transformers.models.detr.feature_extraction_detr'                                 
           _processor_class = None                                                                               
                       size = 800                                                                                
                __weakref__ = None                                                                               
                   __call__ = def __call__(images: Union[PIL.Image.Image, numpy.ndarray,                         
                              ForwardRef('torch.Tensor'), List[PIL.Image.Image], List[numpy.ndarray],            
                              List[ForwardRef('torch.Tensor')]], annotations: Union[List[Dict],                  
                              List[List[Dict]]] = None, return_segmentation_masks: Union[bool, NoneType] =       
                              False, masks_path: Union[pathlib.Path, NoneType] = None,                           
                              pad_and_return_pixel_mask: Union[bool, NoneType] = True, return_tensors:           
                              Union[str, transformers.utils.generic.TensorType, NoneType] = None, **kwargs) ->   
                              transformers.feature_extraction_utils.BatchFeature:                                
                              Main method to prepare for the model one or several image(s) and optional          
                              annotations. Images are by default                                                 
                              padded up to the largest image in a batch, and a pixel mask is created that        
                              indicates which pixels are                                                         
                              real/which are padding.                                                            
                center_crop = def center_crop(image, size):                                                      
                              Crops `image` to the given size using a center crop. Note that if the image is too 
                              small to be cropped to the                                                         
                              size given, it will be padded (so the returned result has the size asked).         
                  __class__ = class __class__(format='coco_detection', do_resize=True, size=800, max_size=1333,  
                              do_normalize=True, image_mean=None, image_std=None, **kwargs): Constructs a DETR   
                              feature extractor.                                                                 
  convert_coco_poly_to_mask = def convert_coco_poly_to_mask(segmentations, height, width):                       
                convert_rgb = def convert_rgb(image): Converts `PIL.Image.Image` to RGB format.                  
        _create_or_get_repo = def _create_or_get_repo(repo_path_or_name: Union[str, NoneType] = None, repo_url:  
                              Union[str, NoneType] = None, organization: Union[str, NoneType] = None, private:   
                              bool = None, use_auth_token: Union[bool, str, NoneType] = None) ->                 
                              huggingface_hub.repository.Repository:                                             
                __delattr__ = def __delattr__(name, /): Implement delattr(self, name).                           
                    __dir__ = def __dir__(): Default dir() implementation.                                       
   _ensure_format_supported = def _ensure_format_supported(image):                                               
                     __eq__ = def __eq__(value, /): Return self==value.                                          
                expand_dims = def expand_dims(image): Expands 2-dimensional `image` to 3 dimensions.             
                 __format__ = def __format__(format_spec, /): Default object formatter.                          
                  from_dict = def from_dict(feature_extractor_dict: Dict[str, Any], **kwargs) ->                 
                              ForwardRef('SequenceFeatureExtractor'):                                            
                              Instantiates a type of [`~feature_extraction_utils.FeatureExtractionMixin`] from a 
                              Python dictionary of                                                               
                              parameters.                                                                        
             from_json_file = def from_json_file(json_file: Union[str, os.PathLike]) ->                          
                              ForwardRef('SequenceFeatureExtractor'):                                            
                              Instantiates a feature extractor of type                                           
                              [`~feature_extraction_utils.FeatureExtractionMixin`] from the path to              
                              a JSON file of parameters.                                                         
            from_pretrained = def from_pretrained(pretrained_model_name_or_path: Union[str, os.PathLike],        
                              **kwargs) -> ForwardRef('SequenceFeatureExtractor'):                               
                              Instantiate a type of [`~feature_extraction_utils.FeatureExtractionMixin`] from a  
                              feature extractor, *e.g.* a                                                        
                              derived class of [`SequenceFeatureExtractor`].                                     
                     __ge__ = def __ge__(value, /): Return self>=value.                                          
 get_feature_extractor_dict = def get_feature_extractor_dict(pretrained_model_name_or_path: Union[str,           
                              os.PathLike], **kwargs) -> Tuple[Dict[str, Any], Dict[str, Any]]:                  
                              From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to  
                              be used for instantiating a                                                        
                              feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`]     
                              using `from_dict`.                                                                 
    _get_repo_url_from_name = def _get_repo_url_from_name(repo_name: str, organization: Union[str, NoneType] =   
                              None, private: bool = None, use_auth_token: Union[bool, str, NoneType] = None) ->  
                              str:                                                                               
           __getattribute__ = def __getattribute__(name, /): Return getattr(self, name).                         
                     __gt__ = def __gt__(value, /): Return self>value.                                           
                   __hash__ = def __hash__(): Return hash(self).                                                 
                   __init__ = def __init__(format='coco_detection', do_resize=True, size=800, max_size=1333,     
                              do_normalize=True, image_mean=None, image_std=None, **kwargs): Set elements of     
                              `kwargs` as attributes.                                                            
          __init_subclass__ = def __init_subclass__(...) This method is called when a class is subclassed.       
           _is_valid_format = def _is_valid_format(format):                                                      
                     __le__ = def __le__(value, /): Return self<=value.                                          
                     __lt__ = def __lt__(value, /): Return self<value.                                           
               _max_by_axis = def _max_by_axis(the_list):                                                        
                     __ne__ = def __ne__(value, /): Return self!=value.                                          
                    __new__ = def __new__(*args, **kwargs): Create and return a new object.  See help(type) for  
                              accurate signature.                                                                
                 _normalize = def _normalize(image, mean, std, target=None): Normalize the image with a certain  
                              mean and std.                                                                      
                  normalize = def normalize(image, mean, std):                                                   
                              Normalizes `image` with `mean` and `std`. Note that this will trigger a conversion 
                              of `image` to a NumPy array                                                        
                              if it's a PIL Image.                                                               
  pad_and_create_pixel_mask = def pad_and_create_pixel_mask(pixel_values_list: List[ForwardRef('torch.Tensor')], 
                              return_tensors: Union[str, transformers.utils.generic.TensorType, NoneType] =      
                              None): Pad images up to the largest image in a batch and create a corresponding    
                              `pixel_mask`.                                                                      
               post_process = def post_process(outputs, target_sizes):                                           
                              Converts the output of [`DetrForObjectDetection`] into the format expected by the  
                              COCO api. Only supports                                                            
                              PyTorch.                                                                           
      post_process_instance = def post_process_instance(results, outputs, orig_target_sizes, max_target_sizes,   
                              threshold=0.5):                                                                    
                              Converts the output of [`DetrForSegmentation`] into actual instance segmentation   
                              predictions. Only supports                                                         
                              PyTorch.                                                                           
      post_process_panoptic = def post_process_panoptic(outputs, processed_sizes, target_sizes=None,             
                              is_thing_map=None, threshold=0.85): Converts the output of [`DetrForSegmentation`] 
                              into actual panoptic predictions. Only supports PyTorch.                           
  post_process_segmentation = def post_process_segmentation(outputs, target_sizes, threshold=0.9,                
                              mask_threshold=0.5): Converts the output of [`DetrForSegmentation`] into image     
                              segmentation predictions. Only supports PyTorch.                                   
                    prepare = def prepare(image, target, return_segmentation_masks=False, masks_path=None):      
     prepare_coco_detection = def prepare_coco_detection(image, target, return_segmentation_masks=False):        
                              Convert the target in COCO format into the format expected by DETR.                
      prepare_coco_panoptic = def prepare_coco_panoptic(image, target, masks_path, return_masks=True):           
               _push_to_hub = def _push_to_hub(repo: huggingface_hub.repository.Repository, commit_message:      
                              Union[str, NoneType] = None) -> str:                                               
                push_to_hub = def push_to_hub(repo_path_or_name: Union[str, NoneType] = None, repo_url:          
                              Union[str, NoneType] = None, use_temp_dir: bool = False, commit_message:           
                              Union[str, NoneType] = None, organization: Union[str, NoneType] = None, private:   
                              Union[bool, NoneType] = None, use_auth_token: Union[bool, str, NoneType] = None,   
                              **model_card_kwargs) -> str:                                                       
                              Upload the feature extractor file to the 🤗 Model Hub while synchronizing a local  
                              clone of the repo in                                                               
                              `repo_path_or_name`.                                                               
                 __reduce__ = def __reduce__(): Helper for pickle.                                               
              __reduce_ex__ = def __reduce_ex__(protocol, /): Helper for pickle.                                 
    register_for_auto_class = def register_for_auto_class(auto_class='AutoFeatureExtractor'):                    
                              Register this class with a given auto class. This should only be used for custom   
                              feature extractors as the ones                                                     
                              in the library are already mapped with `AutoFeatureExtractor`.                     
                   __repr__ = def __repr__(): Return repr(self).                                                 
                    _resize = def _resize(image, size, target=None, max_size=None):                              
                              Resize the image to the given size. Size can be min_size (scalar) or (w, h) tuple. 
                              If size is an int, smaller                                                         
                              edge of the image will be matched to this number.                                  
                     resize = def resize(image, size, resample=2, default_to_square=True, max_size=None):        
                              Resizes `image`. Enforces conversion of input to PIL.Image.                        
            save_pretrained = def save_pretrained(save_directory: Union[str, os.PathLike], push_to_hub: bool =   
                              False, **kwargs):                                                                  
                              Save a feature_extractor object to the directory `save_directory`, so that it can  
                              be re-loaded using the                                                             
                              [`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] class method. 
       _set_processor_class = def _set_processor_class(processor_class: str): Sets processor class as an         
                              attribute.                                                                         
                __setattr__ = def __setattr__(name, value, /): Implement setattr(self, name, value).             
                 __sizeof__ = def __sizeof__(): Size of object in memory, in bytes.                              
                    __str__ = def __str__(): Return str(self).                                                   
           __subclasshook__ = def __subclasshook__(...) Abstract classes can override this to customize          
                              issubclass().                                                                      
                    to_dict = def to_dict() -> Dict[str, Any]: Serializes this instance to a Python dictionary.  
               to_json_file = def to_json_file(json_file_path: Union[str, os.PathLike]): Save this instance to a 
                              JSON file.                                                                         
             to_json_string = def to_json_string() -> str: Serializes this instance to a JSON string.            
             to_numpy_array = def to_numpy_array(image, rescale=None, channel_first=True):                       
                              Converts `image` to a numpy array. Optionally rescales it and puts the channel     
                              dimension as the first                                                             
                              dimension.                                                                         
               to_pil_image = def to_pil_image(image, rescale=None):                                             
                              Converts `image` to a PIL Image. Optionally rescales it and puts the channel       
                              dimension back as the last axis if                                                 
                              needed.                                                                            
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

The output of inspect can be pretty verbose, but I often find it a handy tool for quickly getting to grips with a new library or API.

We’ll look at the most critical parts in more detail, but I’ll point out a few things; you’ll see some attributes that will probably sound familiar.

image_mean = [0.485, 0.456, 0.406]                                                              
image_std = [0.229, 0.224, 0.225]

These are the mean and standard deviation used for normalization during the original model training. It's essential to replicate these when we're doing inference or fine-tuning, and having them stored inside the feature_extractor means we don't have to go poking around in papers to work out what the values should be.
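To make that concrete, normalization is just subtracting the channel means and dividing by the channel standard deviations; a rough sketch of the idea (illustrative only, not the library's actual implementation):

# rough sketch of what normalization does
# import numpy as np
# pixels = np.array(image) / 255.0  # scale pixel values to [0, 1]
# normalized = (pixels - np.array(feature_extractor.image_mean)) / np.array(feature_extractor.image_std)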

Another thing to point out is the push_to_hub method. We can store feature extractors on the Hub just as we can store models and tokenizers. Having to track the appropriate pre-processing steps for an image by hand is super annoying. Storing this as we do other model components is much simpler and helps avoid errors that result from tracking these things manually.
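For example, once we're happy with a feature extractor we could push it to the Hub with something like the following (the repo name here is hypothetical):

# feature_extractor.push_to_hub("your-username/detr-chapbooks-feature-extractor")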

The __call__ method of the DetrFeatureExtractor is what we'll use to prepare our images before we pass them into the model, so let's dig into it more closely.

inspect(
    feature_extractor.__call__,
)
╭─ <bound method DetrFeatureExtractor.__call__ of DetrFeatureExtractor {   "do_normalize": true,   "do_resize": t─╮
 def DetrFeatureExtractor.__call__(images: Union[PIL.Image.Image, numpy.ndarray, ForwardRef('torch.Tensor'),     
 List[PIL.Image.Image], List[numpy.ndarray], List[ForwardRef('torch.Tensor')]], annotations: Union[List[Dict],   
 List[List[Dict]]] = None, return_segmentation_masks: Union[bool, NoneType] = False, masks_path:                 
 Union[pathlib.Path, NoneType] = None, pad_and_return_pixel_mask: Union[bool, NoneType] = True, return_tensors:  
 Union[str, transformers.utils.generic.TensorType, NoneType] = None, **kwargs) ->                                
 transformers.feature_extraction_utils.BatchFeature:                                                             
                                                                                                                 
 Main method to prepare for the model one or several image(s) and optional annotations. Images are by default    
 padded up to the largest image in a batch, and a pixel mask is created that indicates which pixels are          
 real/which are padding.                                                                                         
                                                                                                                 
 27 attribute(s) not shown. Run inspect(inspect) for options.                                                    
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Understanding what the __call__ method expects, and how to make sure the datasets library delivers exactly that, is the key thing I needed to work out. What does it expect?

  • images: this can be a list or a single image (and stored in different formats)
  • annotations: this should be of type Union[List[Dict], List[List[Dict]]].

The images part is not too tricky to understand: we can pass in a single image, a NumPy array representing an image, or a list of images or NumPy arrays.
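For instance, calling the feature extractor on a single image without annotations returns a BatchFeature containing pixel_values and a pixel_mask (the model_input_names we saw above); a quick sketch with a dummy image (not run here):

# quick sketch using a dummy image, no annotations
# from PIL import Image
# encoding = feature_extractor(images=Image.new("RGB", (640, 480)), return_tensors="pt")
# encoding.keys()  # expect 'pixel_values' and 'pixel_mask'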

The annotations part is where Python type annotations don't do us many favours: we only know we're expecting a list of dictionaries, but we can safely assume those dictionaries need to have a particular format. Let's see what happens if we pass in an image along with a list containing an arbitrary dictionary.

import io

import requests
from PIL import Image
im = Image.open(
    io.BytesIO(
        requests.get(
            "https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/cute-cat-photos-1593441022.jpg?crop=1.00xw:0.749xh;0,0.154xh&resize=980:*"
        ).content
    )
)
im
labels = [
    {
        "bbox": [
            0.0,
            3,
            3,
            4,
        ]
    }
]
feature_extractor(im, labels)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [23], in <cell line: 1>()
----> 1 feature_extractor(im, labels)

File /usr/local/Caskroom/miniforge/base/envs/blog/lib/python3.9/site-packages/transformers/models/detr/feature_extraction_detr.py:524, in DetrFeatureExtractor.__call__(self, images, annotations, return_segmentation_masks, masks_path, pad_and_return_pixel_mask, return_tensors, **kwargs)
    521                         valid_annotations = True
    523     if not valid_annotations:
--> 524         raise ValueError(
    525             """
    526             Annotations must of type `Dict` (single image) or `List[Dict]` (batch of images). In case of object
    527             detection, each dictionary should contain the keys 'image_id' and 'annotations', with the latter
    528             being a list of annotations in COCO format. In case of panoptic segmentation, each dictionary
    529             should contain the keys 'file_name', 'image_id' and 'segments_info', with the latter being a list
    530             of annotations in COCO format.
    531             """
    532         )
    534 # Check that masks_path has a valid type
    535 if masks_path is not None:

ValueError: 
                    Annotations must of type `Dict` (single image) or `List[Dict]` (batch of images). In case of object
                    detection, each dictionary should contain the keys 'image_id' and 'annotations', with the latter
                    being a list of annotations in COCO format. In case of panoptic segmentation, each dictionary
                    should contain the keys 'file_name', 'image_id' and 'segments_info', with the latter being a list
                    of annotations in COCO format.
                    

We can see that this raises a ValueError. We also get some more information that gives us a clue about where we went wrong. Specifically, the annotations should be a Dict for a single image or a List[Dict] for a batch of images, with the annotations provided in the COCO format. Since our data is already in this format, we should be able to pass in an example.

image = dataset["train"][0]["image"]
image
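Based on the error message above, the annotation for a single image should be a dictionary with image_id and annotations keys, where annotations holds a list of COCO-format dictionaries. A rough sketch of that shape (illustrative values only):

# sketch of the expected annotation format for a single image (illustrative values)
# annotations = {
#     "image_id": 0,
#     "annotations": [
#         {"category_id": 0, "bbox": [34.53, 556.83, 401.44, 276.26], "area": 110901, "iscrowd": 0},
#     ],
# }
# encoding = feature_extractor(images=image, annotations=annotations, return_tensors="pt")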