-
Hello, I have tried to import my dataset in several ways but have not been successful. Could you tell me which module or function I can use to import my own dataset? Thank you!
-
It depends on what you want to do, and the structure of your data.
Do you want to use the data to train a model, or do you want to get the predictions of a trained model on your data?
How is your data formatted? .tiff files? .png? All in the same folder, or different folders? Or on a remote server to download from?
Do you have annotations for your data? If so, how are they formatted?
-
My idea is to use the data to train a cell detection and counting model. I currently have a folder containing training and test subfolders, each with images in .jpeg format and annotations in .json format.
-
Ok! How are the annotations encoded? As in, is it the centroid? A bounding box? Segmentations?
-
These are segmentations using COCO Annotator's polygon tool to select each of the cells I wanted to annotate within each image. Thanks!
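For context (this is background, not from the thread itself): COCO Annotator exports follow the standard COCO JSON layout, where each annotation stores its polygon as a flat `[x1, y1, x2, y2, ...]` list under `"segmentation"`. A minimal sketch of reading that structure with only the standard library (the ids and file name below are made up for illustration):

```python
import json

# A minimal, hand-made COCO-style record (hypothetical ids and file name).
coco_dict = {
    "images": [{"id": 1, "file_name": "cells_001.jpeg", "width": 64, "height": 64}],
    "annotations": [
        {
            "id": 10,
            "image_id": 1,
            "category_id": 1,
            # Flat polygon: [x1, y1, x2, y2, ...] in pixel coordinates.
            "segmentation": [[10.0, 10.0, 30.0, 12.0, 28.0, 30.0, 9.0, 28.0]],
        }
    ],
    "categories": [{"id": 1, "name": "cell"}],
}

data = json.loads(json.dumps(coco_dict))

# Count annotations per image id -- a quick sanity check before training.
counts = {}
for ann in data["annotations"]:
    counts[ann["image_id"]] = counts.get(ann["image_id"], 0) + 1

# Unflatten the polygon into (x, y) vertex pairs.
poly = data["annotations"][0]["segmentation"][0]
points = list(zip(poly[0::2], poly[1::2]))

print(counts)     # {1: 1}
print(points[0])  # (10.0, 10.0)
```

The annotation count per image doubles as a free ground-truth cell count for a counting model.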
-
Perfect!
We don't have anything as convenient as dt.LoadFromCOCO, but I found some old code on my computer for easily loading coco datasets. I'll paste it here; hopefully it can serve as a good baseline to create your pipeline.
```python
import os

import deeptrack as dt
import imageio
import matplotlib.pyplot as plt
import numpy as np
import pycocotools.coco

coco_json_file = r"path/to/coco/file.json"

# Load COCO annotations
coco = pycocotools.coco.COCO(coco_json_file)

# Get all image ids
image_ids = coco.getImgIds()


def coco_to_segmentation_mask(coco, image_id):
    # Merge all per-annotation masks into one binary mask.
    annotation_ids = coco.getAnnIds(imgIds=image_id)
    annotations = coco.loadAnns(annotation_ids)
    masks = [coco.annToMask(annotation) for annotation in annotations]
    mask = np.any(masks, axis=0)
    return mask


def coco_to_image(coco, image_id):
    # Image paths are resolved relative to the annotation file.
    filename = coco.loadImgs(image_id)[0]["file_name"]
    root = os.path.dirname(coco_json_file)
    path = os.path.join(root, filename)
    image = imageio.imread(path)
    return image


root = dt.Arguments(
    image_id=lambda: int(np.random.choice(image_ids)),
)

image_loader = dt.Value(coco_to_image, coco=coco, image_id=root.image_id)
mask_loader = dt.Value(coco_to_segmentation_mask, coco=coco, image_id=root.image_id)

# Augment image
image_pipeline = image_loader >> dt.NormalizeMinMax(0, 1) >> dt.Gaussian(sigma=0.05)

# Merge pipelines
image_and_mask_pipeline = image_pipeline & mask_loader

# Additional augmentations (like flips, crops, etc.)
...

# Test pipeline
image_and_mask_pipeline.update()
image, mask = image_and_mask_pipeline()

# Plot image and mask
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(image)
ax[1].imshow(mask)
plt.show()
```
This is for training a very simple segmentation UNet. You may want to alter coco_to_segmentation_mask to suit your needs.
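Since the goal here is counting as well as detection, one way to alter `coco_to_segmentation_mask` (my suggestion, not part of the code above) is to return a labeled mask instead of a single binary one, so each cell keeps its own integer id and the cell count is simply the largest label. A NumPy-only sketch, assuming you already have the per-annotation binary masks from `coco.annToMask`:

```python
import numpy as np


def masks_to_labeled(masks):
    """Merge per-instance binary masks into one labeled mask.

    Each instance gets a unique integer id (1, 2, ...); background is 0.
    Where instances overlap, later masks overwrite earlier ones.
    """
    labeled = np.zeros(masks[0].shape, dtype=np.int32)
    for i, mask in enumerate(masks, start=1):
        labeled[mask.astype(bool)] = i
    return labeled


# Two toy 4x4 "cells" standing in for coco.annToMask output.
m1 = np.zeros((4, 4), dtype=np.uint8)
m1[0:2, 0:2] = 1
m2 = np.zeros((4, 4), dtype=np.uint8)
m2[2:4, 2:4] = 1

labeled = masks_to_labeled([m1, m2])
n_cells = labeled.max()  # number of instances
print(n_cells)  # 2
```

Dropping this in place of the `np.any(...)` line keeps the rest of the pipeline unchanged while preserving the per-cell information the counting head needs.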