Inconsistent Visualisation? #219
-
Hi,
I'm using the image segmentation template, and I modified the exp_logger so it uses the trainer rather than the evaluator.
I'm doing this because I'm a total newbie on this subject and have only just started learning PyTorch and everything, so I want to see which images are being trained on, and so on.
I'm aware it will slow down the system, but I need to see it so I can understand it better.
code-generator/src/templates/template-vision-segmentation/main.py
Lines 130 to 141 in f137219
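For context, a minimal sketch of what logging the current training batch from the trainer (instead of the evaluator) might look like; this is not the template's actual code, and the handler name, logging frequency, and the `(images, masks)` batch layout are assumptions:

```python
# Hedged sketch: attach a TensorBoard image handler to the trainer so the
# batches being trained on can be inspected. Assumes `trainer` is the ignite
# Engine from the template and each batch is an (images, masks) pair.
from ignite.engine import Events
from torch.utils.tensorboard import SummaryWriter
from torchvision.utils import make_grid as tv_make_grid

tb_writer = SummaryWriter(log_dir="./tb_logs")

def log_training_images(engine):
    images, masks = engine.state.batch          # assumed batch layout
    grid = tv_make_grid(images, nrow=4, normalize=True)
    tb_writer.add_image("training/images", grid, engine.state.iteration)

# log every 50 iterations; logging every iteration would slow training further
trainer.add_event_handler(Events.ITERATION_COMPLETED(every=50), log_training_images)
```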
But I found the images on TensorBoard kind of strange. Please look at the last column.
It looks like it's inverted?
The original image and mask are:
| 1st Col | 3rd Col | 4th Col |
|---|---|---|
| 17 17 | 20 20 | 26 26 |
So my question is, is this by design? I mean like dropout or something?
Thanks,
-
Visualization should create a grid described here:
code-generator/src/templates/template-vision-segmentation/vis.py
Lines 50 to 58 in f137219
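For readers without the repository at hand, the grid logic is roughly of this shape; this is my own approximation rather than the template's code, and the column layout (input image on top, ground-truth mask in the middle, predicted mask at the bottom) is an assumption:

```python
import numpy as np
from PIL import Image

def make_debug_grid(images, gt_masks, pred_masks, render_mask):
    """Approximate sketch of a debug grid: one column per sample, with the
    input image, ground-truth mask and predicted mask stacked vertically."""
    columns = []
    for img, gt, pred in zip(images, gt_masks, pred_masks):
        col = np.concatenate(
            [
                np.asarray(img, dtype=np.uint8),
                np.asarray(render_mask(gt), dtype=np.uint8),
                np.asarray(render_mask(pred), dtype=np.uint8),
            ],
            axis=0,
        )
        columns.append(col)
    return Image.fromarray(np.concatenate(columns, axis=1))
```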
~~Looks like your 4th-column image has a ground truth like that, where the ground-truth mask is inverted. Can you check that explicitly in your data?~~
> The original image and mask are
@idkwid2022 why do you have green GT masks for columns 1 and 3 but a red one for column 4?
-
They should be. I just made them from images I found on Google, so they're not mine. I can send them to you if you want.
-
OK, if you can put them on a shared drive so that I can download them, that would be helpful.
-
I just uploaded them to GitHub: https://github.com/idkwid2022/learning-machine-learning
FYI, I made them by cropping images from Google with an image editor and creating the masks manually with QuPath.
-
Thanks, let me check them and the code from my side
-
It looks like I'm going to need more time to check the methods one by one, but I just tested with the 'suspected images' by removing the other images and masks. Please see the next comment (I think it's better to show it in another comment for clarity).
I see. Can you try to visualize the mask of the "4th Col" image (where you have the background and class 1 inverted) using the `render_mask` method:
code-generator/src/templates/template-vision-segmentation/vis.py
Lines 34 to 39 in f137219

```python
def render_mask(mask):
    if isinstance(mask, np.ndarray):
        mask = Image.fromarray(mask)
    mask.putpalette(vocpallete)
    mask = mask.convert(mode="RGB")
    return mask
```

and see if it is still correct?
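For illustration, checking one suspect mask with `render_mask` outside of training could look something like this; the mask filename is hypothetical, and vis.py is assumed to be importable from the working directory:

```python
import numpy as np
from PIL import Image

from vis import render_mask  # assuming the template's vis.py is on the path

# hypothetical filename for the "4th Col" mask
mask = np.asarray(Image.open("masks/26.png"))

print(np.unique(mask))                        # class IDs actually stored in the file
render_mask(mask).save("check_mask_26.png")   # colorized mask to inspect by eye
```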
Next, you may make a batch of images with the "4th Col" image and run `make_grid` to see if everything is rendered as expected.

The first image in my question is the result shown on TensorBoard, so the visualization in vis.py is working, the grid and all, but somehow on some images it comes out different.
So, you can confirm that the `make_grid` and `render_mask` methods from `vis.py` are working as expected?
-
I suspected that the images were part of the problem, so I removed all the other images except a few.
And I found that images 6, 12, 14, 19, 21, 26, and 38 are all inverted.
On TensorBoard (these are only two sample grids; the others are inverted as well):
I don't understand this. The images were all made by the same process and are all in the same format.
-
Never mind, could you please forget this? It's my mistake.
I added `.quantize(256, method=Image.Quantize.FASTOCTREE)` when loading the mask in data.py.
I think it was because I wanted to show the image while debugging the code and forgot to remove it.
```python
def __getitem__(self, index):
    img_from_file = Image.open(self.images[index]).convert("RGB")
    img = np.asarray(img_from_file)
    assert img is not None, f"Image at '{self.images[index]}' has a problem"
    # leftover debugging call: quantize() rebuilds the palette, so the mask
    # values become new palette indices instead of the original class IDs
    mask_from_file = Image.open(self.masks[index]).quantize(256, method=Image.Quantize.FASTOCTREE)
    mask = np.asarray(mask_from_file)
    assert mask is not None, f"Mask at '{self.masks[index]}' has a problem"
```
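For anyone who runs into the same thing: `quantize()` builds a new palette and returns palette indices rather than the original class IDs, so the mask values get remapped and can end up looking inverted. A quick way to see the effect on a single mask (the path is hypothetical):

```python
import numpy as np
from PIL import Image

path = "masks/26.png"  # hypothetical path to one affected mask

plain = np.asarray(Image.open(path))
quantized = np.asarray(Image.open(path).quantize(256, method=Image.Quantize.FASTOCTREE))

print("values read directly:  ", np.unique(plain))      # original class IDs
print("values after quantize: ", np.unique(quantized))  # new palette indices
```

Dropping the `.quantize(...)` call, as in the original template, makes `np.asarray` return the class IDs stored in the mask file again.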
I'm sorry.
Thank you for everything.
-
OK, sounds good that you could find the issue.
From my side, the visualization was correct.
-
Yes, of course. Now I can see the training progress with the image grids.
It's fun to watch it learning, getting confused, and getting better.
Again, thank you.