When I pass the CT scans and the masks to the loss function (the Jaccard loss from the segmentation_models.pytorch library), the value does not decrease but stays in the range 0.9-1.0 over 50 epochs of training on only a single batch of 32 images. As far as I understand, my network should overfit and the loss should decrease, since I am only training on one batch with a small number of images. However, this does not happen. I also tried more batches with all the data over 100 epochs, but the loss does not decrease there either. Does anyone have an idea what I might have done wrong? Do I have to change anything when passing the masks to my loss function?
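For context, a Jaccard loss pinned near 1 means the predictions have essentially no overlap with the target masks. Below is a minimal NumPy sketch of a soft Jaccard (IoU) loss of the general kind such libraries compute; the function name, shapes, and example arrays are illustrative assumptions, not the library's actual API:

```python
import numpy as np

def soft_jaccard_loss(probs, masks, eps=1e-7):
    """Soft Jaccard (IoU) loss for binary segmentation.

    probs: predicted probabilities in [0, 1], shape (N, H, W)
    masks: ground-truth masks with values in {0, 1}, same shape
    """
    intersection = (probs * masks).sum()
    union = probs.sum() + masks.sum() - intersection
    return 1.0 - (intersection + eps) / (union + eps)

# Ground truth: a 2x2 foreground square in a 4x4 image.
mask = np.zeros((1, 4, 4))
mask[0, 1:3, 1:3] = 1.0

# Perfect prediction: loss is ~0.
print(soft_jaccard_loss(mask, mask))   # ~0.0

# Prediction with zero overlap: loss is ~1.0, which is what a
# value stuck in the 0.9-1.0 range suggests is happening.
pred = np.zeros((1, 4, 4))
pred[0, 0, 0] = 1.0
print(soft_jaccard_loss(pred, mask))   # ~1.0
```

Given that, it is worth checking that the masks really contain values in {0, 1} (not {0, 255} straight from image loading) and that predictions and masks have exactly matching shapes, since a silent broadcast between mismatched shapes can also destroy the effective overlap.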