Medical Image Segmentation & Smart Annotations

Updated: Apr 28, 2021

Supervised deep-learning techniques are learning algorithms able to learn from annotated data serving as examples and ground truth. However, for the learning to be optimal, data annotations must be of high quality, i.e. as precise and as noise-free as possible.


In medical imaging, things are even more complex. Indeed, understanding and annotating a medical image is not easy: only experts such as doctors or biologists can accurately segment the contours of a pathology. In addition, the annotation process remains extremely time-consuming and therefore costly.


Therefore, it is necessary to develop smart algorithms that facilitate and accelerate the work of the annotator. In this article, we illustrate one concrete example: the segmentation of brain tumors in MRI images.


Keywords: Computer Vision, medical imaging, minimal path extraction, Fast Marching, Deep Learning


Please visit our tutorial on Github illustrating all the concepts discussed in this blog post:

https://github.com/imcohen/segment-brain-mri


Medical Computer Vision


Let us first recall some basic concepts about Computer Vision and its applications in medical imaging.


Vision is a sense that allows humans, almost innately, to recognize and locate objects and all kinds of everyday forms. Other vision tasks turn out to be more complex and require training and time to carry out. Let us list a few examples: interpreting a medical image, producing an artistic painting, or even driving a vehicle.


For the machine, vision is obviously not a natural sense. Yet, with a camera and computing capacity, the machine can acquire it. Computer Vision is the science of giving machines the ability to see, reproducing both innate and acquired human visual faculties through image-analysis algorithms.


Medical imaging is one of the most attractive fields of application of Computer Vision, but at the same time one of the most complex. Indeed, reading and understanding medical images requires defining complex rules. The variability of biological structures between patients, or across different pathologies, further increases this difficulty. It is therefore crucial to carefully develop accurate algorithms.


There are many Computer Vision techniques applied to medicine. Here are a few examples:


- segmentation of biological structures (organs, cells, tissues, vessels, fibers, ...)

- quantification of the segmented structures by computing specific metrics

- image registration (movement correction, guided surgery, ...)

- diagnostic aid
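As a small, hypothetical illustration of the quantification step: once a structure has been segmented as a binary mask, simple metrics such as its physical area, or its Dice overlap with a reference mask, are straightforward to compute. The function names below are our own, not from any particular library:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def region_area(mask, pixel_size_mm2=1.0):
    """Physical area of a segmented region, given the area of one pixel."""
    return np.asarray(mask, dtype=bool).sum() * pixel_size_mm2

# Toy example: a 4-pixel prediction against a 6-pixel ground truth.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True
print(dice_coefficient(pred, truth))  # 2*4 / (4+6) = 0.8
```

Metrics of this kind are also what allows a semi-automatic annotation (or a network prediction) to be scored against an expert's reference contour.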


In this article, we are mainly interested in the problem of image segmentation, which consists in automatically drawing the contours of a tumor imaged by MRI.


Medical annotations & supervised learning


Deep learning has experienced a real boom over the past ten years thanks to its many successes in solving problems such as image classification, object detection, and segmentation. Before being able to predict on unseen data, the convolutional neural networks (CNNs) used in Computer Vision are first trained on annotated data for which the ground truth is known.


The annotation or labeling phase is therefore a major step. Annotations must be accurate, elaborate, and varied:


- accurate, to ensure high-quality learning and high algorithm performance


- elaborate, i.e. giving the machine as much information as possible about the task to be performed (a tumor contour is more elaborate than an approximate location)


- varied, to avoid bias in learning and to improve the algorithm's ability to generalize to unseen data


In medical imaging, image annotation presents certain constraints. Here are a few:


1. Need for an expert:


Only a doctor, a biologist, or another trained person has the knowledge and the capacity to read a medical image and therefore to annotate it manually. Outsourcing such a task is therefore difficult and risky.


2. Annotation is tedious and time-consuming:


Deep neural networks must ingest several thousand, or even tens of thousands, of images before they can predict effectively. It is therefore necessary to gather a large amount of data, which is sometimes difficult to obtain in the medical field, especially in pathological imaging. Time and patience are then needed to annotate these huge sets of images.


3. Complexity and variability of medical images:


Annotating the contour of certain biological structures can sometimes be extremely laborious. Depending on the resolution of the images or on the number of entities to be segmented (cells, for example), the task can quickly become tedious.


Smart annotations


Hence the need for smart tools capable of helping the annotator. Traditional Computer Vision algorithms that are unsupervised or based on mathematical models (differential equations, morphological operators, filters, etc.) can be of great help, as they do not require annotated data. They are often semi-automatic: the user interacts with the interface to initialize the algorithm, for example by clicking on a few points of the image.


Automating the annotation thus speeds up the process. It is also an opportunity to design, together with doctors, adapted metrics that meet their criteria for reading images and that can also help the machine learn more easily from the data.


Minimal path extraction algorithms such as the Fast Marching algorithm [1] are a very good example of this principle of semi-automatic annotation. To automatically draw the contours of a tumor on an image acquired by MRI [2] (see figure 1), the user only has to click on a few points around the tumor; the algorithm then extracts the minimal paths between each pair of consecutive points. The notion of minimal path is defined by the metric, or energy, of the image. In our case, in order to extract contours, the energy must be proportional to the image gradients so as to favor the areas of the image with strong gradients.
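To make the idea concrete, here is a minimal sketch of extracting a path between two clicked points. It is not the Fast Marching solver itself but its simplest discrete analogue, Dijkstra's algorithm on a 4-connected pixel grid, with a cost map that is low where the image gradient is strong so that paths are attracted to contours. All names and the choice of cost function are illustrative:

```python
import heapq
import numpy as np

def gradient_cost(image, eps=1e-3):
    """Cost map that is low on strong edges, so minimal paths prefer contours."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return 1.0 / (eps + grad_mag)

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 4-connected pixel grid: a discrete
    stand-in for the continuous minimal-path / Fast Marching formulation."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue  # stale heap entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # Walk back from end to start to recover the path.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy image: a bright horizontal stripe creates strong gradients on row 1.
img = np.zeros((5, 10))
img[2, :] = 1.0
path = minimal_path(gradient_cost(img), (1, 0), (1, 9))
# The extracted path hugs the high-gradient row rather than cutting across
# the flat background, which is exactly the contour-following behavior.
```

The true Fast Marching algorithm solves the continuous eikonal equation and yields sub-pixel geodesics, but the accumulation-of-cost intuition is the same.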


Figure 1: Semi-automatic annotation of a brain tumor. Left: the propagation energy of the geodesic front, proportional to the image gradients. Middle: the points clicked by the user are in yellow and the curve extracted by the algorithm is in red. Right: superposition of the extracted curve (in red) and the ground truth (in blue) on the original MRI image [2].


U-Net for medical image segmentation


The smart annotation method presented above is an efficient way to prepare a labeled image set for segmentation. A U-Net type convolutional neural network can then be trained on these annotated examples to segment tumor images.


The U-Net architecture [3] is well known for biomedical image segmentation. The network consists of an "encoder" part that extracts local image features, followed by a "decoder" part that returns to the initial image resolution and produces a probability map over the image pixels. The highest probabilities correspond to the tumoral parts of the image. The prediction is then compared to the annotation by computing a cost (loss) function. The resulting error is finally back-propagated through the network, and the network weights are updated by stochastic gradient descent (see figure 2).
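As a sketch of that comparison step (not the original U-Net code), the loss typically used for binary segmentation is the pixel-wise binary cross-entropy between the predicted probability map and the annotated mask. The function name below is illustrative:

```python
import numpy as np

def pixelwise_bce(prob_map, mask, eps=1e-7):
    """Mean binary cross-entropy between a predicted probability map
    (values in [0, 1]) and a binary ground-truth mask."""
    p = np.clip(np.asarray(prob_map, dtype=float), eps, 1.0 - eps)
    y = np.asarray(mask, dtype=float)
    return float(np.mean(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))

# A confident, correct prediction has a much lower loss than a wrong one.
mask = np.array([[0.0, 1.0], [1.0, 0.0]])
good = np.array([[0.1, 0.9], [0.9, 0.1]])
bad = 1.0 - good
print(pixelwise_bce(good, mask))  # -ln(0.9), about 0.105
print(pixelwise_bce(bad, mask))   # -ln(0.1), about 2.303
```

In a real training loop this scalar is what gets back-propagated; deep-learning frameworks provide ready-made, differentiable versions of this loss.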


Figure 2: U-Net convolutional neural network training process [3] for bio-medical image segmentation.



Tutorial


Please visit our tutorial on Github illustrating all the concepts discussed in this blog post:

https://github.com/imcohen/segment-brain-mri



References


[1] Peyré, G., Péchaud, M., Keriven, R., & Cohen, L. D. (2010). Geodesic methods in computer vision and graphics. Now publishers Inc.


[2] Buda, M., Saha, A., & Mazurowski, M. A. (2019). Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm. Computers in biology and medicine, 109, 218-225.

https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation


[3] Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.

