Int J Curr Pharm Res, Vol 10, Issue 5, 20-24

Original Article


DETECTION AND SEGMENTATION OF OPTIC DISC IN FUNDUS IMAGES

RAMESH C., UDAYAKUMAR E., YOGESHWARAN K.

Department of ECE, KIT-Kalaignarkarunanidhi Institute of Technology, Coimbatore, Tamilnadu, India
Email: prateek.jain246@gmail.com

Received: 10 Jun 2018, Revised and Accepted: 07 Aug 2018


ABSTRACT

Objective: Image processing techniques are now widely used in the medical field. They are applied here to automatically extract features such as the blood vessels, optic disc, macula and fovea from retinal images of the eye.

Methods: This paper presents a simple and fast algorithm based on mathematical morphology to locate the fovea in fundus retinal images. The images used for analysis are obtained from the DRIVE database. The method is further extended to detect diabetic retinopathy in the eye.

Results: Detection of the optic disc boundary is important for the diagnosis of glaucoma. The iterative curve evolution was stopped at the image boundaries, where the energy was minimum.

Conclusion: Changes in the shape and size of the optic disc can be used to detect glaucoma, and the cup-to-disc ratio can be used as a measure of glaucoma.

Keywords: Diabetic Retinopathy (DR), Glaucoma, Optic Disc (OD), Segmentation, Retinal image, Fovea


INTRODUCTION

Glaucoma is one of the common causes of blindness, with about 79 million people worldwide likely to be afflicted by the year 2020. It is characterized by the progressive degeneration of optic nerve fibres and leads to structural changes of the optic nerve head, also referred to as the optic disc, and the nerve fibre layer, together with functional failure of the visual field. Since glaucoma [1] is asymptomatic in the early stages and the associated vision loss cannot be restored, its early detection and subsequent treatment are essential to prevent visual damage. Early detection of glaucoma is also essential to minimize the risk of visual loss in diabetic patients. A standard procedure used for the detection of glaucoma and other eye diseases is manual examination of the optic disc [2] by an ophthalmologist. The proposed work implements automatic optic disc segmentation of fundus images of the eye using morphological operations. The exact boundary region of the optic disc is detected [3]. Depending on the shape of the boundary, one can determine whether the person is affected by glaucoma or not.

Efficient detection of the optic disc in colour retinal images [4] is a significant task in an automated retinal image analysis system. Its detection is a prerequisite for the segmentation of other normal and pathological features. For instance, the measurement of the varying optic disc to cup diameter ratio is used in the detection of the sight-threatening disease glaucoma. The position of the optic disc can also serve as a reference for measuring distances in retinal images, especially for locating the macula. In blood vessel tracking algorithms, the location of the optic disc is the starting point for vessel tracking. Finally, in the identification of diabetic maculopathy lesions, masking the optic disc region, a source of false positives, improves the performance of lesion detection.

Diabetic Retinopathy (DR) is a chronic disease which nowadays constitutes the primary cause of blindness among people of working age in the developed world [5]. The benefits that a system for automatically detecting early signs of this disease would provide have been widely studied and assessed positively. In this sense, the OD plays an important role in developing automated diagnosis expert systems for DR, as its segmentation is a key pre-processing component in many algorithms designed to identify other fundus features.

Most of the works on glaucoma detection from fundus images concentrate only on the Cup to Disc Ratio (CDR). The CDR is sometimes inconsistent for detecting glaucoma, since patients may have severe visual loss with a small CDR, as in fig. 1. The cup/disc ratio staging system does not account for disc size, and focal narrowing of the neuroretinal rim between the optic disc and optic cup is not adequately highlighted. A method has therefore been proposed to detect glaucoma accurately based on the CDR, the neuroretinal rim area (to quantify rim loss) and textural features, in order to identify pathological subjects correctly.

Fig. 1: Optic nerve drawings with identical cup/disc ratios but unequal rim widths

Overview of the state of the art

This paper detects and segments the optic disc in fundus images using morphological operations. Detection of the optic disc boundary is important for the diagnosis of glaucoma. Image segmentation is performed by starting with an initial curve and evolving its shape by minimizing an energy function represented by a level set function. Experiments performed on both RGB and grayscale images show that implicit active contours provide better results with grayscale images. The performance obtained by the proposed methodology on a large digital retinal database indicates that simple methods, based on basic image processing techniques, suffice for OD location and segmentation.

The available works related to OD processing in eye fundus colour images can be grouped into two distinct categories: location and segmentation methods. The former works focus on finding an OD pixel (generally representative of its center).

On the other hand, the latter works estimate the OD boundary. Within this category, a general distinction can be made between template-based methods (methods for obtaining OD boundary approximations) and methods based on deformable models or snakes for extracting the OD boundary as exactly as possible.

MATERIALS AND METHODS

Methods

The fundus images used in this work were captured with a Topcon TRC50 EX mydriatic fundus camera with a 50° field of view at nearby eye hospitals. The image size is 1900×1600 pixels in 24-bit true color. Doctors in the ophthalmic department of the hospital approved the images for research purposes.

Block diagram for a proposed method

These retinal images are acquired with a highly sensitive color fundus camera, with the illumination, resolution, field of view, magnification and dilation procedures kept constant. The basic block diagram and flowchart for detection and segmentation are given in fig. 2 and fig. 3.

Fig. 2: Block diagram of the proposed method. It consists of the three basic steps common to image-processing-based work: pre-processing, OD segmentation and feature extraction

Fig. 3: Flowchart

RESULTS AND DISCUSSION

Input image

The retinal fundus image contains a network of blood vessels that originate from the optic disk. The database from which the image is obtained determines the location of the optic disk. Initially, the blood vessels of the disk are found using Matlab morphological filters. Based on adaptive mathematical morphology, the origin of the optic disk is identified.

Fig. 4: Color fundus input images, a) Image contains blood vessels, b) Glaucoma affected image, c) Macular edema affected image, d) Normal image

The fovea is located at a distance of 2.5 times the diameter of the optic disk from its center [6]. Color fundus input images showing different eye diseases are given in fig. 4.

Pre-processing

The method uses the following key preprocessing steps: color normalization, contrast enhancement, noise removal and removal of blood vessels. This allows the accuracy of our method, and its differences from other approaches, to be assessed properly. We put our data through four preprocessing steps before commencing the detection of exudates. The retinal color of different patients is variable, being strongly correlated with skin pigmentation and iris color. Thus, the first step is to normalize the color of the retinal images across the data set. We selected a retinal image as a reference and then applied histogram specification to modify the values of each image in the database so that its frequency histogram matched the reference image distribution [6].
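
As a rough illustration of this normalization step, the following Python sketch applies histogram specification with scikit-image; the file names and the choice of reference image are hypothetical placeholders, not part of the original work.

```python
# Color normalization by histogram specification: a minimal sketch assuming
# scikit-image; "reference.png" and "patient.png" are hypothetical file names.
import numpy as np
from skimage import io
from skimage.exposure import match_histograms

reference = io.imread("reference.png")   # hand-picked reference retinal image
image = io.imread("patient.png")         # image to be normalized

# Remap each color channel of the input so that its histogram matches the
# corresponding channel of the reference (older scikit-image releases use
# multichannel=True instead of channel_axis=-1).
normalized = match_histograms(image, reference, channel_axis=-1)
io.imsave("patient_normalized.png", normalized.astype(np.uint8))
```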

The contrast of retinal images is often insufficient due to the intrinsic attributes of lesions and decreasing color saturation, especially in the periphery. Consequently, in the second preprocessing step, the contrast between the exudates and the retinal background is enhanced using a local contrast enhancement method to facilitate later segmentation. While contrast enhancement improves the contrast of exudates, it may also enhance the contrast of some non-exudate background pixels. Therefore, a median filtering operation is applied in the third preprocessing step. The final step is to choose an appropriate representation using a color space definition. We experimented with various color spaces such as the RGB, YIQ, HSI, HSL, Lab and Luv models. The reasons for the feature selection and their details are explained below.

Color normalization

One of the main obstacles for detection of retinal exudates is the wide variability in the color of the retinal image from different patients. These variations are strongly correlated to skin pigmentation and iris color. Thus, the color of exudates in some region of an image may appear dimmer than the background color of other regions. As a result, the exudates can wrongly be classified as the background. In fact, without some type of color normalization, the larger variation in the natural retinal pigmentation across the patient dataset can hinder discrimination of the relatively small variations between the different lesion types [7].

The result of color normalization is shown in fig. 5.

Fig. 5: (a) Result of histogram equalization, (b) result of histogram specification, (c) The reference image RGB histogram, (d) low-quality image, (e) result after color normalization of (d), (f) contrast-enhanced version of (e)

Contrast enhancement

The retinal images taken at standard examinations are sometimes poorly contrasted and contain artifacts. The retinal image contrast decreases as the distance of a pixel from the center of the image increases. Moreover, non-uniformity of illumination raises the intensity levels in some regions of an image, while other regions farther away from the optic disc may suffer from a reduction in brightness. Thus, exudates or similar lesions in such regions are not distinguishable from the background color near the disc. The retinal image quality has a great impact on the features of retinal lesions, especially exudates.

Consequently, preprocessing techniques are necessary to improve the contrast of these images. Since histogram specification does not provide an efficient scheme, we apply local contrast enhancement (Chang We, 1998), which transforms the values inside small windows of the image so that all values are distributed around the mean and span all possible intensities. The technique of local contrast enhancement is described below.

Given a pixel f(i, j) in the initial image and a small M×M running window w centred on it, the image is filtered to produce the new pixel value f'(i, j) given by Eq. 1:

f'(i, j) = 255 · [Ψw(f(i, j)) − Ψw(fmin)] / [Ψw(fmax) − Ψw(fmin)] .......... (1)

where the sigmoidal (exponential) function Ψw is defined as Eq. 2:

Ψw(f) = [1 + exp((⟨f⟩w − f) / σw)]⁻¹ .......... (2)

Here fmax and fmin are the maximum and minimum intensity values in the whole image, while ⟨f⟩w and σw indicate the local window mean and standard deviation, which are defined as Eq. 3:

⟨f⟩w = (1/M²) Σ(k, l)∈w f(k, l),  σw = √[(1/M²) Σ(k, l)∈w (f(k, l) − ⟨f⟩w)²] .......... (3)

where (k, l) represents the location of each pixel within window w. The window size M should be chosen large enough to contain a statistically representative distribution of the local variations of pixels; on the other hand, it must be small enough not to be influenced by the gradual variation of contrast between the retinal image center and the periphery. Here, the window size was empirically set to 69×69, although other values may also be appropriate. The local contrast enhancement depends on the mean and variance of the intensity values within the considered local region. The exponential function (Eq. 2) produces significant enhancement when the contrast is low (σw is small) and less enhancement when the contrast is already high (σw is large). Examples of color retinal images after contrast enhancement are shown in fig. 5(f).
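
The enhancement of Eqs. (1)-(3) can be sketched in Python as follows; the 69×69 window comes from the text, while the clipping of the exponent and the scaling of the output to 8 bits are implementation assumptions.

```python
# Local contrast enhancement following Eqs. (1)-(3); a sketch, not the
# authors' code. Assumes a single-channel 8-bit input image.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_enhance(f, M=69):
    """Enhance a grey-level image f using an M x M running window."""
    f = f.astype(np.float64)
    mean_w = uniform_filter(f, size=M)                           # <f>_w of Eq. (3)
    mean_sq = uniform_filter(f * f, size=M)
    sigma_w = np.sqrt(np.maximum(mean_sq - mean_w ** 2, 1e-12))  # sigma_w of Eq. (3)

    def psi(x):                                                  # Eq. (2)
        z = np.clip((mean_w - x) / sigma_w, -50.0, 50.0)         # avoid exp overflow
        return 1.0 / (1.0 + np.exp(z))

    f_min, f_max = f.min(), f.max()                              # global extrema
    out = 255.0 * (psi(f) - psi(f_min)) / (psi(f_max) - psi(f_min) + 1e-12)  # Eq. (1)
    return np.clip(out, 0, 255).astype(np.uint8)
```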

Removal of noise

RGB colour retinal images are pre-processed using an anisotropic diffusion filter in order to remove noise. The advantage of anisotropic diffusion is that no prior knowledge of the noise pattern or power spectrum is required, and it preserves contrast while removing the noise. The filter iteratively applies the diffusion equation, combined with information about the edges, in order to preserve them.

The equation for anisotropic diffusion is defined as

∂I/∂t = div(c(x, y, t) ∇I) = c(x, y, t) ΔI + ∇c · ∇I .......... (4)

where div is the divergence operator, ∇ is the gradient operator, Δ is the Laplacian operator and c(x, y, t) is the conduction coefficient function. Anisotropic diffusion filtering introduces a partial edge-detection step into the filtering process so as to encourage intra-region smoothing and preserve inter-region edges. Anisotropic diffusion is a scale-space, adaptive technique that iteratively smooths the image as the time t increases. The time t is considered the scale level, with the original image at level 0; as the scale increases, the images become more blurred and contain more general information.
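
A minimal Perona-Malik style implementation of the diffusion in Eq. (4) is sketched below; the exponential conduction function, the parameter values and the wrap-around border handling are illustrative assumptions rather than the paper's exact settings.

```python
# Perona-Malik anisotropic diffusion, a sketch of Eq. (4); kappa, gamma and
# the number of iterations are illustrative choices.
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
    I = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences towards the four neighbours (borders wrap around,
        # a simplification acceptable for a sketch)
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Conduction coefficients c(x, y, t): near 1 in smooth regions,
        # small across strong edges, so edges are preserved
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        # Discrete update of dI/dt = div(c * grad I)
        I += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I
```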

Removal of blood vessels

The blood vessels in the human retina are easily visualised via digital fundus photography and provide an excellent window on the health of a patient affected by diseases of the blood circulation such as diabetes.

Diabetic retinopathy is identifiable through lesions of the vessels, such as narrowing of the arteriole walls, beading of venules into sausage-like structures and new vessel growth as an attempt to reperfuse ischaemic regions. Automated quantification of these lesions would be beneficial to diabetes research and to clinical practice, particularly for eye-screening programmes for the detection of eye disease amongst diabetic persons. Fig. 6 shows the blood vessels extracted from the colour retinal image.

Fig. 6: Blood vessels in the retinal image

The blood vessels in the color retinal image of fig. 4(a) are eliminated, and fig. 6 shows the extracted vessels.
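
As a hedged sketch of vessel extraction and suppression (not necessarily the authors' exact morphological pipeline), grey-scale closing and black top-hat on the green channel can separate the dark vessel tree from the background; the structuring-element radius is an assumed value that should exceed the widest vessel.

```python
# Vessel extraction/suppression sketch using grey-scale morphology on the
# green channel; "fundus.png" and the disk radius are assumptions.
from skimage import io
from skimage.filters import threshold_otsu
from skimage.morphology import black_tophat, closing, disk

rgb = io.imread("fundus.png")
green = rgb[:, :, 1]                       # vessels contrast best in green

selem = disk(8)                            # larger than the widest vessel
vessel_free = closing(green, selem)        # closing fills in the dark vessels
vessels = black_tophat(green, selem)       # dark structures thinner than selem
vessel_mask = vessels > threshold_otsu(vessels)
```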

OD segmentation

Optic cup detection

The optic disc is detected using region-of-interest based segmentation, and the bounding rectangle enclosing the region of interest is set to 1.5 times the disc width parameter. Exact detection of the optic cup is needed to calculate the neuroretinal rim area present between the disc and the cup. Unlike most of the previous methods in the literature, the proposed method performs an initial optic cup region detection followed by erasure of the blood vessels to obtain higher accuracy. The optic cup and disc areas usually differ in colour, known as pallor. Observation of retinal images shows that the actual cup pallor differs between patients, and even between images of the same retina, due to changes in the lighting conditions, so prior knowledge of the colour intensity of the optic cup cannot be fixed. Fig. 7 shows the optic cup region of the eye.
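
A minimal sketch of this region-of-interest step is given below; the disc centre and disc width passed in are hypothetical placeholders for the outputs of the disc-localization stage.

```python
# Crop a square region of interest whose side is 1.5 times the estimated disc
# width, centred on the detected disc centre.
def crop_disc_roi(image, centre, disc_width, scale=1.5):
    cy, cx = int(round(centre[0])), int(round(centre[1]))
    half = int(round(scale * disc_width / 2))
    y0, y1 = max(cy - half, 0), min(cy + half, image.shape[0])
    x0, x1 = max(cx - half, 0), min(cx + half, image.shape[1])
    return image[y0:y1, x0:x1]

# e.g. roi = crop_disc_roi(rgb, centre=(420, 610), disc_width=100)
```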

Fig. 7: Optic cup

Localization of optic disc

The localization of the optic disc is important for two purposes. First, it serves as the baseline for finding the exact boundary of the disc. Secondly, the optic disc center and diameter are used to locate the macula in the image. In a color retinal image, the optic disc belongs to the brighter parts, along with some lesions. The central portion of the disc is the brightest region, called the optic cup, where blood vessels and nerve fibres are absent. Applying a threshold separates part of the optic disc and some other unconnected bright regions from the background. In this work, an optimal thresholding based on Otsu's method is applied to separate the brighter regions from the dark background.

Selection of the initial threshold

An optimal thresholding method, based on approximating the histogram of the image by a weighted sum of two or more normal probability densities, is used for the initial thresholding of the retinal image. Histogram information derived from the source image is used to partition the brightest regions from the background. The disc appears most contrasted in the green channel compared with the red and blue channels of the RGB image; therefore, only the green channel image is used for calculating the optimal threshold. Fig. 8 shows the input green channel image and its histogram [8].
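
A short sketch of this initial thresholding on the green channel, using Otsu's method from scikit-image (the file name is illustrative):

```python
# Initial optimal thresholding of the green channel with Otsu's method;
# "fundus.png" is an illustrative file name.
from skimage import io
from skimage.filters import threshold_otsu

rgb = io.imread("fundus.png")
green = rgb[:, :, 1]                      # disc is most contrasted in green
t = threshold_otsu(green)                 # optimal threshold from the histogram
bright_regions = green > t                # candidate optic disc / cup pixels
```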

Fig. 8: Optimal thresholding of retinal image, (a) Input color retinal image, (b) Thresholded image with a number of connected regions

It can be seen that the pixels corresponding to the optic disc and the optic cup belong to the higher intensity bars in the histogram. The diameter of the optic disc is in the range of 1.8 to 2 mm; in a standard 768 × 576 retinal image with a 20 micron/pixel resolution this corresponds to roughly 90-100 pixels. This prior information is used to calculate the threshold.

Estimation of the optic disc centre

Thresholding the image results in a number of connected components, such as part of the optic disc, some noise and other bright features. These connected components are candidate regions for the optic disc. The entire image is scanned to count the connected components. Each connected component in the thresholded image is labelled, and the total number of pixels and the mean spatial coordinates of each component are calculated. The component with the maximum number of pixels is assumed to contain the optic cup part of the disc and is taken as the primary region of interest. The maximum diameter of the optic disc can be 2 mm; therefore, any component whose mean spatial coordinates lie within a distance of 50 pixels from those of the largest component is merged with it, and a new mean spatial coordinate is obtained.
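
The labelling and merging rule described above can be sketched as follows, assuming bright_regions is the binary image produced by the thresholding step; the 50-pixel merging distance comes from the text.

```python
# Label the thresholded image, keep the largest component and merge any
# component whose centroid lies within 50 pixels of its centroid.
import numpy as np
from skimage.measure import label, regionprops

def estimate_disc_centre(bright_regions, merge_dist=50):
    labels = label(bright_regions)
    props = regionprops(labels)
    largest = max(props, key=lambda p: p.area)

    merged = np.zeros_like(bright_regions, dtype=bool)
    for p in props:
        d = np.hypot(p.centroid[0] - largest.centroid[0],
                     p.centroid[1] - largest.centroid[1])
        if p.label == largest.label or d < merge_dist:
            merged |= labels == p.label

    ys, xs = np.nonzero(merged)
    return (ys.mean(), xs.mean()), merged   # new mean spatial coordinate, mask
```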

Fovea segmentation

The fovea is the center-most part of the macula. This tiny area is responsible for our central, sharpest vision. A healthy fovea is key for reading, watching television, driving and other activities that require the ability to see detail. Unlike the peripheral retina, it has no blood vessels. Instead, it has a very high concentration of cones (the photoreceptors responsible for colour vision), allowing us to appreciate colour. Approximately 50% of the nerve fibres in the optic nerve carry information from the fovea, while the other 50% carry information from the rest of the retina. In the human eye, the term fovea denotes the pit in the retina that allows maximum acuity of vision. The human fovea has a diameter of about 1 mm with a high concentration of cone photoreceptors. The centre of the fovea is the foveola, about 0.2 mm in diameter, where only cone photoreceptors are present and there are virtually no rods. Starting at the outskirts of the fovea, however, rods gradually appear, and the absolute density of cone receptors progressively decreases. The high spatial density of cones accounts for the high visual acuity at the fovea. This is enhanced by the local absence of retinal blood vessels from the fovea, which, if present, would interfere with the passage of light striking the foveal cones. If an object is large and thus covers a large visual angle, the eyes must constantly shift their gaze to successively bring different portions of the image onto the fovea (as in reading). Surrounding the foveal pit is the foveal rim, where the neurons displaced from the pit are located [9].

Morphological operation

The basic operations used here in morphological image processing are dilation, erosion and the hit-and-miss transform.

In dilation, the structuring element is applied to every pixel of the binary image. Each time the origin of the structuring element coincides with a foreground binary pixel, the entire structuring element is superimposed on the image and the corresponding pixels of the binary image are updated. The results of this logical addition (OR) are written into the output binary image, which is initially set to zero.

In erosion, the structuring element likewise passes over all pixels of the image. Only if, at a given position, every pixel of the structuring element coincides with a foreground pixel of the binary image is the pixel under the origin of the structuring element set in the output image.

The hit-and-miss operation is performed in much the same way as other morphological operators, by translating the origin of the structuring element to all points in the image and then comparing the structuring element with the underlying image pixels. If the foreground and background pixels in the structuring element exactly match foreground and background pixels in the image, then the pixel underneath the origin of the structuring element is set to be hit. If it doesn't match, then that pixel is set to be miss.
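
The three operations can be illustrated with scikit-image and SciPy as below; the toy input image and the structuring elements are arbitrary examples.

```python
# Dilation, erosion and hit-or-miss on a toy binary image; the structuring
# elements are arbitrary examples.
import numpy as np
from scipy.ndimage import binary_hit_or_miss
from skimage.morphology import binary_dilation, binary_erosion, disk

mask = np.zeros((64, 64), dtype=bool)     # toy stand-in for a thresholded image
mask[20:40, 20:40] = True

selem = disk(3)
dilated = binary_dilation(mask, selem)    # grows foreground regions
eroded = binary_erosion(mask, selem)      # shrinks foreground regions

# Hit-or-miss: a pixel is a "hit" only where the foreground pattern
# (structure1) matches exactly; everywhere else it is a "miss"
structure1 = np.array([[0, 1, 0],
                       [1, 1, 1],
                       [0, 1, 0]])
hits = binary_hit_or_miss(mask, structure1=structure1)
```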

Connected component analysis

Because exudates and the optic disk have similar attributes, the morphologically operated image contains both exudates and the optic disk. The optic disk is removed by connected component analysis. Connected component analysis is an image segmentation technique which groups image pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar intensity values and are in some way connected with each other. Since the optic disk occupies the maximum area in the image, the region with the maximum area is eliminated using the connected component properties.
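
A small sketch of this optic disk removal step, assuming the candidate regions are available as a binary image:

```python
# Remove the largest connected component (assumed to be the optic disk) from
# a binary image that contains both exudate candidates and the disk.
from skimage.measure import label, regionprops

def remove_largest_component(candidates):
    lbl = label(candidates)
    props = regionprops(lbl)
    if not props:
        return candidates
    largest = max(props, key=lambda p: p.area).label
    return candidates & (lbl != largest)
```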

Feature extraction

The processed image, after removal of the maximum-area region (the optic disk), contains only exudates. This image is used for feature extraction. Statistical features such as exudate area, entropy, correlation, energy, contrast, homogeneity, standard deviation, mean, skewness and kurtosis are extracted from the image [6]. From these features, the most effective ones are used for classification. To classify the localized segmented image into exudates and non-exudates, a number of features based on color and texture are extracted using the Gray Level Co-occurrence Matrix (GLCM). The features of the retina or fovea extracted from the retinal image are used to detect diabetic disease.
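
A sketch of GLCM-based texture feature extraction with scikit-image is shown below; the input file, distances and angles are illustrative assumptions (older scikit-image releases spell these functions greycomatrix/greycoprops).

```python
# GLCM texture features plus simple statistics; "exudate_roi.png", the
# distances and angles are illustrative choices.
import numpy as np
from scipy.stats import kurtosis, skew
from skimage import io
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import shannon_entropy

patch = io.imread("exudate_roi.png", as_gray=True)
patch = (patch * 255).astype(np.uint8) if patch.max() <= 1.0 else patch.astype(np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {
    "contrast": graycoprops(glcm, "contrast").mean(),
    "correlation": graycoprops(glcm, "correlation").mean(),
    "energy": graycoprops(glcm, "energy").mean(),
    "homogeneity": graycoprops(glcm, "homogeneity").mean(),
    "entropy": shannon_entropy(patch),
    "mean": patch.mean(),
    "std": patch.std(),
    "skewness": skew(patch.ravel()),
    "kurtosis": kurtosis(patch.ravel()),
}
```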

The results reported in this paper show that the proposed methodology offers a reliable and robust solution for OD segmentation. The performance obtained on a large digital retinal database indicates that simple methods, based on basic image processing techniques, are sufficient for OD location and segmentation.

In this paper, efficient methods for automatic optic disc localization, boundary detection and macula localization in colour retinal images are described. Retinal images of patients at different stages of retinopathy were considered to test the robustness of the optimal iterative threshold method, followed by connected component analysis, in disc localization. Localization of the disc is important because it has to be masked during exudate detection, and its position is used to locate the macula. Based on the results obtained in optic disc boundary detection, it can be stated that geometric implicit active contour models provide better segmentation than parametric models for images with weak boundaries. Shape and size changes of the optic disc boundary can be further studied for the detection of glaucoma. Detection of the macula and its region plays an important role in classifying the severity of diabetic maculopathy. Detection of all these features leads towards the development of a fully automated retinal image analysis system to aid clinicians in detecting and diagnosing retinal diseases.

Detection of the optic disc boundary is important for the diagnosis of glaucoma [9]. The difficulty in finding the optic disc boundary is due to its highly variable appearance in retinal images. A geometric active contour model was explored to segment the optic disc boundary, since classical segmentation algorithms failed to provide a good result. Image segmentation was performed by starting with an initial curve and evolving its shape by minimizing an energy function represented by a level set function. The iterative curve evolution was stopped at the image boundaries, where the energy was minimum. The experiment was performed on both RGB and grayscale images, and implicit active contours provided better results with grayscale images.
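
As a stand-in for the level-set evolution described here (not the authors' exact formulation), the morphological Chan-Vese approximation available in scikit-image can evolve an implicit contour on a grayscale crop of the disc region:

```python
# A stand-in for the implicit active contour: scikit-image's morphological
# Chan-Vese approximation of a region-based level-set model; parameters and
# file name are illustrative.
from skimage import color, io
from skimage.segmentation import morphological_chan_vese

rgb = io.imread("disc_roi.png")            # hypothetical cropped disc region
gray = color.rgb2gray(rgb)

# The contour evolves until the region energy is (approximately) minimised
disc_mask = morphological_chan_vese(gray, 200,
                                    init_level_set="checkerboard",
                                    smoothing=3)
```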

CONCLUSION

This paper shows that changes in the shape and size of the optic disc can be used to detect and diagnose the sight-threatening disease glaucoma. The method has to be further improved to detect the optic cup part of the disc so that changes in the disc-to-cup ratio can be used as a measure of glaucoma.

AUTHORS CONTRIBUTIONS

All the authors have contributed equally

CONFLICT OF INTERESTS

Declared none

REFERENCES

  1. K Duraiswamy, S Kavitha, S Karthikeyan. Neuroretinal rim quantification in fundus images to detect glaucoma. Int J Computer Sci Network Security 2010;10:134-9.

  2. Arturo Aquino, Manuel Emilio Gegundez-Arias, Diego Marin. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques; 2010.

  3. Gopal Datt Joshi. Optic disk and cup boundary detection using regional information. Proceedings of the 2010 IEEE International Conference on Biomedical Imaging: From Nano to Macro; 2010.

  4. MS Miri, A Mahloojifar. Retinal image analysis using curvelet transform and multi-structure elements morphology by reconstruction. IEEE Trans Biomed Eng 2011;58:1183-92.

  5. E Udayakumar. Automatic detection of diabetic retinopathy through optic disc using morphological methods. Asian J Pharm Clin Res Innovare Sci 2017;10:28-31.

  6. S Santhi. Design and development of the smart glucose monitoring system. Int J Pharma Biosci 2017;8:631-8.

  7. Atsushi Noudo, Akira Sawada, Chisako Muramatsu, Hiroshi Fujita, Takeshi Hara, Yuji Hatanaka. Vertical cup-to-disc ratio measurement for diagnosis of glaucoma on fundus images. Proc SPIE 2010;7624:76243C-1.

  8. Kavitha G, Pradeep Kumar AV, Prashanth C. Segmentation and grading of diabetic retinopathic exudates using error boost feature selection method. World Congress Information Communication Technologies; 2011. p. 518-23.

  9. E Udayakumar. Certain investigation on the human body using various algorithms. Australian J Basic Appl Sci 2014;8:559-64.

  10. X Jia, D Wong, F Yin, T Wong. Level-set based automatic cup-to-disc ratio determination using retinal fundus images in argali. Proc EMBC; 2008. p. 2266–9.

  11. Jaspreet Kaur, HP Sinha. Automated localization of optic disc and macula from fundus images for automatic detection; 2012.

  12. E Udayakumar. An identification of efficient vessel feature for endoscopic analysis. Res J Pharm Technol 2017;10:2633-6.