Retinal microsurgery is one of the most demanding types of surgery. The difficulty stems from the microscopic dimensions of tissue planes and blood vessels in the eye, the delicate nature of the neurosensory retina, and the poor recovery of retinal function after injury. Many micron-scale maneuvers are physically impossible for many retinal surgeons because of an inability to visualize the tissue planes, hand tremor, or insufficient dexterity. Performing these maneuvers safely requires an operating microscope to view the retina. A central issue for the surgeon is the trade-off between adequate illumination of retinal structures and the risk of iatrogenic phototoxicity, whether from the operating microscope or from endoilluminators, fiber-optic light sources placed into the vitreous cavity to illuminate the retina during delicate maneuvers.
For these reasons, and given the prevalence of eye diseases for which such surgery is the only form of treatment (diabetic retinopathy, glaucoma, age-related macular degeneration, retinal detachment, etc.), we are interested in providing a roadmap for how a vision system for computer-assisted retinal surgery may be established. That is, we would ultimately like a system that can take images from a microscope, infer which part of the retina is being observed, track surgical tools, and guide them to locations predefined by a clinician from pre-operative data.
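The envisioned pipeline (acquire a frame, localize the retinal region, track the tool, compute guidance) can be sketched as a minimal skeleton. Everything here is illustrative: the function names, the brightest-quadrant localization, and the darkest-pixel tool heuristic are placeholder assumptions for exposition, not components of an actual system.

```python
import numpy as np

def acquire_frame(rng):
    """Stand-in for grabbing a microscope frame (random image here)."""
    return rng.random((64, 64))

def localize_region(frame):
    """Toy localization: report the brightest quadrant of the frame."""
    h, w = frame.shape
    quads = {
        "sup-nasal": frame[:h // 2, :w // 2].mean(),
        "sup-temporal": frame[:h // 2, w // 2:].mean(),
        "inf-nasal": frame[h // 2:, :w // 2].mean(),
        "inf-temporal": frame[h // 2:, w // 2:].mean(),
    }
    return max(quads, key=quads.get)

def track_tool(frame):
    """Toy tracker: jump to the darkest pixel (instrument shafts
    are often darker than the retinal background)."""
    return np.unravel_index(np.argmin(frame), frame.shape)

def guidance_vector(tool_pos, target_pos):
    """Offset the assistance system would need to correct to bring
    the tool to a clinician-defined target location."""
    return (target_pos[0] - tool_pos[0], target_pos[1] - tool_pos[1])

rng = np.random.default_rng(0)
frame = acquire_frame(rng)
region = localize_region(frame)
tool = track_tool(frame)
correction = guidance_vector(tool, (32, 32))
```

A real system would replace each stub with a learned or model-based component; the point of the skeleton is only to make the data flow between the stages explicit.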
In this context, visual tracking of instruments is a key component of robotic assistance. The difficulty of the task, and the main reason most existing strategies fail on in-vivo image sequences, is that the complex and severe changes in instrument appearance are challenging to model. Below are some results of our strategies.
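To make concrete why appearance changes are the core difficulty, consider a classical baseline: normalized cross-correlation (NCC) template matching, which assumes the instrument's appearance stays close to a fixed template. The sketch below (a naive, brute-force search; `ncc_track` is our own illustrative function, not the method evaluated here) works only as long as that assumption holds, and degrades under the illumination and pose changes typical of in-vivo sequences.

```python
import numpy as np

def ncc_track(frame, template):
    """Locate `template` in `frame` by exhaustive normalized
    cross-correlation. Returns the (row, col) of the best-matching
    top-left corner and the NCC score in [-1, 1]."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_pos = -2.0, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            if denom == 0:
                continue  # flat patch: correlation undefined, skip
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Because NCC is invariant only to affine intensity changes of the whole patch, any non-rigid deformation, rotation, or partial occlusion of the instrument breaks the template assumption, which motivates the appearance-adaptive strategies discussed here.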