OCTRainbow

Transforming greyscale OCT retinal scans into true-colour images by learning the relationship between OCT layers and colour fundus photographs.

What is this project?

Optical Coherence Tomography (OCT) produces high-resolution cross-sectional images of the retina, but those images are inherently greyscale. Colour fundus photographs, by contrast, capture the true colour of the retinal surface but lack depth information.

OCTRainbow combines both: it registers each OCT B-scan slice to its corresponding location on a colour fundus photograph, uses machine learning to learn which retinal layers produce which colours, and then renders the OCT volume in true colour.

The result is a colourised 3-D OCT cube that retains the full depth detail of OCT while showing realistic colour, potentially making it easier to identify pathology and communicate findings.

Workflow — How to use OCTRainbow

1. Upload DICOM files

Go to the Upload page. Drag and drop (or browse for) your OCT and colour fundus DICOM files. Each file is (see the code sketch after this list):

  • Assigned a unique GUID filename
  • Checked for duplicates
  • Analysed for metadata (device, scan type, laterality, date)
  • De-identified (patient data hashed)
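
A minimal sketch of what this ingestion step could look like using pydicom; the function name, metadata fields, and storage layout are illustrative assumptions rather than OCTRainbow's actual code.

```python
import hashlib
import uuid
from pathlib import Path

import pydicom  # assumed dependency for reading DICOM headers


def ingest_dicom(path: Path, store_dir: Path) -> dict:
    """Illustrative ingestion: duplicate check, metadata read, GUID rename."""
    raw = path.read_bytes()
    content_hash = hashlib.sha256(raw).hexdigest()  # compared against existing uploads to detect duplicates

    # Read only the header tags needed for classification.
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    meta = {
        "device": getattr(ds, "ManufacturerModelName", None),
        "modality": getattr(ds, "Modality", None),  # e.g. OPT (OCT) vs OP (fundus photo)
        "laterality": getattr(ds, "Laterality", None) or getattr(ds, "ImageLaterality", None),
        "study_date": getattr(ds, "StudyDate", None),
    }

    # Store the file under a unique GUID filename.
    # De-identification (hashing patient data) is sketched separately further down.
    guid_name = f"{uuid.uuid4()}.dcm"
    (store_dir / guid_name).write_bytes(raw)

    return {"guid": guid_name, "sha256": content_hash, **meta}
```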

2. Build training dataset

Using the offline tools, pair up matching OCT and fundus photographs from the same patient, eye, and session (a code sketch follows the list below).

  • Match files by patient ID + laterality + date
  • Create named datasets and assign pairs to train/validation/test splits
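
As a rough illustration, pairing can be done by grouping on a composite key and then splitting the pairs; the record fields and split ratios below are assumptions for the sketch, not the tool's real schema.

```python
import random
from collections import defaultdict


def build_pairs(oct_records, fundus_records):
    """Pair OCT and fundus records sharing patient ID, laterality, and date.

    Each record is assumed to be a dict with 'patient_hash', 'laterality',
    'study_date', and 'guid' keys (illustrative field names).
    """
    key = lambda r: (r["patient_hash"], r["laterality"], r["study_date"])
    fundus_by_key = defaultdict(list)
    for f in fundus_records:
        fundus_by_key[key(f)].append(f)
    return [(o, f) for o in oct_records for f in fundus_by_key[key(o)]]


def split_pairs(pairs, train=0.7, val=0.15, seed=42):
    """Assign pairs to train/validation/test splits (ratios here are assumptions)."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return {
        "train": shuffled[:n_train],
        "val": shuffled[n_train:n_train + n_val],
        "test": shuffled[n_train + n_val:],
    }
```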

3. Train & view results

Train the ML models offline in two stages (sketched in code after the list):

  1. Layer Segmentation — a U-Net identifies the 11 standard retinal layers in each B-scan.
  2. Colour Prediction — a second network maps greyscale B-scan + layer mask + spatial position to true RGB colour.
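
A hedged sketch of how these two stages could be wired up with the segmentation_models_pytorch library. The U-Net/ResNet-34 segmenter, the 11 layer classes, and the conditional inputs come from this document; the background class, encoder choice for the colour model, and channel counts are assumptions.

```python
import segmentation_models_pytorch as smp  # assumed dependency
import torch

# Stage 1: layer segmentation. A U-Net with a ResNet-34 encoder predicting
# a per-pixel layer class for each B-scan.
seg_model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights=None,   # weights omitted so the sketch runs offline
    in_channels=1,          # greyscale B-scan
    classes=12,             # 11 layers + background (background class is an assumption)
)

# Stage 2: colour prediction. A second U-Net conditioned on the greyscale
# B-scan, the layer mask, and a 2-channel spatial position encoding.
colour_model = smp.Unet(
    encoder_name="resnet34",  # encoder choice here is an assumption
    encoder_weights=None,
    in_channels=1 + 12 + 2,   # greyscale + layer mask + (x, y) position
    classes=3,                # RGB output
)

# Shapes only, to show how the two stages chain together.
bscan = torch.randn(1, 1, 512, 512)                    # greyscale B-scan
layer_mask = torch.softmax(seg_model(bscan), dim=1)    # (1, 12, 512, 512)
position = torch.zeros(1, 2, 512, 512)                 # normalised fundus-space coordinates
rgb = colour_model(torch.cat([bscan, layer_mask, position], dim=1))  # (1, 3, 512, 512)
```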

Use the offline Colour Viewer to inspect the colourised output.

Pages at a glance

Upload

Drag-and-drop DICOM files. Each upload is de-identified, classified, and stored.

Background & key concepts

Retinal layers

The segmentation model identifies 11 layers, ordered from vitreous to choroid (an illustrative label map follows the list):

  1. ILM
  2. RNFL
  3. GCL
  4. IPL
  5. INL
  6. OPL
  7. ONL
  8. ELM
  9. IS/OS junction
  10. RPE
  11. Bruch's membrane
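
For reference, a segmentation label map following this ordering might look like the snippet below; the specific index assignments, and whether index 0 is reserved for background, are assumptions of this sketch.

```python
# Illustrative class indices for the segmented layers, ordered vitreous to choroid.
RETINAL_LAYERS = {
    0: "background",  # background class is an assumption
    1: "ILM",
    2: "RNFL",
    3: "GCL",
    4: "IPL",
    5: "INL",
    6: "OPL",
    7: "ONL",
    8: "ELM",
    9: "IS/OS junction",
    10: "RPE",
    11: "Bruch's membrane",
}
```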

How colourisation works

The pipeline has three stages (a code sketch of the registration step follows the list):

  1. Registration — the SLO (en-face) image extracted from the OCT is matched to the fundus photo using feature-based alignment (SIFT/RANSAC), mapping each B-scan to a precise fundus location.
  2. Segmentation — a U-Net with a ResNet-34 encoder segments each B-scan into retinal layers.
  3. Colour prediction — a conditional U-Net takes the greyscale scan, layer mask, and position encoding as input and outputs an RGB B-scan, supervised by registered fundus colours.
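
A minimal sketch of the registration stage as described, using OpenCV SIFT features and a RANSAC homography; the function name, ratio-test threshold, and reprojection tolerance are assumptions.

```python
import cv2
import numpy as np


def register_slo_to_fundus(slo: np.ndarray, fundus: np.ndarray) -> np.ndarray:
    """Estimate a homography mapping SLO (en-face OCT) pixels to fundus pixels.

    Both inputs are assumed to be 8-bit greyscale images.
    """
    sift = cv2.SIFT_create()
    kp_slo, des_slo = sift.detectAndCompute(slo, None)
    kp_fun, des_fun = sift.detectAndCompute(fundus, None)

    # Ratio-test matching of descriptors (a Lowe ratio of 0.75 is an assumption).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_slo, des_fun, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_slo[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_fun[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches; H maps SLO coordinates into fundus space,
    # so each B-scan's row on the SLO can be projected onto the fundus photo.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```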

De-identification

All uploaded DICOM files are automatically de-identified. Patient names and IDs are replaced with HMAC-SHA256 hashes, dates are shifted, and identifying tags are stripped. The mapping is stored locally and never leaves the server.
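
As an illustration of this scheme, identifiers could be hashed with Python's hmac module; the secret-key handling, date-shift policy, and list of stripped tags below are assumptions rather than OCTRainbow's exact implementation.

```python
import hashlib
import hmac
from datetime import datetime, timedelta

import pydicom


def deidentify(ds: pydicom.Dataset, secret_key: bytes, date_shift_days: int) -> pydicom.Dataset:
    """Replace direct identifiers with HMAC-SHA256 hashes, shift dates, strip tags."""
    def h(value: str) -> str:
        return hmac.new(secret_key, value.encode(), hashlib.sha256).hexdigest()

    if "PatientName" in ds:
        ds.PatientName = h(str(ds.PatientName))
    if "PatientID" in ds:
        ds.PatientID = h(str(ds.PatientID))

    # Shift the study date by a fixed offset (the offset policy is an assumption).
    if getattr(ds, "StudyDate", ""):
        shifted = datetime.strptime(ds.StudyDate, "%Y%m%d") + timedelta(days=date_shift_days)
        ds.StudyDate = shifted.strftime("%Y%m%d")

    # Strip other directly identifying tags if present (list is illustrative, not exhaustive).
    for keyword in ("PatientBirthDate", "PatientAddress", "PatientTelephoneNumbers"):
        if keyword in ds:
            delattr(ds, keyword)
    return ds
```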

Supported devices

OCTRainbow currently targets Zeiss Cirrus OCT DICOM files, including the proprietary CZM (scrambled JPEG2000) pixel encoding. Support for other manufacturers (Heidelberg, Topcon, Optovue) may be added in future.
