NGLui

Breaking Changes in 4.0.0

NGLui has undergone a major upgrade to version 4.0.0, which breaks many features of version 3.x. In exchange, the API is easier to use and more consistent, and new features focus on the modern implementation of Neuroglancer and the bleeding-edge Spelunker deployment used in many CAVE projects. See the Changelog for more details.

NGLui is a Python library that helps you interact with Neuroglancer, a web-based viewer for large-scale 3D data visualization. Neuroglancer is designed to visualize large 3D datasets, such as those found in connectomics, and NGLui is designed to make it easier to generate and parse Neuroglancer states. It is particularly useful within the CAVE analysis ecosystem, which provides tools for analyzing, proofreading, and visualizing large-scale connectomics data.

Installation

To get the most out of NGLui (interacting with source info, uploading skeletons, and more), we suggest installing the full version of NGLui, which includes the cloud-volume dependency:

pip install nglui[full]

You can also install a more minimal version of NGLui without the cloud-volume dependency:

pip install nglui

However, note that cloud-volume is required for some features such as uploading skeletons and getting information about sources during state generation.

Quick Usage

Building a Neuroglancer state directly

Here, let's use the Hemibrain dataset information to build a Neuroglancer state.

from nglui import statebuilder

viewer_state = (
    statebuilder.ViewerState(dimensions=[8,8,8])
    .add_image_layer(
        source='precomputed://gs://neuroglancer-janelia-flyem-hemibrain/emdata/clahe_yz/jpeg',
        name='emdata'
    )
    .add_segmentation_layer(
        source='precomputed://gs://neuroglancer-janelia-flyem-hemibrain/v1.2/segmentation',
        name='seg',
        segments=[5813034571],
    )
    .add_annotation_layer(
        source='precomputed://gs://neuroglancer-janelia-flyem-hemibrain/v1.2/synapses',
        linked_segmentation={'pre_synaptic_cell': 'seg'},
        filter_by_segmentation=True,
        color='tomato',
    )
)
viewer_state.to_link(target_url='https://hemibrain-dot-neuroglancer-demo.appspot.com')

This will return a Neuroglancer link to the configured state.
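Methods like to_link take care of serialization for you, but it can help to know that a Neuroglancer link is just the deployment URL followed by "#!" and the URL-encoded JSON of the viewer state. A minimal sketch of that encoding using only the standard library (the state dict below is a simplified, hand-written stand-in, not NGLui output):

```python
import json
from urllib.parse import quote

# A simplified, hand-written viewer state: Neuroglancer states are plain JSON
# objects with a "layers" list, each entry naming a layer type and source.
state = {
    "layers": [
        {
            "type": "image",
            "name": "emdata",
            "source": "precomputed://gs://neuroglancer-janelia-flyem-hemibrain/emdata/clahe_yz/jpeg",
        }
    ],
}

# A shareable link is the viewer URL plus "#!" plus the URL-quoted JSON state.
target_url = "https://hemibrain-dot-neuroglancer-demo.appspot.com"
link = target_url + "/#!" + quote(json.dumps(state))
print(link[:80])
```

Pasting such a link into a browser loads the viewer with that state already applied, which is what makes generated links easy to share.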

Building a state from CAVE data

Note

Using CAVEclient with the MICrONs dataset is required for the following examples. See the MICrONs documentation for how to set up the CAVEclient and access the dataset.

Here's a quick example of how to use NGLui to generate a simple Neuroglancer state from the MICrONs cortical dataset.

import caveclient
from nglui import statebuilder

client = caveclient.CAVEclient('minnie65_public')

# Get a root id of a specific neuron
root_id = client.materialize.query_table(
    'nucleus_detection_v0',
    filter_equal_dict={'id': 255258}
)['pt_root_id']

statebuilder.helpers.make_neuron_neuroglancer_link(
    client,
    root_id,
    show_inputs=True,
    show_outputs=True,
)

This code will generate a link showing the neuron and its synapses.

Additional features

NGLui also has additional features such as:

  • Parser: Parse neuroglancer states to extract information about layers and annotations.
  • SegmentProperties: Easily build segment property lists from data to make segmentation views more discoverable.
  • SkeletonManager: Upload skeletons to cloud buckets and push quickly into neuroglancer (requires cloud-volume, see Installation).
  • Shaders: Support for better default shaders for neuroglancer layers.
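As an illustration of the kind of data SegmentProperties targets: Neuroglancer's precomputed segment-properties format is an info JSON that attaches searchable labels to segment IDs. The nglui API is richer than this, but the underlying structure looks roughly like the following sketch (the IDs and labels are invented for illustration):

```python
# Sketch of the precomputed "neuroglancer_segment_properties" info JSON that
# segment-property tooling generates so segment IDs appear with searchable
# labels in the viewer. IDs and labels below are made up.
labels = {5813034571: "example neuron", 5813034572: "another neuron"}

info = {
    "@type": "neuroglancer_segment_properties",
    "inline": {
        # Segment IDs are serialized as strings in this format.
        "ids": [str(seg_id) for seg_id in labels],
        "properties": [
            {
                "id": "label",
                "type": "label",
                # One value per ID, in the same order as "ids".
                "values": [label for label in labels.values()],
            }
        ],
    },
}
print(info["inline"]["ids"])
```

Building this dict from a dataframe of segment data is essentially what makes segmentation views more discoverable: every labeled segment becomes searchable by name in the viewer's segment list.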

Development

If you want to clone the repository and develop on NGLui, note that it uses uv for development and packaging, Material for MkDocs for documentation, and pre-commit with ruff for code quality checks. Poe the Poet is used to simplify repetitive tasks, and you can run poe help to see the available tasks.

Migration from older versions

If you are migrating from nglui v3.x to v4.0.0+, you will need to update your code substantially.

First and foremost, nglui now only works with contemporary versions of neuroglancer, not the older Seung-lab version. If you still need to support the older deployment, do not upgrade.

Please read the new usage documentation! The main change is that it is now recommended to create states directly where possible, and there are now many more convenience functions. Instead of making a collection of layer configs, you now make a ViewerState object and directly add layers and their information with functions like add_image_layer, add_segmentation_layer, and add_annotation_layer.

Instead of always mapping annotation rules and data separately, you can now directly add annotation data through functions like add_points and then export with functions like to_url. You can still use the old pattern of rendering a state and mapping data with DataMap objects. A new "pipeline" pattern makes it more efficient to build complex states in fewer lines of code.