Create annotations for instance segmentation using Segment Anything models


napari plugin for instance and semantic segmentation annotation using Segment Anything Model (SAM)

This is a plugin for napari, a multi-dimensional image viewer for Python, that supports instance and semantic segmentation annotation. It provides an easy-to-use interface for annotating images, with the option to export annotations in COCO format.

This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.


Installation

To use this plugin, you'll need to install the napari multi-dimensional image viewer and the Segment Anything Model (SAM) library.

napari Installation

You can install napari using pip:

pip install "napari[all]"

Alternatively, you can install napari and all of its dependencies with conda:

conda install -c conda-forge napari

For more detailed instructions, please refer to the napari installation guide.

SAM Installation

You can install SAM from the official GitHub repository using pip:

pip install git+

Or you can install from source by cloning the repository:

git clone
cd segment-anything
pip install -e .

For more detailed instructions, please refer to the SAM installation guide.
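Once SAM is installed, loading a checkpoint for interactive prediction follows the pattern below. This is a minimal sketch, not part of the plugin itself; the checkpoint path is a placeholder, and the import is deferred so the snippet can be read without SAM installed.

```python
def load_sam_predictor(checkpoint_path, model_type="vit_b"):
    """Load a SAM checkpoint and wrap it in a SamPredictor.

    model_type must match the checkpoint file: "vit_b", "vit_l",
    "vit_h", or "default" (an alias for the vit_h model).
    """
    # Deferred import so this sketch can be inspected without SAM installed.
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry[model_type](checkpoint=checkpoint_path)
    return SamPredictor(sam)


# Example (requires a downloaded checkpoint file; path is hypothetical):
# predictor = load_sam_predictor("sam_vit_b_01ec64.pth", model_type="vit_b")
```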

napari-SAM4IS Installation

You can install napari-SAM4IS via pip:

pip install napari-SAM4IS

To install the latest development version:

pip install git+



Usage

Preparation

  1. Open an image in napari and launch the plugin. (Opening an image after launching the plugin is also possible.)
  2. Upon launching the plugin, three layers will be automatically created: SAM-Box, SAM-Predict, and Accepted. The usage of these layers will be explained later.
  3. In the widget that appears, select the model you want to use and click the load button. (The default option is recommended.)
  4. Next, select the image layer you want to annotate.
  5. Then, select whether you want to do instance segmentation or semantic segmentation. (Note that for 3D images, semantic segmentation should be chosen in the current version.)
  6. Finally, select the output layer as "shapes" for instance segmentation or "labels" for semantic segmentation. (For instance segmentation, the "Accept" layer can also be used.)
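Step 1 above can also be done from a script rather than the GUI. A minimal sketch using the standard napari API, assuming you already have an image array in memory (the layer name is arbitrary):

```python
def open_in_napari(image, name="sample"):
    """Open an image array in a napari viewer and start the event loop."""
    # Deferred import so this sketch can be read without napari installed.
    import napari

    viewer = napari.Viewer()
    viewer.add_image(image, name=name)
    napari.run()  # blocks until the viewer window is closed
    return viewer
```

With the viewer open, launch napari-SAM4IS from napari's Plugins menu; the SAM-Box, SAM-Predict, and Accepted layers are created when the plugin starts.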


Annotation

  1. Select the SAM-Box layer and use the rectangle tool to enclose the object you want to segment.
  2. An automatic segmentation mask will be created and output to the SAM-Predict layer.
  3. If you want to make adjustments, do so in the SAM-Predict layer.
  4. To accept or reject the annotation, press "a" or "r" on the keyboard, respectively.
  5. If you accept the annotation, it will be output as label 1 for semantic segmentation or converted to a polygon and output to the designated layer for instance segmentation.
  6. If you reject the annotation, the segmentation mask in the SAM-Predict layer will be discarded.
  7. After accepting or rejecting the annotation, the SAM-Predict layer will automatically reset to blank and return to the SAM-Box layer.


Saving

  1. If you have output to the labels layer, use napari's standard functionality to save the mask.
  2. If you have output to the shapes layer, you can save the shapes layer using napari's standard functionality, or you can click the "save" button to output a JSON file in COCO format for each image in the folder. (The JSON file will have the same name as the image.)
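The exported COCO JSON can be inspected with the standard library. A small sketch, assuming the file follows the usual COCO annotation layout (top-level ``images``, ``annotations``, and ``categories`` lists):

```python
import json


def summarize_coco(path):
    """Return counts of images, annotations, and categories in a COCO file."""
    with open(path) as f:
        coco = json.load(f)
    return {
        "images": len(coco.get("images", [])),
        "annotations": len(coco.get("annotations", [])),
        "categories": len(coco.get("categories", [])),
    }
```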


Contributing

Contributions are very welcome. Tests can be run with tox; please ensure that coverage at least stays the same before you submit a pull request.


License

Distributed under the terms of the Apache Software License 2.0, "napari-SAM4IS" is free and open source software.


Issues

If you encounter any problems, please file an issue along with a detailed description.


  • Version: 0.0.6
  • Last updated: 06 December 2023
  • First released: 28 April 2023





Requirements:

  • numpy
  • magicgui
  • qtpy
  • torch
  • torchvision
  • scikit-image
