merlin.analysis package

Submodules

merlin.analysis.decode module

class merlin.analysis.decode.BarcodeSavingParallelAnalysisTask(dataSet: merlin.core.dataset.DataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.ParallelAnalysisTask

An abstract analysis class that saves barcodes into a barcode database.

get_barcode_database() → merlin.util.barcodedb.BarcodeDB[source]

Get the barcode database this analysis task saves barcodes into.

Returns: The barcode database reference.

class merlin.analysis.decode.Decode(dataSet: merlin.core.dataset.MERFISHDataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.decode.BarcodeSavingParallelAnalysisTask

An analysis task that extracts barcodes from images.

fragment_count()[source]
get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

get_codebook() → merlin.data.codebook.Codebook[source]
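As a rough illustration of the barcode extraction this task performs, the sketch below assigns a measured pixel trace to the nearest codeword after magnitude normalization. The 4-bit codebook, the trace values, and the plain nearest-neighbor assignment are illustrative assumptions, not MERlin's actual decoding pipeline.

```python
import numpy as np

# Toy 4-bit codebook with three codewords (illustrative, not a real codebook).
codebook = np.array([[1, 1, 0, 0],
                     [0, 0, 1, 1],
                     [1, 0, 1, 0]], dtype=float)

# Normalize codewords and a measured pixel trace to unit magnitude, then
# assign the codeword with the smallest Euclidean distance.
unit_codes = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
trace = np.array([0.9, 1.1, 0.05, 0.1])
unit_trace = trace / np.linalg.norm(trace)

distances = np.linalg.norm(unit_codes - unit_trace, axis=1)
best = int(np.argmin(distances))  # index of the closest codeword
```

Here the trace is brightest in the first two bits, so it lands on the first codeword.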

merlin.analysis.exportbarcodes module

class merlin.analysis.exportbarcodes.ExportBarcodes(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.AnalysisTask

An analysis task that exports barcodes from a barcode database.

get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

merlin.analysis.filterbarcodes module

class merlin.analysis.filterbarcodes.FilterBarcodes(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.decode.BarcodeSavingParallelAnalysisTask

An analysis task that filters barcodes based on area and mean intensity.

fragment_count()[source]
get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

get_codebook()[source]
class merlin.analysis.filterbarcodes.GenerateAdaptiveThreshold(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.AnalysisTask

An analysis task that generates a three-dimensional histogram of mean intensity, area, and minimum distance for barcodes as they are decoded.

fragment_count()[source]
get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

get_blank_count_histogram() → numpy.ndarray[source]
get_coding_count_histogram() → numpy.ndarray[source]
get_total_count_histogram() → numpy.ndarray[source]
get_area_bins() → numpy.ndarray[source]
get_distance_bins() → numpy.ndarray[source]
get_intensity_bins() → numpy.ndarray[source]
get_blank_fraction_histogram() → numpy.ndarray[source]

Get the normalized blank fraction histogram indicating the normalized blank fraction for each intensity, distance, and area bin.

Returns: The normalized blank fraction histogram. The histogram has three dimensions: mean intensity, minimum distance, and area. The bins in each dimension are defined by the bins returned by get_intensity_bins, get_distance_bins, and get_area_bins, respectively. Each entry indicates the number of blank barcodes divided by the total number of barcodes within the corresponding bin, normalized by the fraction of blank barcodes in the codebook. With this normalization, when all (both blank and coding) barcodes are selected with equal probability, the blank fraction is expected to be 1.
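A toy sketch of the normalization described above, assuming the per-bin blank fraction is computed as blank counts divided by total counts (the form consistent with an expected value of 1 under uniform selection). The counts and the codebook composition below are made up for illustration; real histograms come from get_blank_count_histogram and get_coding_count_histogram.

```python
import numpy as np

# Hypothetical 2x2x2 histograms over (intensity, distance, area) bins.
blank_counts = np.array([[[2., 1.], [0., 1.]],
                         [[1., 0.], [0., 0.]]])
coding_counts = np.array([[[8., 9.], [10., 9.]],
                          [[9., 10.], [10., 10.]]])

# Assumed codebook composition: 10 blank and 90 coding barcodes.
n_blank, n_coding = 10, 90
blank_fraction_in_codebook = n_blank / (n_blank + n_coding)  # 0.1

# Per-bin blank fraction, leaving empty bins at zero.
total = blank_counts + coding_counts
blank_fraction = np.where(total > 0, blank_counts / np.where(total > 0, total, 1), 0.0)

# Normalize so uniform selection of all barcodes yields a value near 1.
normalized = blank_fraction / blank_fraction_in_codebook
```

In the first bin, 2 of 10 barcodes are blank, twice the codebook's blank fraction of 0.1, so the normalized entry is 2.0.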

calculate_misidentification_rate_for_threshold(threshold: float) → float[source]

Calculate the misidentification rate for a specified blank fraction threshold.

Parameters

threshold – the normalized blank fraction threshold

Returns: The estimated misidentification rate, estimated as the number of observed blank barcodes per blank barcode in the codebook divided by the number of observed coding barcodes per coding barcode in the codebook.

calculate_threshold_for_misidentification_rate(targetMisidentificationRate: float) → float[source]

Calculate the normalized blank fraction threshold that achieves a specified misidentification rate.

Parameters

targetMisidentificationRate – the target misidentification rate

Returns: the normalized blank fraction threshold that achieves targetMisidentificationRate.

calculate_barcode_count_for_threshold(threshold: float) → float[source]

Calculate the number of barcodes remaining after applying the specified normalized blank fraction threshold.

Parameters

threshold – the normalized blank fraction threshold

Returns: The number of barcodes passing the threshold.
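The methods above suggest a simple way to pick a threshold: because the misidentification rate grows monotonically with the blank fraction threshold, a bisection over calculate_misidentification_rate_for_threshold can find the threshold for a target rate. The sketch below uses a toy monotone rate function in place of the real estimator; the function, the search bounds, and the tolerance are all illustrative assumptions.

```python
def misid_rate_for_threshold(threshold: float) -> float:
    # Toy stand-in for calculate_misidentification_rate_for_threshold:
    # a smooth, monotonically increasing function of the threshold.
    return threshold / (1.0 + threshold)

def threshold_for_misid_rate(target: float, tol: float = 1e-9) -> float:
    # Bisection: the rate is monotone, so the target is bracketed by
    # [lo, hi] and the interval is halved until it is below tol.
    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if misid_rate_for_threshold(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t = threshold_for_misid_rate(0.05)
```

Evaluating the toy rate function at the returned threshold recovers the 5% target to within the tolerance.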

extract_barcodes_with_threshold(blankThreshold: float, barcodeSet: pandas.core.frame.DataFrame) → pandas.core.frame.DataFrame[source]
class merlin.analysis.filterbarcodes.AdaptiveFilterBarcodes(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.decode.BarcodeSavingParallelAnalysisTask

An analysis task that filters barcodes using a mean intensity threshold selected for each area based on the abundance of blank barcodes. The threshold is selected to achieve a specified misidentification rate.

fragment_count()[source]
get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

get_adaptive_thresholds()[source]

Get the adaptive thresholds used for filtering barcodes.

Returns: The GenerateAdaptiveThreshold task used by this adaptive filter.

get_codebook() → merlin.data.codebook.Codebook[source]

merlin.analysis.generatemosaic module

class merlin.analysis.generatemosaic.GenerateMosaic(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.AnalysisTask

An analysis task that generates mosaic images by compiling the different fields of view.

get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

get_mosaic() → numpy.ndarray[source]

Get the mosaic generated by this analysis task.

Returns

a 5-dimensional array containing the mosaic. The images are arranged as [channel, zIndex, 1, x, y]. The order of the channels is as specified in the provided parameters file or in the data organization if no data channels are specified.
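Indexing the 5-dimensional mosaic described above can be sketched as follows; the array shape here (2 channels, 3 z planes, a 4x5 pixel mosaic) is made up for illustration.

```python
import numpy as np

# Placeholder for the array returned by get_mosaic(), arranged as
# [channel, zIndex, 1, x, y].
mosaic = np.zeros((2, 3, 1, 4, 5))

# Extract the 2-D image for channel 1 at z index 0; the singleton third
# axis is consumed by the third index.
image = mosaic[1, 0, 0]
```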

merlin.analysis.globalalign module

class merlin.analysis.globalalign.GlobalAlignment(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.AnalysisTask

An abstract analysis task that determines the positions of the different fields of view relative to each other in order to construct a global alignment.

abstract fov_coordinates_to_global(fov: int, fovCoordinates: Tuple[float, float]) → Tuple[float, float][source]

Calculates the global coordinates based on the local coordinates in the specified field of view.

Parameters
  • fov – the fov where the coordinates are measured

  • fovCoordinates – a tuple containing the x and y coordinates or z, x, and y coordinates (in pixels) in the specified fov.

Returns

A tuple containing the global x and y coordinates or z, x, and y coordinates (in microns)

abstract global_coordinates_to_fov(fov: int, globalCoordinates: List[Tuple[float, float]]) → List[Tuple[float, float]][source]

Calculates the fov pixel coordinates for a list of global coordinates in the specified field of view.

Parameters
  • fov – the fov where the coordinates are measured

  • globalCoordinates – a list of tuples containing the global x and y coordinates (in microns).

Returns

A list of tuples containing the corresponding x and y pixel coordinates in the specified fov

abstract fov_to_global_transform(fov: int) → numpy.ndarray[source]

Calculates the transformation matrix for an affine transformation that transforms the fov coordinates to global coordinates.

Parameters

fov – the fov to calculate the transformation

Returns

a numpy array containing the transformation matrix

abstract get_global_extent() → Tuple[float, float, float, float][source]

Get the extent of the global coordinate system.

Returns

a tuple where the first two indexes correspond to the minimum x and y extents and the last two indexes correspond to the maximum x and y extents. All are in units of microns.
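A minimal sketch of the affine fov-to-global mapping that fov_to_global_transform describes: in homogeneous coordinates, a scale from pixels to microns followed by a translation to the fov's stage position. The pixel size (0.1 microns) and stage position used below are illustrative assumptions, not values from any real data set.

```python
import numpy as np

# Assumed imaging parameters (illustrative only).
microns_per_pixel = 0.1
stage_x, stage_y = 1000.0, 2000.0  # fov position in global microns

# 3x3 homogeneous transformation: scale, then translate.
transform = np.array([[microns_per_pixel, 0.0, stage_x],
                      [0.0, microns_per_pixel, stage_y],
                      [0.0, 0.0, 1.0]])

# Map the fov pixel coordinate (x, y) = (500, 300) to global microns.
pixel = np.array([500.0, 300.0, 1.0])
global_x, global_y, _ = transform @ pixel
```

Inverting the same matrix gives the global-to-fov direction that global_coordinates_to_fov implements.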

class merlin.analysis.globalalign.SimpleGlobalAlignment(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.globalalign.GlobalAlignment

A global alignment that uses the theoretical stage positions in order to determine the relative positions of each field of view.

get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

fov_coordinates_to_global(fov, fovCoordinates)[source]

Calculates the global coordinates based on the local coordinates in the specified field of view.

Parameters
  • fov – the fov where the coordinates are measured

  • fovCoordinates – a tuple containing the x and y coordinates or z, x, and y coordinates (in pixels) in the specified fov.

Returns

A tuple containing the global x and y coordinates or z, x, and y coordinates (in microns)

fov_global_extent(fov: int) → List[float][source]

Returns the global extent of an fov, output interleaved as xmin, ymin, xmax, ymax

Parameters

fov – the fov of interest

Returns

a list of four floats, representing the xmin, ymin, xmax, ymax

global_coordinates_to_fov(fov, globalCoordinates)[source]

Calculates the fov pixel coordinates for a list of global coordinates in the specified field of view.

Parameters
  • fov – the fov where the coordinates are measured

  • globalCoordinates – a list of tuples containing the global x and y coordinates (in microns).

Returns

A list of tuples containing the corresponding x and y pixel coordinates in the specified fov

fov_to_global_transform(fov)[source]

Calculates the transformation matrix for an affine transformation that transforms the fov coordinates to global coordinates.

Parameters

fov – the fov to calculate the transformation

Returns

a numpy array containing the transformation matrix

get_global_extent()[source]

Get the extent of the global coordinate system.

Returns

a tuple where the first two indexes correspond to the minimum x and y extents and the last two indexes correspond to the maximum x and y extents. All are in units of microns.

class merlin.analysis.globalalign.CorrelationGlobalAlignment(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.globalalign.GlobalAlignment

A global alignment that uses the cross-correlation between overlapping regions in order to determine the relative positions of each field of view.

get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

fov_coordinates_to_global(fov, fovCoordinates)[source]

Calculates the global coordinates based on the local coordinates in the specified field of view.

Parameters
  • fov – the fov where the coordinates are measured

  • fovCoordinates – a tuple containing the x and y coordinates or z, x, and y coordinates (in pixels) in the specified fov.

Returns

A tuple containing the global x and y coordinates or z, x, and y coordinates (in microns)

fov_to_global_transform(fov)[source]

Calculates the transformation matrix for an affine transformation that transforms the fov coordinates to global coordinates.

Parameters

fov – the fov to calculate the transformation

Returns

a numpy array containing the transformation matrix

get_global_extent()[source]

Get the extent of the global coordinate system.

Returns

a tuple where the first two indexes correspond to the minimum x and y extents and the last two indexes correspond to the maximum x and y extents. All are in units of microns.

merlin.analysis.optimize module

class merlin.analysis.optimize.OptimizeIteration(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.decode.BarcodeSavingParallelAnalysisTask

An analysis task for performing a single iteration of scale factor optimization.

get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

fragment_count()[source]
get_codebook() → merlin.data.codebook.Codebook[source]
get_reference_color()[source]
get_chromatic_corrector() → merlin.util.aberration.ChromaticCorrector[source]

Get the chromatic corrector estimated from this optimization iteration

Returns

The chromatic corrector.

get_scale_factors() → numpy.ndarray[source]

Get the final, optimized scale factors.

Returns

a one-dimensional numpy array where the i’th entry is the scale factor corresponding to the i’th bit.

get_backgrounds() → numpy.ndarray[source]
get_scale_factor_history() → numpy.ndarray[source]

Get the scale factors cached for each iteration of the optimization.

Returns

a two-dimensional numpy array where the i,j’th entry is the scale factor corresponding to the i’th bit in the j’th iteration.

get_barcode_count_history() → numpy.ndarray[source]

Get the set of barcode counts for each iteration of the optimization.

Returns

a two-dimensional numpy array where the i,j’th entry is the barcode count corresponding to the i’th barcode in the j’th iteration.
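One common use of the history arrays above is checking that the optimization has converged. The sketch below follows the docstring's layout, a (bits, iterations) array, and computes each bit's relative scale-factor change in the final iteration; the values and the 5% tolerance are illustrative assumptions.

```python
import numpy as np

# Hypothetical history for 2 bits over 4 iterations, shaped
# (bits, iterations) as described by get_scale_factor_history().
history = np.array([[2.0, 1.5, 1.45, 1.44],
                    [0.5, 0.8, 0.82, 0.82]])

# Relative change of each bit's scale factor between the last two
# iterations; small changes suggest the iteration has settled.
relative_change = np.abs(history[:, -1] - history[:, -2]) / history[:, -2]
converged = bool(np.all(relative_change < 0.05))
```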

merlin.analysis.plotperformance module

class merlin.analysis.plotperformance.PlotPerformance(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.AnalysisTask

An analysis task that generates plots depicting metrics of the MERFISH decoding.

get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

merlin.analysis.preprocess module

class merlin.analysis.preprocess.Preprocess(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.ParallelAnalysisTask

An abstract class for preparing data for barcode calling.

get_pixel_histogram(fov=None)[source]
class merlin.analysis.preprocess.DeconvolutionPreprocess(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.preprocess.Preprocess

fragment_count()[source]
get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that

this analysis task depends on. If there are no dependencies, an empty list is returned.

get_codebook() → merlin.data.codebook.Codebook[source]
get_processed_image_set(fov, zIndex: int = None, chromaticCorrector: merlin.util.aberration.ChromaticCorrector = None) → numpy.ndarray[source]
get_processed_image(fov: int, dataChannel: int, zIndex: int, chromaticCorrector: merlin.util.aberration.ChromaticCorrector = None) → numpy.ndarray[source]

merlin.analysis.segment module

merlin.analysis.warp module

class merlin.analysis.warp.Warp(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.core.analysistask.ParallelAnalysisTask

An abstract class for warping a set of images so that the corresponding pixels align between images taken in different imaging rounds.

get_aligned_image_set(fov: int, chromaticCorrector: merlin.util.aberration.ChromaticCorrector = None) → numpy.ndarray[source]

Get the set of transformed images for the specified fov.

Parameters
  • fov – index of the field of view

  • chromaticCorrector – the ChromaticCorrector to use to chromatically correct the images. If not supplied, no correction is performed.

Returns

a 4-dimensional numpy array containing the aligned images. The images are arranged as [channel, zIndex, x, y]

get_aligned_image(fov: int, dataChannel: int, zIndex: int, chromaticCorrector: merlin.util.aberration.ChromaticCorrector = None) → numpy.ndarray[source]

Get the specified transformed image

Parameters
  • fov – index of the field of view

  • dataChannel – index of the data channel

  • zIndex – index of the z position

  • chromaticCorrector – the ChromaticCorrector to use to chromatically correct the images. If not supplied, no correction is performed.

Returns

a 2-dimensional numpy array containing the specified image

get_transformation(fov: int, dataChannel: int = None) → Union[skimage.transform._geometric.EuclideanTransform, List[skimage.transform._geometric.EuclideanTransform]][source]

Get the transformations for aligning images for the specified field of view.

Parameters
  • fov – the fov to get the transformations for.

  • dataChannel – the index of the data channel to get the transformation for. If None, then all data channels are returned.

Returns

a EuclideanTransform if dataChannel is specified or a list of EuclideanTransforms for all dataChannels if dataChannel is not specified.
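A Euclidean (rigid) transform of the kind returned by get_transformation is a rotation plus a translation. The sketch below builds one as a plain homogeneous matrix and applies it to a pixel coordinate; the angle and offset are illustrative, and a skimage EuclideanTransform exposes the same matrix via its params attribute.

```python
import numpy as np

# Assumed rigid transform: a 90 degree rotation and a 10-pixel x shift.
theta = np.pi / 2
tx, ty = 10.0, 0.0

# Homogeneous 3x3 matrix: rotation block plus translation column.
transform = np.array([[np.cos(theta), -np.sin(theta), tx],
                      [np.sin(theta),  np.cos(theta), ty],
                      [0.0,            0.0,           1.0]])

# Map the pixel (1, 0) through the transform.
x, y, _ = transform @ np.array([1.0, 0.0, 1.0])
```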

class merlin.analysis.warp.FiducialCorrelationWarp(dataSet, parameters=None, analysisName=None)[source]

Bases: merlin.analysis.warp.Warp

An analysis task that warps a set of images taken in different imaging rounds based on the cross-correlation between fiducial images.

fragment_count()[source]
get_estimated_memory()[source]

Get an estimate of how much memory is required for this AnalysisTask.

Returns

a memory estimate in megabytes.

get_estimated_time()[source]

Get an estimate for the amount of time required to complete this AnalysisTask.

Returns

a time estimate in minutes.

get_dependencies()[source]

Get the analysis tasks that must be completed before this analysis task can proceed.

Returns

a list containing the names of the analysis tasks that this analysis task depends on. If there are no dependencies, an empty list is returned.

Module contents