CSBC/PS-ON Image Analysis Hackathon 2022
This is a formal application to participate in the CSBC/PS-ON Image Analysis Hackathon that will take place virtually during Feb. 15 - Feb. 18, 2022. For questions and comments, please contact artem_sokolov@hms.harvard.edu and darren.tyson@vanderbilt.edu.
Email *
Name *
Institution (university, research institute, company, etc.) *
Career Stage *
Consortium/Network *
What is your overall level of experience with image processing? *
No experience at all
Part of my daily job
What programming languages do you feel comfortable with? *
Do you have access to a CUDA-compatible GPU? *
Challenge Descriptions
Please read a brief description of each challenge and rate your level of interest for each. Note that for a challenge to be included at the hackathon there must be sufficient interest!

1. Automated Detection of Microscopy Artifacts
Multiplex images of tissue contain information on the gene expression, morphology, and spatial distribution of individual cells comprising biologically specialized niches. However, accurate extraction of cell-level features from pixel-level data is hindered by the presence of microscopy artifacts. Manual curation of noisy cell segmentation instances scales poorly with increasing dataset size, and methods capable of automated artifact detection are needed to enhance workflow efficiency, minimize curator burden, and mitigate human bias. Challenge participants will be asked to draw on classical and/or machine learning methods to develop probabilistic classifiers for automated detection of microscopy artifacts in images of tissue.
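As a minimal illustration of the kind of probabilistic classifier this challenge asks for, the sketch below trains a logistic model on two hypothetical per-cell QC features (fraction of saturated pixels and a focus score). The feature choices and the synthetic training data are assumptions for illustration only, not challenge data or a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: artifact-covered cells tend to have more
# saturated pixels and a lower focus score than clean cells.
n = 200
X_clean = rng.normal([0.02, 0.80], 0.05, size=(n, 2))  # [saturation, focus]
X_art = rng.normal([0.30, 0.40], 0.05, size=(n, 2))
X = np.vstack([X_clean, X_art])
y = np.r_[np.zeros(n), np.ones(n)]  # 1 = artifact

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 1.0 * (X.T @ (p - y)) / len(y)
    b -= 1.0 * np.mean(p - y)

# Probability that a heavily saturated, blurry cell is an artifact.
p_art = sigmoid(np.array([0.35, 0.35]) @ w + b)
print(float(p_art))
```

The output is a calibrated-looking probability rather than a hard label, which is what lets downstream curation steps set their own confidence thresholds.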

Schematic for Challenge 1: Automated Detection of Microscopy Artifacts
2. Towards artefact-robust cell segmentation models
Segmentation models can produce ambiguous and unexpected results when low-quality images are used for training. Existing methods based on machine learning techniques are trained on multiple object classes to delineate foreground signals (i.e. cells) from background signals (i.e. areas lacking biological sample). Although existing models have extensive training data on manually curated foreground objects (i.e. nuclei/cells), they lack quality control (QC) annotations for artefacts, which results in the aberrant classification of spurious imaging aberrations as cells. Challenge participants will use curated QC ground truth annotations to train cell segmentation models that are robust against visual artefacts in multiplex images of normal and diseased human tissue.

3. Virtual IF staining for 3D reconstruction and label-free virtual IF staining

Despite the additional insight gained by measuring the tumor microenvironment with whole-slide imaging or in 3D, it can be prohibitively expensive and time-consuming to process tens or hundreds of tissue sections with multiplex tissue imaging platforms such as cyclic immunofluorescence (CyCIF). In addition, some tasks, such as spatially targeted -omics acquisitions, can benefit from a map of IF marker positions, but CyCIF and other stains may produce irreversible molecular changes. Mapping IF markers with label-free microscopy can enable targeted experiments or single-section analysis of molecular data. Challenge participants will develop multi-modal image translation approaches to reconstruct 3D CyCIF representations and to virtually stain 2D sections from label-free microscopy.


4. Variational Autoencoder for single cell image feature extraction
Current analysis of multiplexed tissue imaging relies on mean intensity features computed across markers, but mean intensity is known not to be the optimal representation. Membrane, nuclear, and cellular markers all have subcellular localization patterns that may not be properly captured by mean intensities. This raises the question of how to represent single-cell multiplexed images in vector space in a way that accurately captures all available information at subcellular resolution. Challenge participants will develop novel Variational AutoEncoder (VAE) approaches to extract biologically meaningful features and compare them to traditional mean intensity features.
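For context, the mean-intensity baseline that VAE features would be compared against can be computed directly from a segmentation label mask. The sketch below assumes a channels-first image stack and integer cell labels; the array shapes and toy data are illustrative assumptions.

```python
import numpy as np

def mean_intensity_features(img, labels):
    """img: (C, H, W) marker stack; labels: (H, W) ints, 0 = background.
    Returns an (n_cells, C) matrix of per-cell mean intensities."""
    cell_ids = np.unique(labels)
    cell_ids = cell_ids[cell_ids != 0]
    feats = np.zeros((len(cell_ids), img.shape[0]))
    for i, cid in enumerate(cell_ids):
        mask = labels == cid
        feats[i] = img[:, mask].mean(axis=1)  # mean over the cell's pixels
    return feats

# Tiny example: 2 markers, two single-pixel "cells".
img = np.zeros((2, 4, 4))
labels = np.zeros((4, 4), dtype=int)
labels[0, 0], labels[2, 2] = 1, 2
img[0, 0, 0], img[1, 2, 2] = 5.0, 3.0
print(mean_intensity_features(img, labels))
```

Because the mean collapses each cell's pixel distribution to a single number per marker, any subcellular localization pattern is lost, which is exactly the limitation the VAE approach targets.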
5. Detect and correct spatial cross-talk
To take full advantage of the basic science and clinical potential of multiplexed imaging technologies, various challenges, such as cell segmentation and cellular feature extraction, must first be addressed. However, the uncertainty in cell boundaries and technical noise across protein markers in the image can cause inaccurate cell segmentation in dense tissue and create conditions in which signals from adjacent cells spatially “bleed” into each other. This leads to nonsensical cell states as determined by unsupervised clustering methods. Recent efforts have led to the development of a novel spatial cross-talk correction method called REinforcement Dynamic Spillover EliminAtion (REDSEA, PMID: 34295327). Challenge participants will evaluate this method on datasets exhibiting lateral spillover and benchmark it against alternative methods.
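To make the spillover problem concrete, the toy sketch below reduces each cell's measured marker vector by a fixed fraction of its neighbors' signal. This is a deliberately simplified illustration of the effect being corrected, not the REDSEA algorithm itself, and the spillover fraction `alpha` is an assumed constant.

```python
import numpy as np

def correct_spillover(signals, adjacency, alpha=0.1):
    """signals: (n_cells, n_markers) measured intensities;
    adjacency: (n_cells, n_cells) binary cell-neighbor matrix;
    alpha: assumed fraction of neighbor signal that bled in."""
    spill = alpha * (adjacency @ signals)  # signal attributable to neighbors
    return np.clip(signals - spill, 0.0, None)

# Two adjacent cells: cell 0 truly expresses marker A, cell 1 marker B,
# but a little of each has bled into the other.
measured = np.array([[10.0, 1.0],
                     [1.0, 10.0]])
adjacency = np.array([[0, 1],
                      [1, 0]])
corrected = correct_spillover(measured, adjacency)
print(corrected)
```

After correction the spurious off-diagonal signal drops out, so unsupervised clustering no longer sees a nonsensical "A+B double-positive" state. REDSEA itself weights the correction by shared boundary pixels rather than using a global constant.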

6. Visual Comparison of Single-Cell Clustering Results
Clustering cells based on common marker expression levels is a crucial step in identifying cell types, especially when no ground-truth information is available. However, clustering results depend strongly on the chosen aggregation strategy, on preprocessing steps such as transformation and normalization, and on input parameters. Without ground truth, the result is hard to judge from statistical summary measures alone. Involving biomedical experts in quality control and comparison of results can thus help verify outcomes and decide which algorithm and settings perform best. Challenge participants will design and implement an interactive visual interface for comparing clustering results, using visual comparison techniques such as juxtaposition with small multiples, explicit encoding of average outcomes, similarities, and differences, and a range of statistical measures to visually communicate computed clustering quality.

7. Enabling Image Analysis for TB-scale data
Modern highly-multiplexed imaging methods are capable of producing TB-scale datasets, with individual images requiring dozens of GB of storage and extensive resources for processing. The scale of today’s images poses substantial challenges for applying existing methods that may have been developed and prototyped on smaller-scale data. As image sizes keep increasing, a shift towards more efficient implementations may be required. Challenge participants will work to optimize an existing image processing method for both runtime and RAM usage. Possible approaches may include standard profiling techniques, porting resource-intensive computation to a lower-level language (e.g., C++), enabling GPU utilization and/or parallel processing, or improving algorithmic complexity.
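One common pattern for bounding RAM on images this large is to stream the data tile-by-tile through a memory map, so peak memory scales with the tile size rather than the image size. The sketch below demonstrates the idea on a small stand-in file; the tile size and flat on-disk layout are illustrative assumptions (real data would come from OME-TIFF readers).

```python
import numpy as np
import tempfile, os

def tiled_mean(path, shape, dtype=np.uint16, tile=512):
    """Compute the mean of a large raw image plane one tile at a time."""
    img = np.memmap(path, mode="r", dtype=dtype, shape=shape)
    total, count = 0.0, 0
    for y in range(0, shape[0], tile):
        for x in range(0, shape[1], tile):
            # Copy one tile into RAM; peak usage is one tile, not the image.
            block = np.asarray(img[y:y + tile, x:x + tile], dtype=np.float64)
            total += block.sum()
            count += block.size
    return total / count

# Demo on a small file standing in for a multi-GB plane.
with tempfile.NamedTemporaryFile(delete=False) as f:
    np.full((1024, 1024), 7, dtype=np.uint16).tofile(f)
m = tiled_mean(f.name, (1024, 1024))
os.remove(f.name)
print(m)
```

The same tiling structure is what makes GPU offload or multiprocessing straightforward to bolt on later, since each tile is an independent unit of work.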

8. Unsupervised thumbnail generation for whole-slide multiplexed microscopy images
Image thumbnails provide users with rapid contextual information on imaging data in a small space. They also support the use of visual memory to recall individual interesting images from a large collection. Thumbnail generation strategies for brightfield (photographic) images are straightforward, but for highly multiplexed images with many channels and high dynamic range it is not immediately apparent how to optimally reduce the available information down to a small RGB image. Challenge participants will develop an approach to transform microscopy images in OME-TIFF format into thumbnail images stored as 300x300-pixel JPEG files. Input images will be as large as 50,000 pixels in the X and Y dimensions and contain up to 40 discrete channels of 16-bit or 8-bit integer pixel data.
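As a naive point of reference for this task, the sketch below picks three channels, contrast-stretches each to its 1st-99th percentile range, and downsamples to the target size by striding. The channel selection, percentile cutoffs, and strided downsampling are all assumptions; a real solution would need a smarter reduction of up to 40 channels and proper JPEG encoding.

```python
import numpy as np

def naive_thumbnail(img, rgb_channels=(0, 1, 2), size=300):
    """img: (C, H, W) integer marker stack -> (size, size, 3) uint8 RGB."""
    C, H, W = img.shape
    step_y, step_x = max(1, H // size), max(1, W // size)
    out = np.zeros((size, size, 3), dtype=np.uint8)
    for k, ch in enumerate(rgb_channels):
        # Strided downsample, then percentile-based contrast stretch.
        plane = img[ch, ::step_y, ::step_x][:size, :size].astype(np.float64)
        lo, hi = np.percentile(plane, [1, 99])
        scaled = np.clip((plane - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
        out[:plane.shape[0], :plane.shape[1], k] = (scaled * 255).astype(np.uint8)
    return out

# Tiny demo: a 3-channel horizontal-gradient image.
img = np.tile(np.linspace(0, 65535, 600, dtype=np.uint16), (3, 600, 1))
thumb = naive_thumbnail(img)
print(thumb.shape, thumb.dtype)
```

Even this baseline highlights the open questions: which 3 of 40 channels to show, how to handle channels with wildly different dynamic ranges, and whether strided sampling loses small bright structures that mean- or max-pooling would keep.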

9. End-to-End Image Processing and Analysis with Galaxy
As the multiplex tissue imaging field continues to grow, modern computational infrastructure is required to meet key data analysis needs: scalable and standardized workflows to process large datasets, automated and reproducible analyses for harmonizing and comparing results, and a graphical user interface that makes these analyses accessible to all scientists regardless of their informatics expertise. To meet these needs, we have developed a tool suite for end-to-end multiplex tissue image analysis in the Galaxy computational workbench (https://galaxyproject.org). The Galaxy community is expanding its set of imaging algorithms and methods, but integration and development of new tools, methods, and workflows are still needed. Challenge participants will “choose their own adventure” from one or more of the following: i) bring your algorithm/method (or choose from a predefined list) and integrate it into Galaxy; ii) bring your own data and process it with the Galaxy tool suite; and iii) build your own Galaxy workflow to suit the data type/analysis you need.

10. 3D Volume Visualization through Neuroglancer
Wide-field microscopes are increasingly used to collect optical sections of human tissue, allowing multiplex volumetric image data to be reconstructed. 3D images have been generated by high-resolution optical sectioning of selected fields of view. The volumetric data can enable novel biological analysis of tissue anatomy and morphology. However, only a few tools (mostly proprietary) can handle the size and multiplex structure of these datasets. Challenge participants will extend an open-source volume visualization tool developed by Google (Neuroglancer: https://github.com/google/neuroglancer) to handle multiplex volumetric image data in OME-TIFF format, as typically used in digital histopathology.

11. Automated Annotation of Endosomes Within 2D Electron Microscopy Image Montages
Electron microscopy provides ultrastructural detail at a resolution unparalleled by other imaging modalities. This level of detail in human metastatic breast cancer tissues has helped identify potentially therapeutically vulnerable structures, such as endocytic vesicles, at the nanoscale. These vesicles could be used to trick cancer cells into taking up protein-conjugated therapy, thereby creating a “self-destruct” mechanism of treatment. However, a method is needed to quantify the number of endosomes relative to the number of cancer cells in order to test this hypothesis and determine when a tumor is undergoing a sufficient level of endocytosis for the mechanism to be effective. While image annotation and segmentation for 3D electron microscopy have recently made significant gains, these models rely heavily on adjacent-slice information for context, which is not available in 2D image montages. In this challenge, participants will develop deep learning methods to annotate endosomes in a high-resolution 2D image montage collected via scanning electron microscopy.

12. Automated Artefact Removal in Multiplexed Fluorescence Images for Cosmetic Purposes
Image and sample artefacts are common in microscopy images. Although automatic detection helps to filter out the affected cells for quantification, visual appearance still suffers, most evidently when viewing multiple channels in 3D. Artefacts also hamper qualitative analysis and are distracting when presented to an interdisciplinary audience who may not be used to interpreting them. This challenge aims to develop tools that automatically remove artefacts such as lint and fluorescent antibody blobs, much as the Photoshop clone tool reduces cosmetic aberrations in photographs. NOTE: The resulting images are not expected to be used for quantitative analysis.
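To illustrate the clone-tool idea in its simplest form, the sketch below overwrites a masked artifact region with pixels copied from a nearby clean patch at a fixed offset. The offset and the synthetic "lint blob" are assumptions; real tools would select source regions automatically and blend seams.

```python
import numpy as np

def clone_fill(img, mask, offset=(0, 20)):
    """img: (H, W) single channel; mask: boolean artifact mask;
    offset: assumed (dy, dx) shift to a clean source region."""
    out = img.copy()
    ys, xs = np.nonzero(mask)
    src_ys = np.clip(ys + offset[0], 0, img.shape[0] - 1)
    src_xs = np.clip(xs + offset[1], 0, img.shape[1] - 1)
    out[ys, xs] = img[src_ys, src_xs]  # copy clean pixels over the artifact
    return out

# Demo: a bright simulated fluorescent blob on a flat background.
img = np.full((64, 64), 10.0)
img[30:34, 10:14] = 200.0
mask = img > 100
cleaned = clone_fill(img, mask)
print(cleaned.max())
```

Consistent with the NOTE above, a fill like this fabricates pixel values, which is exactly why the corrected images are for presentation only and never for quantitative analysis.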
Please rate your interest in each challenge *
Not interested
Somewhat interested
Very interested
Automated Detection of Microscopy Artifacts
Towards artefact-robust cell segmentation models
Virtual IF staining for 3D reconstruction and label-free virtual IF staining
Variational Autoencoder for single cell image feature extraction
Detect and correct spatial cross-talk
Visual Comparison of Single-Cell Clustering Results
Enabling Image Analysis for TB-scale data
Unsupervised thumbnail generation for whole-slide multiplexed microscopy images
End-to-End Image Processing and Analysis with Galaxy
3D Volume Visualization through Neuroglancer
Automated Annotation of Endosomes Within 2D Electron Microscopy Image Montages
Artefact Removal for Cosmetic Purposes
Questions and comments
For questions and comments, please contact artem_sokolov@hms.harvard.edu and darren.tyson@vanderbilt.edu.