Scalable Insets
Pattern-Driven Navigation in 2D Multiscale Visualizations
Scalable Insets is a new technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces such as gigapixel images, matrices, or maps. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the patterns. A minimal sketch of the inset-selection idea follows this entry.
Website: http://scalable-insets.lekschas.de/
Source code: https://github.com/flekschas/higlass-scalable-insets
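The Python sketch below illustrates, in hedged form, the core selection step: which annotated patterns are too small to identify at the current zoom level and therefore get an inset, and how strongly each thumbnail would need to be magnified. The names, thresholds, and data structures are illustrative assumptions, not the plugin's actual API.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    x: float        # position in data coordinates
    y: float
    width: float    # extent in data coordinates
    height: float

def insets_for_view(annotations, pixels_per_unit, min_px=16, inset_px=64):
    """Return (annotation, magnification) pairs for patterns whose on-screen
    footprint is too small to be identifiable at the current zoom level."""
    insets = []
    for a in annotations:
        screen_size = max(a.width, a.height) * pixels_per_unit
        if screen_size < min_px:                       # too small to recognize
            magnification = inset_px / max(screen_size, 1e-6)
            insets.append((a, magnification))
    return insets

# Zoomed far out (0.01 px per data unit): a 200 x 150 pattern covers ~2 px
# on screen and is promoted to an inset; the 5000 x 4000 one stays in place.
patterns = [Annotation(1000, 2000, 200, 150), Annotation(50, 60, 5000, 4000)]
print(insets_for_view(patterns, pixels_per_unit=0.01))
```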
DXR
An Immersive Visualization Toolkit
DXR is a Unity package for rapid prototyping of immersive data visualizations in augmented, mixed, and virtual reality (AR, MR, VR), collectively referred to as XR.
Website: https://sites.google.com/view/dxr-vis
Source code: https://github.com/ronellsicat/DxR
LSTMVis
Visual Analysis for Recurrent Neural Networks
LSTMVis is a visual analysis tool for recurrent neural networks with a focus on understanding their hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align the results with structural annotations from their domain. A minimal sketch of the matching step follows this entry.
Website: http://lstm.seas.harvard.edu
Source code: https://github.com/HendrikStrobelt/LSTMVis
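As a hedged illustration of the matching step described above (not LSTMVis's actual implementation), the sketch below selects the hidden cells that stay active over a chosen input range and then looks for other time steps where the same cells are active; all names and thresholds are assumptions.

```python
import numpy as np

def active_cells(states, start, end, threshold=0.3):
    """Cells whose activation exceeds `threshold` at every step of the
    selected hypothesis range [start, end)."""
    window = states[start:end]                     # shape: (range_len, n_cells)
    return np.where((window > threshold).all(axis=0))[0]

def match_positions(states, cells, threshold=0.3):
    """Time steps at which all selected cells are active, i.e. candidate
    matches for the hypothesis pattern elsewhere in the data set."""
    mask = (states[:, cells] > threshold).all(axis=1)
    return np.where(mask)[0]

# Toy example: 100 time steps, 20 hidden cells with random activations.
rng = np.random.default_rng(0)
states = rng.random((100, 20))
cells = active_cells(states, start=10, end=14)
print("cells active over the selected range:", cells)
print("matching time steps elsewhere:", match_positions(states, cells))
```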
Caleydo
Visualization for Molecular Biology
Caleydo is an open-source visual analysis framework targeted at biomolecular data. Its biggest strength is the visualization of interdependencies between multiple datasets. Caleydo can load tabular data and groupings/clusterings; you can explore relationships between multiple groupings and between different datasets, and see how your data maps onto pathways. Caleydo has been successfully used to analyze mRNA, miRNA, methylation, copy number variation, mutation status, and clinical data, as well as other dataset types. A small sketch of comparing two groupings follows this entry.
Website: http://www.caleydo.org
Source code: https://github.com/Caleydo
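One simple way to look at the interdependency between two groupings of the same samples, roughly in the spirit of the relationships Caleydo exposes, is to cross-tabulate them. The sketch below uses pandas with made-up column names and data; it is not Caleydo code.

```python
import pandas as pd

# Hypothetical example data: the same six samples grouped two different ways.
samples = pd.DataFrame({
    "sample":             ["s1", "s2", "s3", "s4", "s5", "s6"],
    "expression_cluster": ["A", "A", "B", "B", "B", "C"],
    "mutation_status":    ["mut", "wt", "mut", "mut", "wt", "wt"],
})

# Cross-tabulation shows how the expression-based grouping maps onto
# mutation status, i.e. the overlap between the two groupings.
overlap = pd.crosstab(samples["expression_cluster"], samples["mutation_status"])
print(overlap)
```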
NPML
Visually Interactive Neural Probabilistic Models of Language
This four-year project will employ a collaborative design process between researchers in visualization and machine learning. We aim to create neural architectures designed from the ground up for visual interactivity, allowing examination and correction. We will do this by designing neural probabilistic models that expose explicit “hooks” in the form of discrete latent variables determining model choices. We will apply these models to core tasks in natural language processing (NLP), including machine translation, summarization, and data-to-text generation, and design hooks that target important domain sub-decisions, such as showing the current topic cluster during text generation or the selected document sub-section during summarization.
Website: https://npml.github.io/
Alex's Resources
A list of many code bases and datasets (graph bundling, software visualization tools, image processing, 3D shape processing, dimensionality reduction, and more) maintained separately by Alex Telea.
Website: https://webspace.science.uu.nl/~telea001/Research
Source code: https://webspace.science.uu.nl/~telea001/Software