Last week I was at the IEEE VIS conference in Paris. VIS consists of three conferences running in parallel: InfoVis (Information Visualization), VAST (Visual Analytics Science and Technology), and SciVis (Scientific Visualization).
Overall, the sessions can be broken down into three broad categories: new tools/visualizations; cognitive research on the understanding of visualizations and potential improvements for exploratory data analysis; and applications.
I think the highlight of the conference was data mining. A large chunk of papers used various classification methods to create meaningful abstractions/generalizations over visualizations to improve their readability and reduce time to insight. It was also interesting to see how widely Amazon Mechanical Turk was being used by researchers.
Below are some of the talks/papers I found interesting.
Visualization tools and approaches
UpSet: Visualization of Intersecting Sets
Probably one of the highlights for everyone who attended the conference
Onset : A visualization technique for large-scale binary set data
Probably another highlight for everyone who attended the conference
DimpVis: Exploring Time-varying Information Visualizations by Direct Manipulation
Like the Pages shelf in Tableau, but driven by direct manipulation of the marks. Works well with scatter plots but not so much with other viz types.
Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets
Linked axis charts, Sankey diagrams, etc., to highlight subset relationships between different data sources
Stenomaps: A new visual encoding for thematic maps
The stenomap comprises a series of smoothly curving linear glyphs that each represent both the boundary and the area of a polygon.
Visualizing Statistical Mix Effects and Simpson’s Paradox
Simpson’s paradox can cause standard charts to give misleading impressions about the nature of changes. This paper introduces the comet chart, which shows the magnitude of change (e.g. unemployment) on one axis and the change in group size (e.g. by highest degree earned) on the other, with a mark for each of the two points in time being compared, connected via a tapered line that resembles a comet.
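To see the mix effect the paper addresses, here is a toy illustration with made-up numbers (not the paper's data): each group's unemployment rate falls, yet the aggregate rate rises, because the population mix shifts toward the higher-rate group.

```python
def aggregate_rate(groups):
    """groups: list of (rate, size) pairs; returns the size-weighted overall rate."""
    total = sum(size for _, size in groups)
    return sum(rate * size for rate, size in groups) / total

# Hypothetical numbers: (unemployment rate, group size) for two education groups.
before = [(0.10, 500), (0.02, 500)]    # high school, college
after  = [(0.095, 900), (0.015, 100)]  # both rates FELL, but mix shifted

# aggregate_rate(before) = 0.060, aggregate_rate(after) = 0.087:
# the overall rate went UP even though every group's rate went down.
```

This is exactly the kind of change a per-group chart and an aggregate chart would appear to contradict each other on.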
Visual Adjacency Lists for Dynamic Graphs
A matrix-like alternative to node-link graph visualization that highlights adjacency relationships
IvisDesigner – Expressive Interactive Design of Information Visualizations
Revisiting Bertin matrices: New Interactions for Crafting Tabular Visualizations
Web-based version of Bertin’s reorderable matrix, following the black-and-white design of the original. Better to play with it than to describe it.
Progressive Visual Analytics (incremental results from long running tasks)
This was an interesting topic, covered by two papers. The first focused on cataloging existing algorithms in various languages (e.g. R, Python) that visual analytics tools can already use in this fashion by making repeated calls to the algorithm. It also provided guidelines for future algorithm development to support updates during progress (e.g. at the end of each iteration as the algorithm converges), so the user can terminate the long-running task or change its parameters.
Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations
The second one showed a working system that does this. They rewrote their algorithm (Sequential PAttern Mining), converting it from depth-first to breadth-first search, and allowed users to see in-progress results and steer the algorithm by pruning patterns that are not interesting.
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics
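The breadth-first rewrite enabling this kind of steering can be sketched roughly like so. This is a minimal sketch, not the paper's actual implementation: `expand` and `is_pruned` are hypothetical stand-ins for pattern growth and the user's steering input.

```python
from collections import deque

def progressive_bfs(root, expand, is_pruned):
    """Search level by level, yielding partial results after each level.
    `is_pruned(node)` is the user's steering hook: a pruned pattern is
    dropped along with its entire subtree before the next level starts."""
    frontier = deque([root])
    while frontier:
        next_frontier = deque()
        level_results = []
        for node in frontier:
            if is_pruned(node):          # user marked this pattern uninteresting
                continue
            level_results.append(node)
            next_frontier.extend(expand(node))
        yield level_results              # partial result the UI can render now
        frontier = next_frontier
```

For example, growing string patterns over the alphabet "ab" while the user prunes the pattern `"a"` yields `[""]`, then `["b"]`, then `["ba", "bb"]` — the pruned branch never gets explored. A depth-first version could not offer this: it would finish one whole branch before showing anything from the others.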
High Dimensional Data
Unsurprisingly, this group relied heavily on glyphs.
INFUSE: Interactive Feature Selection for Predictive Modelling of High Dimensional Data
Great, but with 500 glyphs in front of you at once, as presented at the conference, it is not very practical since you can hardly recognize the shapes.
The Influence of Contour on Similarity Perception of Star Glyphs
They tested the effect of outlines in star glyphs and, surprisingly, found that the glyph works better without them, showing just the spokes. But similarity is an ambiguous term: the researchers were looking for data similarity using a visual encoding that’s good for shape similarity, and I doubt the subjects understood what they were supposed to evaluate. More on this later in the blog post…
Multivariate Network Exploration and Presentation: From Detail to Overview via Selections and Aggregations
The system shows multivariate graphs, allowing the concurrent display of the network and the multivariate data in its nodes. Users can make selections to aggregate the graph and see the higher-level structure.
GraphDiaries: Animated Transitions and Temporal Navigation for Dynamic Networks
Imagine the Pages shelf in Tableau, but for networks
How to Display Group Information on Node-Link Diagrams: an Evaluation
Different visual groupings to improve network readability
Perception & Exploratory Analysis
Perceptual Kernels for Visualization Design
Creates similarity matrices for different mark shapes, colors, and their combinations based on user studies, and uses this information to automatically pick mark types/colors for a viz, showing similar items with similar marks and distinct items with visually distinctive marks.
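One way such a perceptual kernel could drive mark selection is greedy farthest-point picking over the similarity matrix. This is a sketch under my own assumptions — the similarity scores below are made up, and the paper's actual selection procedure may differ:

```python
# Hypothetical perceptual kernel: pairwise similarity judgments between mark
# shapes, as might come out of a crowdsourced study (values are invented).
SIMILARITY = {
    "circle":   {"disc": 0.9,  "triangle": 0.2,  "cross": 0.3},
    "disc":     {"circle": 0.9, "triangle": 0.25, "cross": 0.35},
    "triangle": {"circle": 0.2, "disc": 0.25,     "cross": 0.4},
    "cross":    {"circle": 0.3, "disc": 0.35,     "triangle": 0.4},
}

def pick_distinct_marks(shapes, similarity, k):
    """Greedily pick k marks that are maximally distinct from each other."""
    chosen = [shapes[0]]  # seed with an arbitrary first mark
    while len(chosen) < k:
        # next, take the shape whose worst-case (max) similarity
        # to any already-chosen mark is the lowest
        best = min((s for s in shapes if s not in chosen),
                   key=lambda s: max(similarity[s][c] for c in chosen))
        chosen.append(best)
    return chosen

# pick_distinct_marks(list(SIMILARITY), SIMILARITY, 3)
# skips "disc" because it is nearly indistinguishable from "circle"
```

The point of the study is that these similarity values come from human judgments, not from geometric distance between the shapes.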
Ranking Visualizations of Correlation Using Weber’s Law
Weber’s law states that the just-noticeable difference between two stimuli is proportional to the magnitude of the stimuli. The authors noticed that the model also fits well when the stimulus is the correlation between two variables, and used it to evaluate how well different chart types, from scatter plots to parallel coordinates to donut charts, visualize correlation in different scenarios — e.g. finding that parallel coordinates are great as long as the correlation is negative.
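Weber's law itself is simple to state in code. This sketches only the generic law (just-noticeable difference ΔI = k·I, so successive distinguishable stimuli grow geometrically), not the paper's fitted model for correlation:

```python
def jnd_ladder(start, k, steps):
    """Generate a sequence of stimuli, each exactly one just-noticeable
    difference (JND) above the last, where the JND is k times the
    current stimulus magnitude: I_next = I + k*I = I * (1 + k)."""
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + k))
    return levels

# With a (deliberately large, illustrative) Weber fraction k = 0.5:
# jnd_ladder(100, 0.5, 3) -> [100, 150.0, 225.0, 337.5]
```

The practical upshot for visualization: how far apart two correlation values must be before viewers can tell them apart depends on where on the scale you are, and that dependence differs by chart type.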
Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error
Interesting user study on how people perceive error bars, and on alternative encodings to reduce this confusion. The talk focused on error bars on bar charts (which I agree can be somewhat misleading to some people) but then generalized the argument to other charts. The alternative encodings proposed were (1) a violin-chart variant that shows the normal distribution on either side, with lines covering the full vertical extent of the chart, and (2) using transparency. Both approaches have their drawbacks; with transparency especially, it is not possible to tell how far the bars extend. If you don’t have 20/20 vision, you get very narrow error bars :) It was funny to see all the other presenters in the same session after this talk using error bars when sharing the results of their studies.
A Principled Way of Assessing Visualization Literacy
As I said earlier, Amazon Mechanical Turk was probably used for testing in every research study presented. So it was interesting that the last talk of the day on perception and design asked the question: “how do you know the people you’re testing with are suitable for these tasks?” This paper tries to come up with a standardized test to assess visualization literacy.
To many, probably the most interesting thing was something not even related to information visualization. Autodesk was one of the main sponsors, and among many other things they talked about the fact that they were able to design a virus, 3D-print it using a DNA printer, and boot it up. Oh, and they grew Van Gogh’s missing ear in the lab 🙂, and they are printing self-assembling objects.
If you stumbled upon this blog by accident and have no interest in data visualization but like travelling, I have something for you, too. If you fly Icelandair through Reykjavik, try to take advantage of their free stopover offer. You can spend up to a week at no extra charge, and the Blue Lagoon is about 15 miles from the airport.