Notes from OpenVis Conference 2016
I had the pleasure of attending the OpenVis Conference, held in the IMAX theater at the New England Aquarium in Boston. Lost in the dark theater and mesmerized by the humongous screen, I got to fully indulge in data visualization. Let’s reflect on two frontiers that were presented.
Seeing Computers Think
Fernanda Viégas and Martin Wattenberg (▶ 40") started the conference by opening up the black box of neural networks. A guided tour of playground.tensorflow.org revealed the beautiful process of neurons digesting wisely chosen features and learning from them in front of our eyes.
The potential of, and need for, strong data visualization to understand machine learning became clear, and was further demonstrated with TensorBoard and a WebGL confusion matrix of a classifier for the CIFAR-10 dataset. Visualization might also be our best shot at understanding rubbish and adversarial input: slightly altered color values, barely visible to the human eye but convincing to the neural network, can lead to diverging results.
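The adversarial-input idea can be sketched on a toy model. The following is my own minimal illustration, not the demo shown at the conference: a linear classifier in NumPy whose prediction flips under a small FGSM-style perturbation (all names and parameters here are assumptions for the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a classifier: a linear score w . x, label 1 if positive.
w = rng.normal(size=64)
x = rng.normal(size=64)
x = x * np.sign(w @ x)            # ensure the clean input is labeled 1

def predict(w, x):
    return int(w @ x > 0)

# FGSM-style step: nudge every component against the score, along the sign
# of the gradient (for a linear model the gradient w.r.t. x is just w).
# eps is chosen as twice the smallest step that flips the label.
eps = 2 * abs(w @ x) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

print(predict(w, x))      # 1: clean input
print(predict(w, x_adv))  # 0: small perturbation, different label
```

For a deep network the gradient is computed by backpropagation instead of being a fixed weight vector, but the mechanics, a tiny step per component that adds up across many dimensions, are the same.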
This reminded me of the face-paint technique for evading face detection, but much more subtle: it fools the model rather than merely avoiding it.
Kyle McDonald (▶ 34") immediately followed up with an artistic reflection on what computers think and learn. Showing tools and processes to reinterpreting emojis, transfer art styles, transcribing image feeds and reducing dimensions. Observing algorithms learn with the liberty of an artist provides an engaging and revealing perspective into what happens under the hood. A key instrument to discuss implications and possibilities with a greater public.
Let’s also keep in mind that while artificial neural networks are inspired by our brain, they are not an accurate model of human learning. At the same time, many applications can benefit from them, as Google’s large-scale deployment shows.
Simulating the World
Nicky Case (▶ 31") asked how we can handle more and more information becoming available to us. Are we more informed, connected and empowered? Nicky’s unscientific survey of two anecdotes (they questioned themselves twice) clearly showed no. Data visualization can get us half way there by showing us patterns. For deep understanding, we need to go further by showing how the system works.
So how can we communicate systems visually? Nicky offered several options; watch or read the full talk to learn more.
One could easily imagine an agent-based modeling simulation in which each agent is a neural network standing in for a complex decision maker: a human being.
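That thought can be made concrete with a toy sketch. In the NumPy snippet below, each agent's decision rule is a tiny randomly initialized neural network reacting to its neighbors' behavior; the network shape, neighborhood rule and update loop are all assumptions of mine, not something from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

N, STEPS = 50, 10

class Agent:
    """A toy agent whose decision rule is a tiny random neural network."""
    def __init__(self):
        self.w1 = rng.normal(size=(1, 4))    # 1 input -> 4 hidden units
        self.w2 = rng.normal(size=(4, 1))    # 4 hidden -> 1 output

    def decide(self, neighbor_fraction):
        # Observation: fraction of nearby agents currently "active".
        h = np.tanh(np.array([[neighbor_fraction]]) @ self.w1)
        return int((h @ self.w2)[0, 0] > 0)  # activate or not

agents = [Agent() for _ in range(N)]
state = rng.integers(0, 2, size=N)           # random initial activity

for _ in range(STEPS):
    # Each agent observes a 3-wide neighborhood (including itself).
    frac = np.convolve(state, np.ones(3), mode="same") / 3
    state = np.array([a.decide(f) for a, f in zip(agents, frac)])

print(state.sum(), "of", N, "agents active after", STEPS, "steps")
```

Swap the random weights for networks trained on real behavioral data and the simulation starts to resemble the hybrid Nicky's talk hints at: system-level dynamics emerging from learned individual decisions.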
And Much More
You can watch all talks on openvisconf.com. Beyond the three talks mentioned above, I also highly recommend watching the following five:
- Reactive Building Blocks: Interactive Visualizations with Vega by Arvind Satyanarayan (▶ 32") — slides as PDF
- Designing for Realtime Spacecraft Operations by Rachel Binx (▶ 32") — slides on Speaker Deck
- Enhancing your maps and visualizations with WebGL GLSL Shaders by Patricio Gonzalez Vivo (▶ 34") — interactive slides
- 39 studies about human perception in 30 minutes by Kennedy Elliott (▶ 23") — read her speaker notes
- Everything is Seasonal by Zan Armstrong (▶ 29") — interactive slides
You can also consult the community notes for a complete written overview.