Notes from OpenVis Conference 2016

Thomas Preusse
Interactive Things
May 10, 2016


The Charles River Basin, looking towards Longfellow Bridge

I had the pleasure of attending the OpenVis Conference, which took place at the IMAX theater of the New England Aquarium in Boston. Lost in the dark theater and mesmerized by the humongous screen, I got to fully indulge in data visualization. Let’s reflect on two frontiers in data visualization that were presented.

Seeing Computers Think

Fernanda Viégas and Martin Wattenberg (▶ 40") started the conference by opening up the black box of neural networks. A guided tour of playground.tensorflow.org revealed the beautiful process of neurons digesting wisely chosen features and learning from them in front of our eyes.
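What the playground visualizes can be sketched in a few lines: a tiny network learning a non-linear function (here XOR) from raw features by gradient descent. This is my own minimal numpy sketch, not code from the talk; the layer sizes, learning rate and seed are illustrative.

```python
import numpy as np

# Tiny network: 2 inputs -> 4 hidden tanh neurons -> 1 sigmoid output.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(3000):
    h = np.tanh(X @ W1 + b1)      # hidden neurons "digesting" the features
    p = sigmoid(h @ W2 + b2)      # output probability
    # Backpropagation: for cross-entropy loss, the gradient at the
    # pre-sigmoid output is simply (p - y).
    d_out = p - y
    d_hid = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)

print(p.ravel())  # predictions after training
```

Watching the hidden activations `h` change over the training steps is, in essence, what the playground renders as animated neuron tiles.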

The potential of, and need for, strong data visualization to understand machine learning became clear, and was further demonstrated with TensorBoard and a WebGL confusion matrix for a classifier trained on the CIFAR-10 dataset. Visualization might also be our best shot at understanding rubbish and adversarial inputs: slightly altered color values, for example, barely visible to the human eye but convincing to the neural network, lead to diverging results.

A slight change of colors creates a gibbon out of a panda. Gibbons would be well advised not to rely on neural networks to keep the pandas out. Read «Explaining and Harnessing Adversarial Examples».
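The fast gradient sign method from that paper is surprisingly small. Here is a hedged toy sketch on a linear score rather than a real image classifier, with made-up weights and epsilon, just to show why a barely visible per-dimension nudge can flip a decision:

```python
import numpy as np

# Hypothetical toy setup: a linear "classifier" with fixed weights.
rng = np.random.default_rng(0)
w = rng.normal(size=256)      # weights of the linear score
x = rng.normal(size=256)      # a "clean" input

def score(x):
    return float(w @ x)       # say, positive score means "panda"

# Fast gradient sign method: nudge every input dimension by a tiny
# epsilon against the gradient of the score (which here is just w).
eps = 0.05
x_adv = x - eps * np.sign(w)

# Each dimension changes by at most eps (barely visible), but the
# score drops by eps * sum(|w|), which grows with input dimension.
print(score(x), score(x_adv))
```

This is the intuition behind the panda image: in high dimensions, many imperceptible nudges add up to a large change in the classifier’s output.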

This reminded me of the face paint technique used to avoid face detection, but it is much more subtle: instead of evading the system, it fools it.

Kyle McDonald (▶ 34") immediately followed up with an artistic reflection on what computers think and learn, showing tools and processes for reinterpreting emojis, transferring art styles, transcribing image feeds and reducing dimensions. Observing algorithms learn with the liberty of an artist provides an engaging and revealing perspective on what happens under the hood, and a key instrument for discussing implications and possibilities with the greater public.

Let’s also keep in mind that while artificial neural networks are inspired by our brain, they are not an accurate model of human learning. At the same time, many applications can benefit from them, as Google’s large-scale deployment shows.

Simulating the World

Nicky Case (▶ 31") asked how we can handle more and more information becoming available to us. Are we more informed, connected and empowered? Nicky’s unscientific survey of two anecdotes (they questioned themselves twice) clearly showed no. Data visualization can get us halfway there by showing us patterns. For deep understanding, we need to go further and show how the system works.

So how can we communicate systems visually? Nicky offered the following options:

Watch or read Nicky’s full talk to learn more.

One could easily imagine an agent-based modeling simulation in which each agent is a neural network simulating a complex decision maker: a human being.
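A minimal sketch of that idea, entirely hypothetical: agents on a ring of cells, each carrying its own tiny randomly initialized network that decides, from a crowding observation, whether to move. All names, sizes and rules here are my assumptions, not anything from the talks.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CELLS = 20

class Agent:
    """An agent whose 'decision maker' is a tiny fixed neural network."""
    def __init__(self):
        self.W1 = rng.normal(size=(2, 3))   # this agent's private policy
        self.W2 = rng.normal(size=(3, 1))
        self.pos = int(rng.integers(0, N_CELLS))

    def decide(self, n_neighbors):
        x = np.array([1.0, float(n_neighbors)])  # bias + observation
        h = np.tanh(x @ self.W1)
        return float(h @ self.W2) > 0            # True -> move

agents = [Agent() for _ in range(30)]
for step in range(10):
    occupancy = np.bincount([a.pos for a in agents], minlength=N_CELLS)
    for a in agents:
        neighbors = occupancy[a.pos] - 1         # others in my cell
        if a.decide(neighbors):
            a.pos = (a.pos + int(rng.integers(-1, 2))) % N_CELLS

print(np.bincount([a.pos for a in agents], minlength=N_CELLS))
```

Swapping the random weights for trained ones, and the ring for a richer world, is exactly where simulation and machine learning could meet.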

And Much More

You can watch all talks on openvisconf.com. Beyond the three talks mentioned above, I also highly recommend watching the following five:

You can also consult the community notes to get a complete overview in text.
