Data Visualisation by the Principles

This is a one-hour, action-packed talk on the principles of data visualisation. Slides are available here.

Delivering good visual design requires putting on a stack of thinking hats: a clear message and intent, concise delivery for low cognitive load, progressive reveal with a strong focus, and considerations for aesthetics, conventions, and cultural norms, to name a few. Data visualisation is about data-driven narrative design and storytelling.

However, there are many pitfalls and mistakes that often go unnoticed, or worse, mislead your audience in the wrong direction. This talk aims to lay out first principles of data visualisation, foundations that are widely applicable, and to point out the pitfalls and how to avoid them.

Programming is creative. Creativity is a skill. And skills can be learned.

Elm Town Podcast on Generative Art

Following my talk on generative art at YOW! Lambda Jam, I had a super fun conversation with Kevin Yank about my work process, design process, inspiration, and most importantly, technical insights using the programming language Elm.

The podcast episode is published at Elm Town, episode 34. You can listen to the episode above, or go to the podcast page and subscribe with your podcast app of choice.

If you are interested in my presentation, the recording is up on YouTube.

Penplotting with the Axidraw V3

I purchased a pen plotter called the AxiDraw V3, a fine-precision 2D plotter that can hold almost any writing implement in vertical (90˚) or 45˚ configurations. My plotter was refurbished, shipped with free ground shipping, and delivered by the AusPost ShopMate mail-redirection service. It works like new.

I am working on a RESTful API for the plotter, using their private beta Python CLI which has a permissive open source license. Watch this space.

Visible Science

In November 2017, I was pleased to speak at the Monash SensiLab Forum about my data visualisation work. It was livestreamed from a new venue, and we had twice as many online viewers as physical attendees.

This presentation referenced my UNSW talk, Design as Invitation to Interaction. If you are new to interaction design and work in or around education, I urge you to check it out first for background.

For Visible Science, I walked through my design process, from sketching and conceptualisation to design and implementation, with a strong emphasis that programming is creative work:

Creativity is not only about visual design. It’s applied problem solving. It’s creating something interesting and novel. It’s a little bit crazy.

Being creative is a skill. And skills can be learned, improved, and practised. You can apply any of these data visualisation guidelines to another field and find them useful.

Some tips on asking for feedback as a designer:

  • Rotate them: Ask your direct client, your colleagues, your friends and family, and your users. Work on a little at a time and round-robin them, so by the time it's back to your client, it has been through three (or at least more than one) revisions since they last saw it.
  • Keep up the momentum: Projects are easier when they have some momentum behind them. If people see progress on a regular basis, they will be more prepared for new things when you have them.
  • Be a listener: When you're asking for feedback, you are after what people think, not to tell people what you think. Be open to critique from both sides, and make mental notes when you are asked to clarify things: it means they are not apparent to others.

Feel free to reach out to me on Twitter or comment below.

Mixed Reality and Scientific Visualisation

This post is adapted from a talk I gave at the C3DIS conference in July 2017. 

In Jason E. Powell's work, "Looking Into The Past", he captured over 200 old photographs overlaid on roughly the perspective from which they were taken. His work took the internet by storm in 2013. This post was partly inspired by his work.

When interviewed by the BBC, Powell explained that he had a project to rephotograph some of the shots from the Library of Congress. He held up a photo to line up the angle, a spark of inspiration struck, and he took the shot while holding the print. For one particular work, he wrote about how he felt about the now-missing 1950 Leader Theatre:

This photograph breaks my heart.

Washington, DC used to have so much character. But we don't have anything like the set of buildings in this original photograph anymore. And, what's more, we never will again. And I missed it! This project is the only way I can ever see the way Washington, DC used to look, back when you could build four incredible buildings like these next to each other.

It makes me very sad. I want to step into this photograph and go join those folks in line. I want to eat at the Acropolis Cafe. I want to visit the Gayety. And yet, the only thing left from this scene is the red brick building on the left. You cannot convince me that Washington, DC is better off by having the modern blahbuilding taking up this space instead.

Through the lens of the camera, the streets were rephotographed in their historical context. I found this notion of rephotography fascinating. Powell's work brought the past into the present and re-encoded physical space.

Powell's work is, in my broad definition, remixing reality. This can be extended from past->present to present->present to future->present. Essentially, time is irrelevant; what glues the two spaces together is the narrative that makes meaning.

In this post I would like to examine mixed reality in a broad sense. I am intentionally blurring the boundary of what makes up mixed reality in order to piece together a bigger picture. More generally, I would like to discuss the nagging feeling when we first get into an alternative reality that feels unnatural to all of us.

From a typical marketing image of Teomirn, an application that teaches people to play the piano, we can dissect the atypical emotions it provokes. For instance, kids I know would probably be jumping onto the piano and trying to play along, disrupting the experience in the process. The photo of the family is just a tad unemotional.

Cognitive dissonance happens when we encounter the unexpected. We take some time to adjust and recalibrate our expectations to match the experience. In Powell's work, overlaying the past on the present creates cognitive dissonance, provoking curiosity and understanding. For Teomirn, the virtual player is overlaid, but the affect is not quite the same.

Mixed reality applications are assistive by nature. Take Google Lens, for example: it marries information retrieval (reverse image search on Google Maps) with the smartphone camera and display:

For the longest time, humans have created tools as extensions of themselves. It's how we took over the planet. When we think of the term "assistive technologies", we tend to associate it with aiding disability. Perhaps in a distant, utopian future, assistive technologies will become necessary enhancements.

But would mixed reality replace reality? I previously wrote about the obstacles that are inherent with virtual reality, and I think the question is missing the most important point. We can't escape our reality. We're vendor-locked, constantly logged in, and are born into this environment. Humans are habitual creatures, and we might prefer the default reality after all.

Rather, I believe mixed reality should supplement reality. Just as other technologies—mobile browsers, for instance—have made our lives a little bit easier, they are not stand-in replacements.

The strength of mixed reality lies in connecting our bodily sensory inputs and outputs, and in some cases enhancing them with other tools, like machine learning. Mixed reality by itself is only a platform: an empty shell ready to be worn, a coat whose buttons have logical rules but not hard constraints.

Mixed reality platforms come in many flavours. Here are some representative, popular examples; they are by no means exhaustive. The dance below was by Marpi, with Hien Huynh dancing:

In their performance, Marpi connects Hien's hand movements with two virtual geometries projected behind him. The projected lights create an additional energy; without them, the performance is not the same. Marpi and Hien exaggerate what is perceived from the dance.

Another work by Hanley Weng marries Apple's CoreML, a machine learning library, with ARKit, an augmented reality framework, on the iPhone:

Weng feeds the camera frames to a machine learning model that attempts to identify what it sees. Upon recognition, the name is labelled in proximity to the object. The potential applications are broad and very exciting.

In Fragments, a game made for the Microsoft HoloLens, the player acts as a detective trying to figure out where a murder has taken place. In the animation, we see the player inspecting a transit schedule on their desk. Their office has been transformed into a crime scene, reappropriated for the context at play.

Applying the concept of reappropriating a space to scientific visualisation, mixed reality platforms become immensely meaningful as a communication tool.

I bend Wikipedia's definition of scientific visualisation for my purposes: scientific visualisation aims to graphically illustrate scientific data to enable scientists, and non-scientists, to better understand and glean insight from their data.

In other words, mixed reality platforms can act as knowledge translation, a bridge to scientific insight that might normally be inaccessible. However, the change in frame of mind and perspective is difficult to measure. We have yet to see what specific value mixed reality will bring to the table, but we're at the right time to explore it.

The above animation is a mockup I produced at the end of 2016. The map shows the Victoria-Tasmania domain in Australia, and the smoke is a crude signal of a major Tasmanian bushfire in January 2016. The smoke travelled all the way to Victoria over 10 days, and emergency calls were logged complaining about the smoky air.

Fast-forward six months: working one day a week, I had integrated a bunch of historical data: wind vectors, smoke magnitudes across 16 elevations, satellite imagery, ground observations from weather stations, as well as playback controls. Thanks to the scientist Martin Cope, who provided me with the data.

A version not too dissimilar to the above prototype was brought to the International Biomass Burning Initiative Workshop in Boulder, Colorado. People who tried the demo loved where it was going, and I was happy to hear their responses. It was little more than a pat-on-the-back exercise, of course, but it was certainly encouraging for a solo developer working one day a week on this.

What I discovered through personal experience is that mixed reality provides insight with mobility. It's one thing to display virtual content on a monitor, but nothing comes more naturally than walking around the model and checking it out from up close, sideways, and at an angle. Mixed reality not only re-encodes our spaces, but also provides an avenue of discovery that befits the purpose of scientific visualisation.

Let's talk about some of the unresolved challenges in mixed reality. It is important to recognise that we are still in the early stages of what's coming, and to identify the gaps.

  1. High-end mixed reality still comes at a prohibitive cost of $3,000 or more. A Microsoft HoloLens will easily set you back that much, and if you were to invest in a tethered headset, you still need a powerful enough computer to run it. Oculus has announced a standalone headset for $200 in 2018, and I am confident that others will follow suit.
     
  2. Hardware design inherently constrains mixed reality platforms. One thing new developers overlook is the chipset and memory limitations of standalone—or, shall we say, console—headset development. For instance, the Microsoft HoloLens has only 640MB of VRAM, unlike most standard GPU cards today, which come with 3-6 GB. As with point 1, I expect the limitation to be slowly lifted to match laptop specs in 3-5 years.
     
  3. Mixed reality interfaces have different affordances, and their own limitations. A lot of companies are hung up on haptic feedback, simulating invisible forces that a virtual object cannot exert. They are missing the actual problem: there is no physical object there in the first place. Instead, we saw the rise of virtual reality systems like the HTC Vive, which put controllers in our hands. Mixed reality will need to go beyond visual overlays and blend into our physical world with sensors rather than actuators, as sensors are far more flexible in their use cases.

In summary, please allow me to indulge in another of Jason E. Powell's rephotographs, this time in Wyoming.

Powell didn't pull punches on the drastic change the landscape has seen over 150 years. He wrote under this photograph:

None of the trees were there in 1865, most of the gravestones seemed to still be there, although the graveyard was overrun, there’s a house that isn’t there any longer, but you can see the hint of the church steeple poking over the house in the original shot. That’s still there. The canal is completely dry and there’s a tunnel through the mountain. The bridge doesn’t exist anymore, either. All in all, I’d say there isn’t much left from 1865 Harper’s Ferry, honestly.

No mixed reality technology as sophisticated as today's existed back in the 1960s, when virtual reality research was in its infancy. The bulky trackers are gone, replaced by smaller and smaller components that are increasingly mass-produced.

Perhaps the standard mixed reality platform 50 years from now will be utterly alien to us today, with today's devices gone without a trace. We will see them only through remixing the past.

Advice for first-time hackathon attendees

The following points are distilled from my experience in attending hackathons so far. I hope this will be useful to those who are new to the format.

  • What are some real problems you’d like to solve? For example, maybe transportation isn’t as good in some parts. Or maybe the roads are not as predictable.
  • Who is your target community? Assuming you're making a software project, these are the people you should keep in mind when designing.
  • Context? When and where do you see people using this?
  • Expertise? What kind of domain knowledge do you need? Can you get buy-in from potential stakeholders? What about technical background?
  • Brainstorm wide and as much as you can. Many teams will arrive at the same idea initially, but with enough iterations and external input, you’ll come up with unique needs and solutions.
  • Focus on one function and implement it really well. The rest can be placeholders and doesn't have to work with real-world data. The important part is to demonstrate the potential, and polishing takes a long time. So focus on one function.

Reality Mix-up: Science Applications

This talk was originally given at Melbourne Augmented Reality Meetup, hosted by CSIRO Data61. I wrote up the transcript while my mind was fresh.

If you haven't been to Melbourne Museum lately, check out CSIRAC, Australia's first computer, built in 1949. It was a significant investment that served for over 15 years; it used punch cards, and it could compute in one day 1,000 times the number of equations a single person could.

Punch cards were data, and easily replaceable: someone could (with patience) spot a mistake, replace a card, and feed the deck back into the machine. Once the data was verified correct, it could be replicated and posted to other people, who could run similar calculations.

Computer terminals were the next leap. You could type on a physical keyboard, and the computer would respond. This kind of instantaneous interactivity changed the way scientists worked: what wasn't possible before could suddenly be done in a much shorter period of time.

Caves of Qud

Computer scientists and programmers weren't the only people who embraced computers. Storytellers, filmmakers, and musicians alike went to learn programming, or collaborated with programmers. They created interactive experiences through digital media, and broke new ground.

Today we live in a world where photorealistic experiences render at interactive rates. Computing power continues to rise while becoming more affordable and more efficient. It's not just automation and interaction, but also the digitisation of processes and workflows.

Today we live in a world where motion capture technology that cost $100,000+ twenty years ago can be bought for home use for less than $2,000. Authoring and content creation tools like Unity are maturing, and many open source and commercial efforts are building on each other.

Improvisation became innovation. ReMoTe is one such synthesis: the remote worker's view is streamed back to an instructor, who can project stereoscopic overlays, such as their hands captured through a Kinect, back to the worker on site. It is cheaper to collaborate remotely than to fly someone over. It's like a Skype call, but in 3D, and in front of you.

Zebedee is a hand-held LIDAR scanner (you still lug a trolley behind you) that can digitise an entire location as you simply walk around it. The spinning laser continuously records a point cloud, and the spring-mounted head gives it plenty of resolution and reach.

I want to explore what mixed reality can bring to the table for scientists and researchers. The work above was done by Eleanor McMurtry, one of my students last summer. Given the data, we can quickly tell where the front door and the halls are, and leave each other notes. 3D data is much more intuitive than a floor plan, and doesn't rely on existing knowledge of the place.

My colleagues Matt Adcock and Stuart Anderson set up this insect photo station at Data61. They mounted the sample on a spinning disc and automated the whole picture-taking process. The result was 100+ photos of a single insect, used to reconstruct it in 3D, preserving the fragile sample for future use.

As a result, many insects can now be viewed up close, at whatever angle interests the scientist, without anyone having to handle the sample carefully. Better yet, the process of measuring limb lengths and body sizes could also be automated, saving even more time down the track.

CluckAR, developed by Choice Australia, is a consumer-facing mobile application. It works on all Australian egg brands found at supermarkets, and shows you how free-range the chickens really are.

We are still exploring what mixed reality can bring to research, and we've learned a few lessons along the way. I want to share some of those lessons, and tell you a bit about where things are headed, especially for mixed reality on the web.

Mixed reality is a visualisation platform. We can bring existing workflows from the entertainment industry to data science and visual analytics. Real-time exploration of historic and new contexts is here, and the tools are becoming more accessible for modern developers and non-developers alike. For scientific visualisation, the added bonus is all the existing tools we can bring into our research.

Mixed reality encourages physical exploration. There is something curious about being able to get up close to your dataset and look at it in ways you are naturally inclined to. It is sometimes difficult to get the angle you want with a mouse and keyboard, and nothing is more natural than your own body movement and your own eyes.

Mixed reality will be delivered across the web. Just as movies and music have moved to streaming platforms, digital content will also benefit from streaming delivery. For one, you no longer have to wait for a massive download to get going. Your browser downloads the things you can see first, and fills in objects as they become available.

There is much more to see in where technology is headed. It is unlikely that in 5 years we will all have contact lenses that deliver mixed reality magic. But the devices are maturing, tools are shared and built upon, and web services are increasingly offered for others to create on. Right now, web experiences like After The Flood show the technology is good enough to be immersive.

We have people just as good as, if not better than, average, regardless of the obstacles we face in government, internet speeds, and hardware. No matter where you live, as long as you have access to the tools to develop for mixed reality, you have the opportunity to shape the industry.

Device manufacturers are still pushing hard, at least in my experience, for early-adopting developers. Mixed reality will not replace our existing reality, but it will serve as an enhancement for our lives, like Google Translate with its real-time text replacement. There's no app yet that we cannot live without, but many are quite handy in the right context.

I hope I have given you some glimpses of what we have worked on, where mixed reality is going in terms of existing technologies, and what may be coming in the future.

CityLab Melbourne Design Sprint

I was invited to be part of CityLab's design sprint in October 2016 to brainstorm and prototype a community engagement software solution with a group of 10.

Bringing together City of Melbourne employees with experts from creative agencies and the technology sector, we're moving at pace—from concept to prototype in 3.5 days—to test and trial new ideas and city services for the community.

Interaction Design for Education Designers

Listen to the talk or download the MP3:

You can apply design principles in your work, too.

I gave a talk at the Learning Analytics and Education Data Seminar, University of New South Wales. Half or more of the audience were educators and instructional designers in academia. While this talk was structured for them, the principles are applicable to other fields.

In this talk, "Design as Invitation to Interaction", I presented three barriers to designed objects, and used case studies as well as examples of good and bad design to show how they can be overcome. They are:

  1. Mistimed, misplaced, misused
  2. It's dangerous to go alone!
  3. Technology is creepy

I referenced several video productions and documents I was involved in during the talk. You can follow along:

You can find a copy of my slides (PDF) below.

Moving Towards D3 v4

D3.js is a popular data visualisation framework for JavaScript and the web. In July 2016, Mike Bostock released the new version, v4. To celebrate the release, I gave this talk at the CSIRO data science webinar. The content covers: What is D3? Why do people use data visualisation? What is it for? I show examples of recent works and point to a couple of resources. Then D3's enter and exit pattern is briefly outlined. I also talk about the modularisation of D3 v4, as well as a couple of libraries that go with it.
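
The enter and exit pattern mentioned above is easier to grasp with a concrete sketch. Below is a minimal illustration of the idea behind D3's data join, written in plain JavaScript without the D3 library itself; the function name `dataJoin` and its return shape are my own illustration, not part of the D3 API.

```javascript
// Given the keys of elements currently on screen and a new data array,
// partition into three selections:
//   enter  — data with no matching element yet (to be created)
//   update — data already bound to an element (to be refreshed)
//   exit   — elements whose data has gone away (to be removed)
function dataJoin(currentKeys, data, key) {
  const current = new Set(currentKeys);
  const incoming = new Set(data.map(key));
  return {
    enter: data.filter(d => !current.has(key(d))),
    update: data.filter(d => current.has(key(d))),
    exit: [...current].filter(k => !incoming.has(k))
  };
}

const join = dataJoin(["a", "b"], [{ id: "b" }, { id: "c" }], d => d.id);
console.log(join.enter);  // [ { id: 'c' } ]
console.log(join.exit);   // [ 'a' ]
```

In D3 itself, the same partition falls out of `selection.data(data, key)` followed by `.enter()` and `.exit()` on the resulting selection.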

Smoke and Fire

Smoke and Fire (live website) is an interactive visualisation of air pollution in Australia, particularly shipping plumes along the east coast of New South Wales. The aim was to present the data to experts as well as the general public, communicating issues around air pollution and air quality monitoring.

I developed the WebGL-based visualisation, along with a set of Python scripts that convert the pre-processed satellite data into a web-friendly format.

To illustrate air pollution issues, we took a vector-based representation of the globe and overlaid it with opaque colours representing different types of pollution sources.

The brighter the colours, the denser the fine particles filling that region. Blue represents PM2.5, particles 2.5 micrometres or smaller in diameter. Yellow represents shipping plumes, and red spots are strong emission sources, where a bushfire has occurred or a significant amount of smoke has been registered.
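
As a rough sketch of how such an encoding can work (the actual scales and thresholds used in Smoke and Fire are not reproduced here; the channel colours and the linear ramp below are illustrative assumptions):

```javascript
// Each pollution source maps to a colour, and particle density drives
// that colour's brightness. Density is normalised against an assumed
// maximum and clamped to [0, 1] before scaling to 8-bit RGB.
const CHANNELS = {
  pm25: [0, 0, 1],      // blue: fine particles (PM2.5)
  shipping: [1, 1, 0],  // yellow: shipping plumes
  emission: [1, 0, 0]   // red: strong emission sources
};

function encode(source, density, maxDensity) {
  const t = Math.min(density / maxDensity, 1); // brightness in [0, 1]
  return CHANNELS[source].map(c => Math.round(c * t * 255));
}

console.log(encode("pm25", 50, 100));      // [ 0, 0, 128 ] — half-bright blue
console.log(encode("emission", 200, 100)); // [ 255, 0, 0 ] — clamped full red
```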

The visualisation was exhibited at the Clean Air Society of Australia and New Zealand (CASANZ) conference in September, 2015.

Smoke and Fire made it onto the Kantar Information is Beautiful Awards longlist.

That's me in the middle.

The source code is available on GitHub here.

I also gave a talk on the development and design process of Smoke and Fire at OzViz 2015. You can see the entire talk with presenter notes here.

In the future we hope to update it as new data become available.

Wordscapes

Above: "Laughter", based on a community translation of Henry Bergson's 1900 essay, "Le Rire".

This year, Phil Gough and I co-authored digitally generated artworks based on public domain books archived by Project Gutenberg. We are proud to say that our work is featured by the SIGGRAPH Digital Arts Community.

Below: "Knowledge", based on Alexander Phillips' 1915 essays published in "Towards a Theory of Knowledge."

Next: "Morals" (unpublished), based on a 1912 reprint edition of David Hume's 1777 work, "An Enquiry Concerning the Principles of Morals".

Web Directions 2014

This year, the awe-inspiring, thought-provoking, and insightful Web Directions conference was held at the Seymour Centre near the University of Sydney, where I am based. We worked closely with the conference organisers, John and Maxine, to record the event as completely as possible. On Thursday, my camera left a trail of visual memories encoded in these pixels.

You can see the program schedule here. Talks are (mostly) recorded, and will be progressively made available online starting mid November.

"Claws": 3D Printed Character Tracks for Betrayal at the House on the Hill

"Claws": 3D Printed Character Tracks for Betrayal at the House on the Hill

If you've ever played the thrilling, semi-cooperative board game Betrayal at the House on the Hill, you might recall that the plastic clips that came with the game were useless. The plastic clips don't clip. They just fall off or shift around when you bump the table.

Out of dissatisfaction, I've designed a new character tracker—appropriately titled "Claws"—free and open for anyone to download and manufacture.

Download the .STL files for 3D printing here:

  1. The Base
  2. The Dial

If you use Rhinoceros 3D, you can also download the original design and modify it to your heart's content. The files are in centimetres, so if your program expects millimetres, upscale by 10x.
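
If your software doesn't offer a one-click scale, the conversion is just a uniform 10x on every coordinate. The `cmToMm` helper below is purely illustrative (not part of the published files), sketching the idea for a mesh given as an array of [x, y, z] vertices:

```javascript
// Convert vertices authored in centimetres to millimetres by
// scaling every coordinate uniformly by 10.
function cmToMm(vertices) {
  return vertices.map(([x, y, z]) => [x * 10, y * 10, z * 10]);
}

// A 1.5 cm edge becomes 15 mm:
console.log(cmToMm([[0, 0, 0], [1.5, 0, 0]])); // [ [ 0, 0, 0 ], [ 15, 0, 0 ] ]
```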

All files provided here are under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Thanks to the Board Game Design community for the support and feedback.

Design Highlights

The Base will fit in the box that came with the game:

The Dials are clear and easy to use:

Design of the board is simple, allowing plenty of room for creativity:

I'm always keen to hear and talk about ideas. If you download the files and print them, please let me know. If you decide to modify the design, I would be very happy to hear about your creations.

Game on.

Updates

Reed Arnold has kindly put together a single file containing one base and five dials. If you don't mind printing everything in the same colour, this will work. Please note this file is in millimetres.

Juan Garcia created a windowed version, which has its own set of aesthetics, on Thingiverse.