Super Blue Blood Moon

Once in a lifetime opportunity. 200 ISO, 150mm, f/3.2, 1.6 sec.
Tags: Photography, Moon
In November 2017, I was pleased to speak at the Monash SensiLab Forum about my data visualisation work. It was livestreamed at a new venue, and we had double the online viewers compared to the physical attendance.
This presentation referenced my UNSW talk, Design as Invitation to Interaction. If you are new to interaction design and work in or around education, I urge you to check it out first for background knowledge.
For Visible Science, I walked through my design process, from sketch and conceptualisation to design and implementation. The talk placed a strong emphasis on programming as creative work:
Creativity is not only about visual design. It’s applied problem solving. It’s creating something interesting and novel. It’s a little bit crazy.
Being creative is a skill. And skills can be learned, improved, and practised. You can apply any of the data visualisation guidelines to another field and find them useful.
Some tips on asking for feedback as a designer:
Feel free to reach out to me on Twitter or comment below.
Tags: Talks, Presentation, Research, Interaction Design
I had the privilege to host and moderate the digital privacy panel at Henley Club with Ivan Chua. Joining us on the panel were four privacy-conscious researchers, experts, and consultants: Dr Vanessa Teague, Ellen Broad, Gabor Szathmari, and Prof Dali Kaafar.
We pulled on many threads of digital privacy. My key takeaways are below:
Got tips on digital privacy? Feel free to reach out.
This was my first PyCon AU, and also my first industry talk given to a room of over 100 people. It was less nerve-wracking than it sounds. :)
Talk material and source code are presented as a Jupyter Notebook in Python. You can get a copy of the code here.
I also published a written version of this talk on opensource.com, if you prefer an article over video.
Tags: Presentation, Python, Data
Empathy over apathy.
Diversity over conformity.
Calling out instead of staying silent.
Lifting up instead of putting down.
Tags: Empathy, Diversity, Encouragement
This post is adapted from a talk I gave at the C3DIS conference in July 2017.
In his work "Looking Into The Past", Jason E. Powell captured over 200 old photographs overlaid at roughly the perspective from which they were taken. His work took the internet by storm in 2013, and this post was partly inspired by it.
When interviewed by the BBC, Powell explained that he was doing a project to rephotograph some of the shots from the Library of Congress. He held up a photo to line up the angle, a spark of inspiration struck, and he took the shot while holding the print. For one particular work, he wrote about how he felt about the now-missing 1950 Leader Theatre:
This photograph breaks my heart.
Washington, DC used to have so much character. But we don't have anything like the set of buildings in this original photograph anymore. And, what's more, we never will again. And I missed it! This project is the only way I can ever see the way Washington, DC used to look, back when you could build four incredible buildings like these next to each other.
It makes me very sad. I want to step into this photograph and go join those folks in line. I want to eat at the Acropolis Cafe. I want to visit the Gayety. And yet, the only thing left from this scene is the red brick building on the left. You cannot convince me that Washington, DC is better off by having the modern blahbuilding taking up this space instead.
Through the lens of the camera, the streets were rephotographed for historical context. I found this notion of rephotography fascinating. Powell's work brought the past to the present and re-encoded physical space.
Powell's work is, in my broad definition, remixing reality. This can be extended from past->present, to present->present, to future->present. Essentially, time is irrelevant, and what glues the two spaces together is the thriving narrative that makes meaning.
In this post I would like to examine mixed reality in a broad sense. I am intentionally blurring the boundary of what makes up mixed reality in order to piece together a bigger picture. More generally, I would like to discuss the nagging feeling when we first get into an alternative reality that feels unnatural to all of us.
From a typical marketing image of Teomirn, an application that teaches people to play the piano, we can dissect the atypical emotions it provokes. For instance, kids I know would probably jump onto the piano and try to play along, disrupting the experience in the process. This photo of a family is just a tad unemotional.
Cognitive dissonance happens when we encounter the unexpected. We take some time to adjust, and recalibrate our expectations to match the experience. In Powell's work, overlaying the past onto the present creates a cognitive dissonance, and provokes curiosity and understanding. For Teomirn, the virtual player is overlaid, but the affect is not quite the same.
Mixed reality applications are assistive by nature. Take Google Lens, for example: it marries information retrieval (reverse image search on Google Maps) with the smartphone camera and display:
For the longest time, humans have learned how to create tools as an extension of themselves. It's how we took over the planet. When we think of the term "assistive technologies", we tend to associate it with aiding disability. Perhaps in the distant, utopian future, assistive technologies would become necessary enhancements.
But would mixed reality replace reality? I previously wrote about the obstacles inherent in virtual reality, and I think the question misses the most important point. We can't escape our reality. We're vendor-locked, constantly logged in, and born into this environment. Humans are habitual creatures, and we might prefer the default reality after all.
Rather, I believe mixed reality should supplement reality. Just as other technologies—mobile browsers, for instance—have made our lives a little bit easier, they are not stand-in replacements.
The strength of mixed reality is to connect our bodies' sensory inputs and outputs, and in some cases enhance them with other tools, like machine learning. Mixed reality by itself is only a platform, an empty shell ready to be worn, a coat whose buttons have logical rules, but not hard constraints.
Mixed reality platforms come in many flavours. Here are some representative and popular examples; they are by no means exhaustive. The dance below was by Marpi, with Hien Hyunh dancing:
In their performance, Marpi connects Hien's hand movements with two virtual geometries projected behind. The projected lights create an additional energy; without them, the performance is not the same. Together, Marpi and Hien exaggerate what is perceived from the dance.
Another work by Hanley Weng marries Apple's CoreML, a machine learning library, with ARKit, an augmented reality framework, on the iPhone:
Weng feeds the camera image to a machine learning model that attempts to identify what it sees. Upon recognition, a name label is placed in proximity to the object. The potential applications are broad and very exciting.
In Fragments, a game made for the Microsoft HoloLens, the player acts as a detective trying to piece together what happened at a murder scene. In the animation, we see the player inspecting a transit schedule on their desk. Their office has been transformed into a crime scene, reappropriated for the context at play.
Applying the concept of reappropriating a space to scientific visualisation, mixed reality platforms become immensely meaningful as a communication tool.
I bend Wikipedia's definition of scientific visualisation for my use: scientific visualisation aims to graphically illustrate scientific data to enable scientists, and non-scientists, to better understand and glean insight from their data.
In other words, mixed reality platforms can act as knowledge translation, a bridge to scientific insight that might normally be inaccessible. However, the change in frame of mind and perspective is difficult to measure. We have yet to see what specific value mixed reality will bring to the table, but we're at the right time to explore it.
The above animation is a mockup I produced at the end of 2016. The map shows the Victoria-Tasmania domain in Australia, and the smoke is a crude signal of a major January 2016 Tasmanian bushfire. The smoke travelled all the way to Victoria over 10 days, and emergency calls were recorded complaining about the smoky air.
Fast-forward six months. Working one day a week, I had integrated a range of historical data: wind vectors, smoke magnitudes across 16 elevations, satellite imagery, and ground observations from weather stations, as well as playback controls. Thanks to scientist Martin Cope, who provided me with the data.
A version not too dissimilar to the above prototype was brought to the International Biomass Burning Initiative Workshop in Boulder, Colorado. People who tried the demo loved where it was going, and I was happy to hear their responses. It's no more than a pat-on-the-back exercise, of course, but it was certainly encouraging for a solo developer working one day a week on this.
I got to check out the coolest VR thanks to Martin Cope of #CSIRO. Satellite and #airquality data from #fires in Tasmania! #DayofScience pic.twitter.com/79bfHpebWo
— Christine Wiedinmyer (@cwiedinm) July 13, 2017
What I discovered through my personal experience was that mixed reality provides insight with mobility. It's one thing to display virtual content on a monitor, but nothing comes more naturally than walking around the model and checking it out from up close, sideways, and at an angle. Mixed reality not only re-encodes our spaces, but also provides an avenue of discovery that befits the purpose of scientific visualisation.
Let's talk about some of the unresolved challenges in mixed reality. It is important to recognise that we are still in the early stages of what's coming, and to identify the gaps.
In closing, please allow me to indulge in another of Jason E. Powell's rephotographs, this time of Harpers Ferry, West Virginia.
Powell pulled no punches on the drastic change in the landscape over 150 years. He wrote under this photograph:
None of the trees were there in 1865, most of the gravestones seemed to still be there, although the graveyard was overrun, there’s a house that isn’t there any longer, but you can see the hint of the church steeple poking over the house in the original shot. That’s still there. The canal is completely dry and there’s a tunnel through the mountain. The bridge doesn’t exist anymore, either. All in all, I’d say there isn’t much left from 1865 Harper’s Ferry, honestly.
No mixed reality technology as sophisticated as today's existed back in the '60s, when virtual reality research was in its infancy. The bulky trackers are gone, replaced by ever-smaller components that are increasingly mass-produced.
Perhaps the standard mixed reality platform 50 years from now will be as alien to us as today's would have been 50 years ago, with today's devices gone without a trace. We will see them only through remixing the past.
Tags: Mixed Reality, Data Visualisation, Science, Presentation, CSIRO
This course was given by Juan Miguel de Joya, Neil Trevett, and me at Web3D 2017 in Brisbane, Australia.
WebGL 2.0 has landed, and the future of graphics on the web is here. In this workshop, we introduce the rendering specification for browsers. New features in WebGL 2.0, including geometry instancing, transform feedback, and 3D textures, will be covered in depth.
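As a taste of one of those features, here is a minimal sketch of allocating a 3D texture, new in WebGL 2.0 and handy for volume data; the canvas handle, texture size, and data are illustrative placeholders, not course code:

```javascript
// Minimal sketch: upload a 3D texture, a new core feature in WebGL 2.0.
// `canvas`, the size, and the volume data are illustrative.
const gl = canvas.getContext('webgl2');

const size = 16;
const volume = new Uint8Array(size * size * size); // e.g. a density field

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, texture);
gl.texImage3D(
  gl.TEXTURE_3D,    // target: 3D textures require WebGL 2.0
  0,                // mip level
  gl.R8,            // internal format: one 8-bit channel
  size, size, size, // width, height, depth
  0,                // border (must be 0)
  gl.RED,           // source format
  gl.UNSIGNED_BYTE, // source type
  volume
);
// Sample without mipmaps so the texture is complete as-is.
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
```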
Full course material and slides are available here.
The following points are distilled from my experience attending hackathons so far. I hope they will be useful to those who are new to the format.
Tags: Hackathons
This talk was originally given at Melbourne Augmented Reality Meetup, hosted by CSIRO Data61. I wrote up the transcript while my mind was fresh.
If you haven't been to Melbourne Museum lately, check out CSIRAC, Australia's first computer, from 1949. It was a significant investment, in service for over 15 years; it used punch cards, and it could compute 1000 times the number of equations a single person could do in one day.
Punch cards were data, and easily replaceable: someone could (with patience) spot a mistake, replace a card, and feed it back into the machine. Once the data was verified and correct, it could be replicated and posted to other people, who could run similar calculations.
Computer terminals were the next leap. You could type on a physical keyboard, and the computer would give you a response. This kind of instantaneous interactivity changed the way scientists work. What wasn't possible before suddenly became something that could be done in a much shorter period of time.
Computer scientists and programmers weren't the only people who embraced computers. Storytellers, filmmakers, and musicians alike went to learn programming, or collaborated with programmers. They created interactive experiences through digital media, and broke new ground.
Today we live in a world where photorealistic experiences are arriving at interactive rates. Computing power continues to rise while becoming more affordable and more efficient. It's not just automation and interaction, but also the digitisation of processes and workflows.
Today we live in a world where motion capture technology that cost $100,000+ twenty years ago can be bought for less than $2,000 for home use. Authoring and content creation tools like Unity are maturing, and many open source and commercial efforts are building on each other.
Improvisation became innovation. ReMoTe is one such synthesis: the remote worker's location is streamed back to an instructor, who can project stereoscopic overlays, like their hands captured through Kinect, back to the worker on site. It was cheaper to collaborate remotely than to fly someone over. It was like a Skype call, but in 3D, and in front of you.
Zebedee is a hand-held LIDAR scanner (you'd still lug a trolley behind you) that can digitise an entire location as you simply walk around it. The spinning laser continuously records the point cloud, and the spring-mounted head gives it plenty of resolution and reach.
I want to explore what mixed reality can bring to the table for scientists and researchers. The work above was done by Eleanor McMurtry, one of my students last summer. Given the data, we can quickly tell where the front door and the halls are, and leave each other a note. 3D data is much more intuitive than floor plans, and doesn't rely on existing knowledge of the place.
My colleagues Matt Adcock and Stuart Anderson set up this insect photo station at Data61. They mounted the sample on a spinning disc and automated the whole photographing process. The result was 100+ photos per insect, used to reconstruct it in 3D and preserve the fragile sample for future use.
As a result, many insects can now be viewed up close, at whatever angle is natural to the interested scientist. They no longer have to be careful about handling the sample. Better yet, the process of measuring the lengths of the limbs and the sizes of the bodies could also be automated, saving even more time down the track.
CluckAR, developed by Choice Australia, is a consumer-facing mobile application. It works with all Australian egg providers found at supermarkets, and it will show you how free-range the chickens really are.
We are still exploring what mixed reality can bring to research, and we've learned a few lessons along the way. I want to share some of those lessons, and tell you a bit more about where things are headed, especially for mixed reality on the web.
Mixed reality is a visualisation platform. We can bring existing workflows from the entertainment industry to data science and visual analytics. Real-time exploration of historic and new context is here, and the tools are becoming more accessible for modern developers and non-developers alike. For scientific visualisation, the added bonus is all the existing tools we can bring into our research.
Mixed reality encourages physical exploration. There is something curious about being able to get up close to your dataset, and look at it in ways you are naturally inclined to. It is sometimes difficult to get the angle you want with a mouse-and-keyboard combo, and nothing is more natural than your own body movement and your own eyes.
Mixed reality will be delivered across the web. Just as movies and music have already moved to streaming platforms, 3D content will also benefit from streaming. For one, you no longer have to wait for a massive download to get things going. Your browser will download the things you can see first, and fill in objects as they become available.
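As a hedged sketch of that idea, using three.js and its GLTFLoader example add-on; the asset list, its ordering, and the URLs are illustrative assumptions:

```javascript
// Sketch: stream a scene in piece by piece instead of one massive download.
// Assumes three.js plus the GLTFLoader add-on; asset URLs are hypothetical.
const scene = new THREE.Scene();
const loader = new THREE.GLTFLoader();

// Hypothetical assets, ordered so the most visible objects arrive first.
const assets = ['terrain.glb', 'buildings.glb', 'trees.glb'];

assets.forEach(url => {
  loader.load(url, gltf => {
    scene.add(gltf.scene); // fill in each object as it becomes available
  });
});
```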
There is much more to see in where technology is headed. It is unlikely that in 5 years we will all have contact lenses that deliver mixed reality magic. But the devices are maturing, tools are shared and built upon, and web services are increasingly offered for others to create on. Right now, web technologies like After The Flood are already good enough to deliver immersive experiences.
Our people are just as good as, if not better than, average, regardless of the obstacles we face in government, internet speeds, and hardware. No matter where you live, as long as you have access to the tools to develop mixed reality technology, you have the opportunity to shape the industry.
Device manufacturers are still pushing hard, at least in my experience, for early-adopting developers. Mixed reality will not replace our existing reality, but it will serve as an enhancement for our lives, like Google Translate with its real-time text replacement. There's no app yet that we cannot live without, but many are quite handy in the right context.
I hope I have given you some glimpses of what we have worked on, where mixed reality is going in terms of existing technologies, and what may be coming in the future.
Tags: Mixed Reality, Science
View the presentation material here (feel free to skip signing in).
The next generation of computer graphics on the web, an open standard known as WebGL 2.0, has landed in Firefox, Chrome, and Opera, with other browsers fast approaching. This enables an additional layer of GPU-accelerated content.
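Because support is still uneven across browsers, a sensible first step is a feature check with a graceful fallback. A minimal sketch:

```javascript
// Detect WebGL 2.0, falling back to WebGL 1.0 where it hasn't landed yet.
const canvas = document.createElement('canvas');
const gl2 = canvas.getContext('webgl2');
const gl = gl2 || canvas.getContext('webgl');

if (gl2) {
  console.log('WebGL 2.0 is available');
} else if (gl) {
  console.log('No WebGL 2.0 yet; using WebGL 1.0');
} else {
  console.log('This browser has no WebGL support');
}
```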
Our browsers are already GPU-accelerated; CSS3 Transform is one well-loved example. As designers, developers and creatives, we're empowered by the community that we work with, and the community creates tools to empower our work. So what does WebGL 2.0 mean for the web community?
In this talk, we discuss the modern design of the graphics rendering pipeline using WebGL as a framework, and go through example code to concretise the theory. Then, we look at how ThreeJS uses WebGL 2.0, as well as new tools that might change the face of the 3D web for years to come.
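To give a flavour of that example code, here is a sketch of geometry instancing, one of the headline WebGL 2.0 features; the `gl` context and a linked `program` with a per-instance `offset` attribute are assumed to be set up elsewhere:

```javascript
// Sketch: draw many copies of one triangle in a single call via instancing.
// Assumes `gl` is a WebGL2RenderingContext and `program` is linked with an
// `offset` attribute that should advance once per instance.
const offsets = new Float32Array([
  -0.5, 0.0, // instance 1
   0.0, 0.0, // instance 2
   0.5, 0.0, // instance 3
]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);

const loc = gl.getAttribLocation(program, 'offset');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(loc, 1); // advance per instance; core in WebGL 2.0

gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, 3); // 3 vertices x 3 instances
```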
This presentation was originally given at Computer Graphics on the Web in Melbourne. Video kindly provided by Common Code.
Tags: WebGL, Presentation
I originally wrote this as an email response to an internal query, but I thought to post it publicly so that others may benefit. Please leave a comment and share your thoughts with me. The article has been lightly revised for a public audience.
Check out the Mapzen Vector Tile Service, which can be queried as GeoJSON and transformed into other vector formats that OpenLayers, D3.js, etc. can understand. Vector tile support is also spreading: Mapbox, Mapzen, Terria, and Cesium offer it, and Leaflet has a plugin.
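As a hedged sketch of the GeoJSON route with D3 v4; the layer name, tile coordinates, and api_key are illustrative placeholders:

```javascript
// Sketch: fetch one Mapzen vector tile as GeoJSON and draw it with D3 v4.
// The layer, tile coordinates, and api_key below are placeholders.
const url = 'https://tile.mapzen.com/mapzen/vector/v1/roads/14/15066/9841.json?api_key=YOUR_KEY';

const projection = d3.geoMercator();
const path = d3.geoPath().projection(projection);

d3.json(url, (error, tile) => {
  if (error) throw error;
  d3.select('svg').selectAll('path')
    .data(tile.features) // the tile arrives as a GeoJSON FeatureCollection
    .enter().append('path')
    .attr('d', path);
});
```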
Like the recently released deck.gl v4.0, these mapping services are native to the web and accelerated by WebGL. The potential for computational visualisation power is high.
There are borderline interaction papers, but some of the more interesting outcomes came out of the W4A Accessibility Hack. The delegate winner, Kieran Mesquita, created a Docker container for TAO (Sam would be proud). Normally it takes 1-2 hours to set up TAO just to get it working; now you can do it in 15 minutes.
TAO is an open source program that can assess WCAG accessibility conformance, and since all government agency websites have strict accessibility requirements, this will be really handy down the line.
Terry Crowley wrote an excellent article called Education of a Programmer, which refers to Joel Spolsky’s “Law of Leaky Abstractions” and to managing software complexity. Both are long reads but totally worth the time.
Frederick Brooks Jr. wrote a journal article titled No Silver Bullet—Essence and Accidents of Software Engineering in 1986. He listed the essences (difficulties fundamental to creating software) and the accidents (incidental difficulties of building it), talked about how different solutions may only solve a small part of a general problem, and defined them well. The three-decade-old article stands well against the test of time, I think.
Here are the factors to consider:
What does it mean to browse a 3D/VR website vs a normal flatscreen website? Does that mean people will start strapping phones to their faces? I doubt it.
People will start using phones in ways we never designed them for, because there will be incentives to repurpose a WebVR/WebAR-enabled phone for specific tasks. If there is a need to be filled, be it attending a social event, caring for animal welfare, or sharing digital worlds, they have already begun.
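A page can already ask whether such a repurposed phone or headset is present. A small sketch using the era's WebVR 1.1 API (since superseded by WebXR):

```javascript
// Sketch: detect a WebVR display on a phone or headset (WebVR 1.1 era API).
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(displays => {
    if (displays.length > 0) {
      console.log('VR display found:', displays[0].displayName);
      // Hand the display to the render loop for stereo presentation.
    } else {
      console.log('WebVR is supported, but no display is connected.');
    }
  });
} else {
  console.log('WebVR is not supported in this browser.');
}
```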
The next big adjacent idea, borrowing the term from James McQuivey, will be contributing to the 3D Web ecosystem while being fit for purpose. It could be tool creation, enhancing the rendering pipeline, or a platform for publishing. Whatever it is, we’re at the right moment for it.
Tags: Maps, Web, Accessibility, Virtual Reality
I was invited to be on a mixed reality panel at WWW2017 with Mark Pesce, Viveka Weiley, Stuart Anderson, and Kate Raynes-Goldie. The panel was well received and had a good turnout, following Mark Pesce’s keynote on Mixed Reality Service that morning.
As part of the conference, I attended the Trust Factory and learned a lot about governance, policy, and research on trust and security on the web. The most notable was probably W3C Verifiable Claims, a third-party certifiable process that proves some information to be genuine. I also learned about distributed ledgers; by the way, did you know that Amazon checkouts do not go through a centralised choke point, and they have to deal with possible backorders?
I also attended a workshop held by Rossano Schifanella (who recommended I try out Mapzen; good idea), Bart Thomee, and David Shamma. (They are all ex-Yahoo! engineers who went their separate ways.) They delivered a crash course on geography/cartography literature, geospatial analytics, and visualisation. I also met Martin Tomko at the workshop, who is doing very interesting work at Unimelb; I hope to catch up over coffee with him one day.
For the rest of the conference I saw various industry and research talks. Interesting ones included setting up an internet service for people on Mars (fun fact: it takes 8–48 minutes to relay) and how Gmail uses inbox history to predict new emails and filter spam. You can see the entire WWW 2017 proceedings here.
Last but not least, the day before our panel, Nature published a climate forcing paper discussing a strong correlation between solar intensity and CO2 concentration in our atmosphere. We decided to run a 3-hour hackathon to produce a visualisation that explains the paper's effect visually, all done in WebGL. Hope you like it.
Tags: Conference, Panel, Debrief
Read the updated material here. (You can skip sign in to Dropbox.)
Know a bit of programming, but not sure how to start in Javascript? This talk is for you. Originally recorded for CSIRO show'n'tell, I hope this talk will be useful to others who are comfortable following video tutorials.
Have questions or comments? Leave it on the Paper.
Tags: Javascript, Presentation
I have a new GPG key, created on 21 March 2017.
Fingerprint: 200F 0D74 C777 7235 6758 B547 FDE4 31F5 EB11 C755
Keyserver link is also available.
Tags: Security
Above: Jacob Bijani & Pasquale D'Silva dressed up as Dracula, talking about animation and game development at Direction 2016.
This was my 7th year in attending Web Directions, now known as Direction. A two-day conference (plus masterclasses) with inspiring design talks and passionate attendees. It's my personal favourite and how my career was launched in Sydney.
As I did in the past, here is a series of speaker and MC portraits I took during the conference. Missed it? Don't worry, Ben Buchanan compiled a large list of notes for you. The ending keynote is also available as a post-speech transcript with slides, arranged by the amazing Maciej Ceglowski.
Special thanks to all the attendees, speakers, volunteers, venue staff, and conference organisers who made this event possible and awe-inspiring this year.
Tags: Web Directions, Conference, Photography
Nerd Nite Melbourne was kind enough to have me speak on 6 December 2016. I gave a talk about the history of roguelike games and how they came to be. Below is my recording, as well as the slides to follow along.
If you are in Melbourne and interested in joining our meetup, Computer Graphics on the Web, check it out!
Tags: Roguelikes, History, Timeline, Data Visualisation
As part of my doctorate research journey, I was lucky to be back at Meaningful Play 2016, this time presenting a full paper on roguelike games.
I developed Roguelike Universe with some D3.js data visualisation goodness, stretching some boundaries along the way. It is open source on GitHub, too.
You can view the slides online, or below:
Tags: Roguelikes, Inspiration Network
I was invited to be part of CityLab's design sprint in October 2016 to brainstorm and prototype a community engagement software solution with a group of 10.
Bringing together City of Melbourne employees with experts from creative agencies and the technology sector, we're moving at pace—from concept to prototype in 3.5 days—to test and trial new ideas and city services for the community.
Tags: Interaction Design, Future
Listen to the talk or download the MP3:
You can apply design principles in your work, too.
I gave a talk at the Learning Analytics and Education Data Seminar, University of New South Wales. Half or more of the audience were educators and instructional designers in academia. While this talk was structured for them, the principles are applicable to other fields.
In this talk, "Design as Invitation to Interaction", I presented three barriers to designed objects, and used case studies as well as examples of good and bad design to show how they can be over come. They are:
I referenced several video productions and documents I was involved in during the talk. You can follow along:
You can find a copy of my slides (PDF), or view them below.
D3.js is a popular data visualisation framework for Javascript and the web. In July 2016, Mike Bostock released the new version, v4. To celebrate the release, I gave this talk at the CSIRO data science webinar. The content covers: what is D3? Why do people use data visualisation? What is it for? I show examples of recent works and point to a couple of resources. Then, D3's enter and exit pattern is briefly outlined, as sketched below. I also talk about the modularisation of D3 v4, as well as a couple of libraries that go with it.
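For readers who want the enter/exit pattern at a glance, here is a minimal sketch in D3 v4; the data and SVG container are illustrative:

```javascript
// Sketch: D3 v4's enter/exit (general update) pattern on a bar chart.
const data = [4, 8, 15, 16, 23, 42];

const bars = d3.select('svg').selectAll('rect').data(data);

// enter: create a rect for each new datum, then merge with the update selection
bars.enter().append('rect')
  .merge(bars) // v4 replaces v3's implicit enter-update behaviour with merge()
  .attr('x', (d, i) => i * 25)
  .attr('y', d => 100 - d * 2)
  .attr('width', 20)
  .attr('height', d => d * 2);

// exit: remove rects whose data has gone away
bars.exit().remove();
```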
Tags: D3, Data Visualisation, Talks
Jump to Glide is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.