Reality Mix-up: Science Applications

This talk was originally given at the Melbourne Augmented Reality Meetup, hosted by CSIRO Data61. I wrote up the transcript while it was fresh in my mind.

If you haven't been to Melbourne Museum lately, check out CSIRAC, Australia's first computer, built in 1949. It was a significant investment that remained in service for over 15 years; it used punch cards, and it could work through 1,000 times the number of equations a single person could do in one day.

Punch cards were data, but easily replaceable: someone could (with patience) spot a mistake, replace a card, and feed the deck back into the machine. Once the data was verified and correct, it could be replicated and posted to other people, who could then run similar calculations.

Computer terminals were the next leap. You could type on a physical keyboard, and the computer would respond. This kind of instantaneous interactivity changed the way scientists work: what wasn't possible before suddenly became achievable in a fraction of the time.

Above: Caves of Qud.

Computer scientists and programmers weren't the only people who embraced computers. Storytellers, filmmakers, and musicians alike learned to program, or collaborated with programmers. They created interactive experiences through digital media and broke new ground.

Today we live in a world where photorealistic experiences render at interactive rates. Computing power continues to rise while becoming more affordable and more efficient. It's not just automation and interaction, but also the digitisation of processes and workflows.

Today we live in a world where motion capture technology that cost $100,000+ twenty years ago can be bought for home use for less than $2,000. Authoring and content creation tools like Unity are maturing, and many open source and commercial efforts are building on each other.

Improvisation became innovation. ReMoTe is one such synthesis: the remote worker's location is streamed back to an instructor, who can project stereoscopic overlays, such as their hands captured through a Kinect, back to the worker on site. It was cheaper to collaborate remotely than to fly someone over. It was like a Skype call, but in 3D, and right in front of you.


Zebedee is a hand-held LIDAR scanner (you'd still lug a trolley behind you) that can digitise an entire location as you simply walk around it. The spinning laser continuously records a point cloud, and the spring-mounted head gives it plenty of resolution and reach.

I want to explore what mixed reality can bring to the table for scientists and researchers. The work above was done by Eleanor McMurtry, one of my students last summer. Given the data, we can quickly tell where the front door and the halls are, and leave each other notes. 3D data is much more intuitive than a floor plan, and doesn't rely on prior knowledge of the place.


My colleagues Matt Adcock and Stuart Anderson set up this insect photo station at Data61. They mounted the sample on a spinning disc and automated the whole photographing process. The result was 100+ photos of a single insect, used to reconstruct it in 3D and preserve the fragile sample for future use.

The result is that many insects can now be viewed up close, from whatever angle interests the scientist, without careful handling of the physical sample. Better yet, the process of measuring limb lengths and body sizes could also be automated, saving even more time down the track.

CluckAR, developed by Choice Australia, is a consumer-facing mobile application. Point it at any egg carton found in Australian supermarkets, and it will show you how free range the chickens really are.

We are still exploring what mixed reality can bring to research, and we've learned a few lessons along the way. I want to share some of those lessons with you, and tell you a bit more about where things are headed, especially for mixed reality on the web.

Mixed reality is a visualisation platform. We can bring existing workflows from the entertainment industry to data science and visual analytics. Real-time exploration of historical and new data is here, and the tools are becoming more accessible to developers and non-developers alike. For scientific visualisation, the added bonus is all the existing tools we can bring into our research.

Mixed reality encourages physical exploration. There is something compelling about being able to get up close to your dataset and look at it the way you are naturally inclined to. It is sometimes difficult to get the angle you want with a mouse and keyboard, and nothing is more natural than your own body movement and your own eyes.

Mixed reality will be delivered across the web. Just as movies and music have moved to streaming platforms, 3D content will benefit from streaming too. For one, you no longer have to wait for a massive download to get things going: your browser downloads the things you can see first, and fills in objects as they become available.
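
To make the idea concrete, here is a minimal sketch of that fill-in pattern using three.js (the model path is a placeholder): show a cheap stand-in right away, then swap in the detailed asset once it has streamed in.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Show a cheap wireframe stand-in immediately...
const placeholder = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ wireframe: true })
);
scene.add(placeholder);

// ...then swap in the detailed model once it has streamed in.
// 'models/statue.glb' is a hypothetical asset path.
new GLTFLoader().load('models/statue.glb', (gltf) => {
  scene.remove(placeholder);
  scene.add(gltf.scene);
});
```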


There is much more to see as the technology matures. It is not likely that in 5 years we will all have contact lenses that deliver mixed reality magic. But the devices are maturing, tools are shared and built upon, and web services are increasingly offered for others to create on. Right now, WebGL demos like After The Flood show that web technologies are good enough to deliver immersive experiences.

We have people just as good as, if not better than, anywhere else, regardless of the obstacles we face in government, internet speeds, and hardware. No matter where you live, as long as you have access to the tools to develop mixed reality technology, you have the opportunity to shape the industry.

Device manufacturers are still pushing hard, at least in my experience, to win over early-adopting developers. Mixed reality will not replace our existing reality, but it will enhance our lives, the way Google Translate does with real-time text replacement. There's no app yet that we cannot live without, but many are quite handy in the right context.

I hope I have given you some glimpses of what we have worked on, where mixed reality is going in terms of existing technologies, and what may be coming in the future.

Guide to WebGL 2.0

View the presentation material here (feel free to skip signing in).

The next generation of computer graphics on the web, an open standard known as WebGL 2.0, has landed in Firefox, Chrome, and Opera, with other browsers fast approaching. This enables an additional layer of GPU-accelerated content. 

Our browsers are already GPU-accelerated; CSS3 Transform is one well-loved example. As designers, developers and creatives, we're empowered by the community that we work with, and the community creates tools to empower our work. So what does WebGL 2.0 mean for the web community?

In this talk, we will discuss the design of the modern graphics rendering pipeline, using WebGL as the framework, and go through example code to make the theory concrete. Then we will look at how ThreeJS uses WebGL 2.0, as well as new tools that might change the face of the 3D web for years to come.
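
As a small taste of the example code, here is a minimal sketch of feature-detecting WebGL 2.0 in the browser. getContext returns null for unsupported context types, which makes a graceful fallback straightforward:

```js
// Quick check for WebGL 2.0 support with a fallback to WebGL 1.0.
const canvas = document.createElement('canvas');
const gl2 = canvas.getContext('webgl2');
const gl = gl2 || canvas.getContext('webgl');

if (gl2) {
  console.log('WebGL 2.0:', gl2.getParameter(gl2.VERSION));
} else if (gl) {
  console.log('WebGL 2.0 unavailable, falling back to WebGL 1.0');
} else {
  console.log('No WebGL support at all');
}
```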

This presentation was originally given at Computer Graphics on the Web in Melbourne. Video kindly provided by Common Code.

Maps Visualisation, Web Accessibility, and the Future of the 3D Web

I originally wrote this as an email response to an internal query, but I thought I'd post it publicly so others may benefit. Please leave a comment and share your thoughts with me. The article has been lightly revised for a public audience.

Maps Visualisation

Check out the Mapzen Vector Tile Service, which can be queried as GeoJSON and transformed into other vector formats that OpenLayers, D3.js, and others can understand. There are also more and more vector tile consumers: Mapbox, Mapzen, Terria, and Cesium support them, and Leaflet has a plugin.

Like the recently announced deck.gl v4.0 release, these mapping services are native to the web and accelerated by WebGL, which puts a great deal of computational power behind visualisation.
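
As a rough sketch of how the pieces fit together, here is a GeoJSON tile fetched from Mapzen and drawn with D3 v4. The tile coordinates and API key are placeholders, and the layer names assume the service's usual roads/buildings/water split:

```js
// Tile coordinates and api_key are placeholders; substitute your own.
const url = 'https://tile.mapzen.com/mapzen/vector/v1/all/16/57999/39561.json'
          + '?api_key=your-mapzen-key';

d3.json(url, (error, tile) => {
  if (error) throw error;

  // Each named layer (roads, buildings, water, ...) arrives as a
  // GeoJSON FeatureCollection ready for d3-geo.
  const roads = tile.roads;
  const projection = d3.geoMercator().fitSize([800, 600], roads);

  d3.select('svg').selectAll('path')
    .data(roads.features)
    .enter().append('path')
      .attr('d', d3.geoPath().projection(projection))
      .attr('fill', 'none')
      .attr('stroke', '#888');
});
```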

Accessibility for the Web

There are some borderline interaction papers, but the more interesting results may be what came out of the W4A Accessibility Hack. The delegate winner, Kieran Mesquita, created a Docker container for TAO (Sam would be proud). Normally it takes 1-2 hours of setup just to get TAO working; now you can do it in 15 minutes.

TAO is an open source program that can assess WCAG accessibility conformance. Since all government agency websites have strict accessibility requirements, this will be really handy down the line.

Software Engineering

Terry Crowley wrote an excellent article called Education of a Programmer, which refers to Joel Spolsky's "Law of Leaky Abstractions" and discusses managing software complexity. Both are long reads but well worth the time.

Frederick Brooks Jr. wrote "No Silver Bullet: Essence and Accidents of Software Engineering" in 1986. He separated essence (the difficulties fundamental to creating software) from accident (the incidental difficulties of the tools we build it with), and argued that any single solution can only address a small part of the general problem; he defines both terms well. Three decades on, the article stands the test of time, I think.

The Future of 3D/VR on the Web

Here are the factors to consider:

  • The web is fundamentally a publishing platform. Its greatest strength is delivering content across the globe (and eventually, in space, no doubt). For 3D/VR, the ability to stream content continues to improve.
     
  • WebGL is the principal web-based graphics rendering API. Version 2.0 landed last month in 3 major desktop browsers.
     
  • WebVR is being standardised by the W3C (covering 360° video among other things), with Khronos working on OpenXR. Add the open source effort around OpenVR, which makes peripherals available to any desktop machine, and I think we'll see many more devices become available as graphics chips get more energy-efficient.
     
  • The leading company in this effort is AltspaceVR, which has seen 1,000+ concurrent users celebrating sports events and presidential campaigns. What is interesting about their technology is that it is built on A-Frame, an HTML-extension library, which means any of their virtual objects can be annotated to meet accessibility requirements (see the sketch after this list). I'm super excited to see what they are doing to translate visuals to text, sound, or any other sensory format.
     
  • It is important to remember that Mixed Reality will continue to grow. It can be a virtual overlay on a transparent display, or composited onto an opaque video feed.
     
  • We will see a few major paradigm shifts in the coming years:
  1. Smartphones will continue to overtake desktops and laptops, so development effort will continue to spill over to mobile.
     
  2. Mouse and keyboard? Touch screens? Accelerometers? Remote controllers? There will be a standard interaction paradigm across these different inputs.
     
  3. Authoring 3D content will no longer be constrained to expert users of Maya, Blender, and the like, although those packages will remain the industry standard for high-quality content. TiltBrush, Medium, and SculptGL are early drivers of content creation with a low barrier to entry.
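
On the A-Frame point above: because A-Frame entities are ordinary DOM nodes, annotations can ride along with the 3D attributes. A minimal sketch, with an illustrative label:

```js
// Assuming an A-Frame <a-scene> is already on the page.
const scene = document.querySelector('a-scene');

// A-Frame entities are ordinary DOM nodes, created like any element.
const box = document.createElement('a-box');
box.setAttribute('color', '#4CC3D9');
box.setAttribute('position', '0 1.25 -3');

// ...which means they can carry annotations alongside 3D attributes.
box.setAttribute('aria-label', 'A blue cube, three metres ahead');

scene.appendChild(box);
```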

What does it mean to browse a 3D/VR website vs a normal flatscreen website? Does that mean people will start strapping phones to their faces? I doubt it. 

People will start using phones in ways we never designed them for, because there will be incentives to repurpose a WebVR/WebAR-enabled phone for specific tasks. If there is a need to be filled, be it attending a social event, caring for animal welfare, or sharing digital worlds, it has already begun.

The next big adjacent idea, to borrow the term from James McQuivey, will be contributing to the 3D Web ecosystem while being fit for purpose. It could be tool creation, enhancing the rendering pipeline, or a publishing platform. Whatever it is, we're at the right moment for it.

WWW 2017

I was invited to be on a mixed reality panel at WWW2017 with Mark Pesce, Viveka Weiley, Stuart Anderson, and Kate Raynes-Goldie. The panel was well received and had a good turnout, following Mark Pesce's keynote on Mixed Reality Service that morning.

As part of the conference, I attended the Trust Factory and learned a lot about governance, policy, and research on trust and security on the web. The most notable item was probably the W3C Verifiable Claims work, a third-party-certifiable process for proving some information to be genuine. I also learned about distributed ledgers; by the way, did you know that Amazon checkouts do not go through a centralised choke point, so they have to deal with possible backorders?

I also attended a workshop held by Rossano Schifanella (who recommended I try out Mapzen; good idea), Bart Thomee, and David Shamma. (They are all ex-Yahoo! engineers who went their separate ways.) They delivered a crash course on the geography/cartography literature, geospatial analytics, and visualisation. I also met Martin Tomko at the workshop, who is doing very interesting work at Unimelb; I hope to catch up with him over coffee one day.

For the rest of the conference I saw various industry and research talks. Some interesting ones included setting up an internet service for people on Mars (fun fact: it takes 8–48 minutes to relay) and how Gmail uses inbox history to predict new emails and filter spam. You can see the entire WWW 2017 proceedings here.

Last but not least, the day before our panel, Nature published a climate forcing paper discussing a strong correlation between solar intensity and CO2 concentration in our atmosphere. We ran a 3-hour hackathon to produce a visualisation that explains the paper's effect visually, all done in WebGL. Hope you like it.

Direction 2016

Above: Jacob Bijani & Pasquale D'Silva dressed up as Dracula, talking about animation and game development at Direction 2016.

This was my 7th year attending Web Directions, now known as Direction: a two-day conference (plus masterclasses) with inspiring design talks and passionate attendees. It's my personal favourite, and it's how my career in Sydney was launched.

As I have in the past, here is a series of speaker and MC portraits I took during the conference. Missed it? Don't worry: Ben Buchanan compiled a large list of notes for you. The closing keynote is also available as a post-speech transcript with slides, arranged by the amazing Maciej Ceglowski.

Special thanks to all the attendees, speakers, volunteers, venue staff and conference organisers who made this event possible and awe-inspiring this year.

CityLab Melbourne Design Sprint

I was invited to be part of CityLab's design sprint in October 2016 to brainstorm and prototype a community engagement software solution with a group of 10.

Bringing together City of Melbourne employees with experts from creative agencies and the technology sector, we moved at pace, from concept to prototype in 3.5 days, to test and trial new ideas and city services for the community.

Interaction Design for Education Designers

Listen to the talk or download the MP3:

You can apply design principles in your work, too.

I gave a talk at the Learning Analytics and Education Data Seminar at the University of New South Wales. More than half of the audience were educators and instructional designers in academia. While this talk was structured for them, the principles apply to other fields as well.

In this talk, "Design as Invitation to Interaction", I presented three barriers to designed objects, and used case studies as well as examples of good and bad design to show how they can be over come. They are:

  1. Mistimed, misplaced, misused
  2. It's dangerous to go alone!
  3. Technology is creepy

During the talk I referenced several video productions and documents I was involved in. You can follow along:

You can find a copy of my slides (PDF) below.

Moving Towards D3 v4

D3.js is a popular data visualisation framework for JavaScript and the web. In July 2016, Mike Bostock released the new version, v4. To celebrate the release, I gave this talk at the CSIRO data science webinar. The content covers what D3 is, why people use data visualisation, and what it is for. I show examples of recent work and point to a couple of resources. Then I briefly outline D3's enter and exit pattern, and discuss the modularisation of D3 v4 along with a couple of libraries that go with it.
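
For reference, here is a minimal sketch of the enter and exit pattern in D3 v4, assuming an empty svg element on the page. Note that v4 makes the update step explicit through merge():

```js
const svg = d3.select('svg');
const data = [10, 40, 25];

// The data join: bind an array to a selection of circles.
const circles = svg.selectAll('circle').data(data);

circles.enter().append('circle')   // enter: one new circle per new datum
    .attr('cy', 50)
    .attr('r', 8)
  .merge(circles)                  // v4: explicitly merge enter + update
    .attr('cx', d => d * 5);

circles.exit().remove();           // exit: remove circles with no datum
```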

Smoke and Fire

Smoke and Fire (live website) is an interactive visualisation of air pollution in Australia, particularly shipping plumes along the east coast of New South Wales. The aim was to present the data to experts as well as the general public, communicating issues around air pollution and air quality monitoring.

I developed the WebGL-based visualisation, along with a set of Python scripts to convert the pre-processed satellite data into a web-friendly format.

To illustrate air pollution issues, we took a vector-based representation of the globe and overlaid it with opaque colours representing different types of pollution sources.

The brighter the colour, the denser the fine particles filling that region. Blue represents PM2.5, particles up to 2.5 micrometres in diameter. Yellow represents shipping plumes, and red spots are strong emission sources where a bushfire has occurred or a significant amount of smoke has been registered.
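
For illustration, here is a minimal sketch of a colour encoding along those lines. The exact channel mapping used in Smoke and Fire may differ; the function below is an assumption based on the description above:

```js
// A sketch of the colour encoding described above; the actual mapping
// in Smoke and Fire may differ, this is an illustration only.
// Each density is normalised to [0, 1]; brighter means denser.
function encodeColour(pm25, shipping, fire) {
  return {
    r: Math.min(1, shipping + fire), // red contributes to yellow and fire
    g: shipping,                     // yellow = red + green
    b: pm25,                         // blue for fine PM2.5 particles
  };
}

// A region with a heavy shipping plume and a little fine particle haze:
console.log(encodeColour(0.2, 0.9, 0)); // { r: 0.9, g: 0.9, b: 0.2 }
```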

The visualisation was exhibited at the Clean Air Society of Australia and New Zealand (CASANZ) conference in September, 2015.

Smoke and Fire made the Kantar Information is Beautiful Awards longlist.

That's me in the middle.

The source code is available on GitHub here.

I also gave a talk on the development and design process of Smoke and Fire at OzViz 2015. You can see the entire talk with presenter notes here.

In the future we hope to update it as new data become available.

Wordscapes

Above: "Laughter", based on a community translation of Henry Bergson's 1900 essay, "Le Rire".

This year, Phil Gough and I co-authored digitally generated artworks based on public domain books archived by Project Gutenberg. We are proud to say that our work is featured by the SIGGRAPH Digital Arts Community.

Below: "Knowledge", based on Alexander Phillips' 1915 essays published in "Towards a Theory of Knowledge."

Next: "Morals" (unpublished), based on a 1912 reprint edition of David Hume's 1777 work, "An Enquiry Concerning the Principles of Morals".

Pause Fest 2015

I had the amazing chance to drop by Pause Fest for Saturday's content. The event was held at Federation Square in Melbourne, with over 1,100 attendees registered. They totally smashed it!

Photos feature (not exhaustive): Tim K, Tim Büesing, Sarah Rowan Dahl, Alexander Chung, Simon Pemberton, Jess Scully, Mark Pullyblank, James Noble, and Alex Young.

Happy Valentine's Day.