Mixed Reality and Scientific Visualisation

This post is adapted from a talk I gave at the C3DIS conference in July 2017. 

In his project "Looking Into The Past", Jason E. Powell captured over 200 old photographs, each overlaid from roughly the perspective where it was originally taken. The project took the internet by storm in 2013, and partly inspired this post.

When interviewed by the BBC, Powell explained that he had set out to rephotograph some of the shots held by the Library of Congress. He would hold up a print to line up the angle; in a moment of inspiration, he took the shot with the print still in frame. Under one particular work, he wrote about how he felt about the now-vanished Leader Theatre, originally photographed in 1950:

This photograph breaks my heart.

Washington, DC used to have so much character. But we don't have anything like the set of buildings in this original photograph anymore. And, what's more, we never will again. And I missed it! This project is the only way I can ever see the way Washington, DC used to look, back when you could build four incredible buildings like these next to each other.

It makes me very sad. I want to step into this photograph and go join those folks in line. I want to eat at the Acropolis Cafe. I want to visit the Gayety. And yet, the only thing left from this scene is the red brick building on the left. You cannot convince me that Washington, DC is better off by having the modern blah building taking up this space instead.

Through the lens of the camera, the streets were rephotographed in their historical context. I found this notion of rephotography fascinating: Powell's work brought the past into the present, re-encoding physical space.

Powell's work is, by my broad definition, remixing reality. The idea extends from past-to-present, to present-to-present, to future-to-present. Essentially, time is irrelevant; what glues the two spaces together is the narrative that makes meaning.

In this post I would like to examine mixed reality in a broad sense. I am intentionally blurring the boundary of what makes up mixed reality in order to piece together a bigger picture. More generally, I would like to discuss the nagging feeling we get when we first step into an alternative reality that feels unnatural.

Consider a typical marketing image for Teomirn, an application that teaches people to play the piano: the emotions it portrays are oddly subdued. Kids I know would probably be jumping onto the piano and trying to play along, disrupting the experience in the process. The photographed family looks just a tad unemotional.

Cognitive dissonance happens when we encounter the unexpected: we take some time to adjust, recalibrating our expectations to match the experience. In Powell's work, overlaying the past onto the present creates a cognitive dissonance that provokes curiosity and understanding. In Teomirn, a virtual player is overlaid, but the affect is not quite the same.

Mixed reality applications are assistive by nature. Take Google Lens, for example: it marries information retrieval (reverse image search, tied into Google Maps) with the smartphone camera and display:

For the longest time, humans have created tools as extensions of ourselves; it's how we took over the planet. When we think of the term "assistive technologies", we tend to associate it with aiding disability. Perhaps in the distant, utopian future, assistive technologies will have become necessary enhancements.

But would mixed reality replace reality? I previously wrote about the obstacles inherent to virtual reality, and I think the question misses the most important point. We can't escape our reality. We're vendor-locked, constantly logged in, born into this environment. Humans are habitual creatures, and we might prefer the default reality after all.

Rather, I believe mixed reality should supplement reality. Other technologies, like the mobile browser, have made our lives a little bit easier without being stand-in replacements; mixed reality can do the same.

The strength of mixed reality is in connecting our bodily sensory inputs and outputs, and in some cases enhancing them with other tools, like machine learning. Mixed reality by itself is only a platform: an empty shell ready to be worn, a coat whose buttons follow logical rules but not hard constraints.

Mixed reality platforms come in many flavours. Here are some representative and popular examples; they are by no means exhaustive. The dance below is a performance by Marpi, with Hien Hyunh dancing:

In the performance, Marpi connects Hien's hand movements to two virtual geometries projected behind the stage. The projected lights add an energy without which the performance would not be the same; together, Marpi and Hien exaggerate what we perceive from the dance.

Another work by Hanley Weng marries Apple's CoreML, a machine learning library, with ARKit, an augmented reality framework, on the iPhone:

Weng feeds the camera image to a machine learning model that attempts to identify what it sees. Upon recognition, the object's name is labelled in proximity to it. The potential applications are broad, and very exciting.
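
Weng's demo itself is written in Swift with Core ML and ARKit, which I won't reproduce here, but the camera-to-classifier-to-label loop at its heart is easy to sketch. Here is a minimal browser analogue in TypeScript, using TensorFlow.js's MobileNet as a stand-in classifier; ARKit's world-anchored labels are reduced to a plain on-screen caption:

```typescript
// A browser analogue of the camera -> classifier -> label loop.
// TensorFlow.js's MobileNet stands in for the Core ML model.
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function main(): Promise<void> {
  // Pipe the device camera into a <video> element.
  const video = document.createElement('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const caption = document.createElement('div');
  document.body.append(video, caption);

  // Load a small, general-purpose image classifier.
  const model = await mobilenet.load();

  // Classify the current frame twice a second and show the top guess.
  setInterval(async () => {
    const [best] = await model.classify(video);
    if (best) {
      caption.textContent = `${best.className} (${(best.probability * 100).toFixed(0)}%)`;
    }
  }, 500);
}

main().catch(console.error);
```

The same shape holds on any platform: a camera frame goes in, a label and a confidence come out, and the augmented-reality layer decides where to pin it.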

In Fragments, a game made for the Microsoft HoloLens, the player acts as a detective trying to reconstruct where a murder has taken place. In the animation, we see the player inspecting a transit schedule on their desk: their office has been transformed into a crime scene, reappropriated for the context at play.

Apply that concept of reappropriating a space to scientific visualisation, and mixed reality platforms become immensely meaningful as communication tools.

I bend Wikipedia's definition of scientific visualisation for my own use: scientific visualisation aims to graphically illustrate scientific data, enabling scientists, and non-scientists, to better understand and glean insight from their data.

In other words, mixed reality platforms can act as knowledge translation: a bridge to scientific insight that might normally be inaccessible. However, the change in frame of mind, in perspective, is difficult to measure. We have yet to see what specific value mixed reality will bring to the table, but now is the right time to explore it.

The above animation is a mockup I produced at the end of 2016. The map shows the Victoria-Tasmania domain in Australia, and the smoke is a crude signal of a major Tasmanian bushfire in January 2016. The smoke travelled all the way to Victoria over 10 days, and emergency calls came in complaining about the smoky air.

Fast-forward six months: working one day a week, I had integrated a range of historical data: wind vectors, smoke magnitudes across 16 elevations, satellite imagery, and ground observations from weather stations, along with playback controls. Thanks to the scientist Martin Cope, who provided the data.
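
To give a sense of how the playback side fits together, here is a sketch of stepping a time-indexed smoke field through a three.js point cloud. This is not the prototype's actual code; `GRID`, the `frames` layout (one array of per-cell magnitudes per time step), and the axis mapping are all illustrative assumptions:

```typescript
// Time-stepped playback over a gridded smoke field using three.js points.
import * as THREE from 'three';

const GRID = 64;        // horizontal cells per side (assumed)
const ELEVATIONS = 16;  // vertical levels, as in the prototype
const CELLS = GRID * GRID * ELEVATIONS;

// One Float32Array of per-cell smoke magnitudes per time step, loaded elsewhere.
declare const frames: Float32Array[];

// Static cell positions; playback only rewrites per-point colour.
const positions = new Float32Array(CELLS * 3);
let i = 0;
for (let z = 0; z < ELEVATIONS; z++) {
  for (let y = 0; y < GRID; y++) {
    for (let x = 0; x < GRID; x++) {
      positions[i++] = x;
      positions[i++] = z; // elevation maps to the vertical axis
      positions[i++] = y;
    }
  }
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const colors = new Float32Array(CELLS * 3);
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));

const scene = new THREE.Scene();
scene.add(new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ size: 0.5, vertexColors: true }),
));

// One playback tick (wired to the playback controls elsewhere):
// denser smoke renders as a brighter grey point.
let t = 0;
function step(): void {
  const frame = frames[t % frames.length];
  for (let c = 0; c < CELLS; c++) {
    colors[c * 3] = colors[c * 3 + 1] = colors[c * 3 + 2] = frame[c];
  }
  (geometry.getAttribute('color') as THREE.BufferAttribute).needsUpdate = true;
  t++;
}
```

Everything else in the prototype, wind vectors, satellite imagery, and station observations, layers onto the same scene and advances on the same clock.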

A version not too dissimilar to the above prototype was brought to the International Biomass Burning Initiative Workshop in Boulder, Colorado. People who tried the demo loved where it was going, and I was happy to hear their responses. It amounts to little more than a pat on my own back, of course, but it was certainly encouraging for a solo developer working on this one day a week.

What I discovered through personal experience was that mixed reality provides insight with mobility. It's one thing to display virtual content on a monitor; nothing comes more naturally than walking around the model and checking it out up close, from the side, and at an angle. Mixed reality not only re-encodes our spaces, but also provides an avenue of discovery that befits the purpose of scientific visualisation.

Let's talk about some of the unresolved challenges in mixed reality. It is important to recognise that we are still in the early stages of what's coming, and to identify the gaps.

  1. High-end mixed reality still comes at a prohibitive cost of $3,000 or above. A Microsoft HoloLens will easily set you back that much, and if you were to invest in a tethered headset instead, you would still need a powerful enough computer to run it. Oculus has announced a standalone headset for $200 in 2018, and I am confident that others will follow suit.
     
  2. Hardware design inherently constrains mixed reality platforms. One thing new developers overlook is the chipset and memory limitations of standalone (or, shall we say, console) headset development. For instance, the Microsoft HoloLens has only 640MB of VRAM, unlike most standard GPU cards, which come with 3-6GB of VRAM today. As with point 1, I expect this limitation to lift slowly, matching laptop specs in 3-5 years.
     
  3. Mixed reality interfaces have different affordances, and their own limitations. A lot of companies are hung up on haptic feedback, conjuring invisible forces that a virtual object could never exert. They are missing the actual problem: there is no physical object there in the first place. Instead, we saw the rise of virtual reality systems like the HTC Vive, which put controllers in our hands. Mixed reality will need to go beyond visual overlays and blend into our physical world with sensors rather than actuators; sensors are far more flexible in their use cases.

In summary, please allow me to indulge in another of Jason E. Powell's rephotographs, this time of Harper's Ferry, West Virginia.

Powell pulled no punches on the drastic change in the landscape over 150 years. He wrote under this photograph:

None of the trees were there in 1865, most of the gravestones seemed to still be there, although the graveyard was overrun, there’s a house that isn’t there any longer, but you can see the hint of the church steeple poking over the house in the original shot. That’s still there. The canal is completely dry and there’s a tunnel through the mountain. The bridge doesn’t exist anymore, either. All in all, I’d say there isn’t much left from 1865 Harper’s Ferry, honestly.

No mixed reality technology as sophisticated as today's existed back in the 1960s, when virtual reality research was in its infancy. The bulky trackers are gone, replaced by smaller and smaller components that are increasingly mass-produced.

Perhaps the standard mixed reality platform 50 years from now will be just as alien to us today, and today's devices will have vanished without a trace. We will see them only by remixing the past.

Reality Mix-up: Science Applications

This talk was originally given at the Melbourne Augmented Reality Meetup, hosted by CSIRO Data61. I wrote up the transcript while my memory was fresh.

If you haven't been to the Melbourne Museum lately, check out CSIRAC, Australia's first computer, built in 1949. A significant investment, it stayed in service for over 15 years, used punch cards, and could compute a thousand times the number of equations a single person could work through in a day.

Punch cards were data, but easily replaceable: someone could (with patience) spot a mistake, replace a card, and feed the deck back into the machine. Once the data was verified and correct, it could be replicated and posted to other people, who could then run similar calculations.

Computer terminals were the next leap. You could type on a physical keyboard, and the computer would give you a response. This kind of instantaneous interactivity changed the way scientists worked: what wasn't possible before could suddenly be done in a much shorter period of time.

Caves of Qud

Computer scientists and programmers weren't the only people who embraced computers. Storytellers, filmmakers, and musicians alike went on to learn programming, or collaborated with programmers. They created interactive experiences through digital media, and broke new ground.

Today we live in a world where photorealistic experiences render at interactive rates. Computing power continues to rise while becoming more affordable and more efficient. It's not just automation and interaction, but also the digitisation of processes and workflows.

Today we live in a world where motion capture technology that cost $100,000+ twenty years ago can be bought for home use at less than $2,000. Authoring and content creation tools like Unity are maturing, and many open source and commercial efforts are building on each other.

Improvisation became innovation. ReMoTe is one such synthesis: a remote worker's location is streamed back to an instructor, who can project stereoscopic overlays, such as their hands captured through a Kinect, back to the worker on site. It is cheaper to collaborate remotely than to fly someone over. It is like a Skype call, but in 3D, and right in front of you.


Zebedee is a hand-held LIDAR scanner (you'd still lug a trolley behind you) that can digitise an entire location as you simply walk around it. The spinning laser continuously records a point cloud, and the spring-mounted head gives it plenty of resolution and reach.

I want to explore what mixed reality can bring to the table for scientists and researchers. The work above was done by Eleanor McMurtry, one of my students last summer. Given the data, we can quickly tell where the front door and the halls are, and leave each other a note. 3D data is much more intuitive than a floor plan, and doesn't rely on existing knowledge of the place.


My colleagues Matt Adcock and Stuart Anderson set up this insect photo station at Data61. They mounted the sample on a spinning disc and automated the whole photographing process. The result was 100+ photos of a single insect, used to reconstruct it in 3D and preserve the fragile specimen for future use.

Many insects can now be viewed up close, from whatever angle is natural to the interested scientist, with no need to handle the specimen carefully. Better yet, the process of measuring limb lengths and body sizes could also be automated, saving even more time down the track.

CluckAR, developed by Choice Australia, is a consumer-facing mobile application. It recognises all the Australian egg brands found at supermarkets, and shows you how free-range the chickens really are.

We are still exploring what mixed reality can bring to research, and we've learned a few lessons along the way. I want to share some of those lessons with you, and tell you a bit more about where things are headed, especially for mixed reality on the web.

Mixed reality is a visualisation platform. We can bring existing workflows from the entertainment industry to data science and visual analytics. Real-time exploration of historical and new contexts is here, and the tools are becoming more accessible to developers and non-developers alike. For scientific visualisation, the added bonus is all the existing tooling we can bring into our research.

Mixed reality encourages physical exploration. There is something curious about being able to get up close to your dataset and look at it in ways you are naturally inclined to. It is sometimes difficult to get the angle you want with a mouse and keyboard, and nothing is more natural than your own body movement and your own eyes.

Mixed reality will be delivered across the web. Just as movies and music have moved to streaming platforms, 3D content will also benefit from streaming. For one, you no longer have to wait for a massive download to get things going: your browser can download the things you can see first, and fill in objects as they become available.
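
As a sketch of what that looks like in practice (the asset file names here are hypothetical), a three.js scene can kick off every download at once and attach each model the moment it arrives, instead of blocking on one monolithic bundle:

```typescript
// Progressive loading: request every asset up front and add each one to
// the scene as soon as it finishes downloading.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// Hypothetical assets, ordered roughly by what the viewer sees first.
const assets = ['terrain.glb', 'buildings.glb', 'smoke.glb'];

for (const url of assets) {
  loader.load(
    url,
    (gltf) => scene.add(gltf.scene),                      // appears when ready
    (xhr) => console.log(`${url}: ${xhr.loaded} bytes so far`),
    (err) => console.error(`failed to load ${url}`, err),
  );
}
```

The scene starts rendering immediately with whatever has landed, which is exactly the behaviour streaming video trained us to expect.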

After The Flood

There is much more to see in where the technology is headed. It is not likely that in five years we will all have contact lenses that deliver mixed reality magic. But the devices are maturing, tools are shared and built upon, and web services are increasingly offered for others to create on. Right now, web experiences like After The Flood are already good enough to deliver immersion.

We have people just as good as, if not better than, anywhere else, regardless of the obstacles we face in government, internet speeds, and hardware. No matter where you live, as long as you have access to the tools for developing mixed reality technology, you have the opportunity to shape the industry.

Device manufacturers are still pushing hard, at least in my experience, to win over early-adopting developers. Mixed reality will not replace our existing reality, but it will serve as an enhancement to our lives, the way Google Translate does with real-time text replacement. There is no app yet that we cannot live without, but many are quite handy in the right context.

I hope I have given you some glimpses of what we have worked on, where mixed reality is going in terms of existing technologies, and what may be coming in the future.