Digital Privacy: Ethical Practices in the Data Age

I had the privilege to host and moderate the digital privacy panel at Henley Club with Ivan Chua. Joining us on the panel were four privacy-conscious researchers, experts, and consultants: Dr Vanessa Teague, Ellen Broad, Gabor Szathmari, and Prof Dali Kaafar.

We covered many threads of digital privacy. Here are the key points I took away:

  1. We need to be transparent about how we anonymise, encrypt, and handle data. Publish the method ahead of time for critique.
     
  2. Data is a messy thing: everyone talks about it, but means something different by it. For example, open access versus closed or restricted access.
     
  3. Consumers should be able to request from companies the data held on them. For that to be useful, the data needs to come in small, digestible forms.
     
  4. Being transparent in data sharing / analysis practices also helps others to learn by example. You can also learn from public inspections.
     
  5. Ethical practice is a fluid thing. Everyone has a different reaction to the same way their data is shared.
     
  6. Pay attention to context: you can infer things about someone from small pieces of information, even when they haven't told you anything new about themselves.
     
  7. Differential privacy: the mathematics of maximising the accuracy of aggregate results while minimising the chance that individual records can be re-identified (see the sketch below).
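
As a toy illustration of that last point (my own simplified sketch, not something presented on the panel), here is the Laplace mechanism that underpins many differential privacy systems; the dataset and query are hypothetical:

```javascript
// Sketch of the Laplace mechanism: answer a counting query with enough noise
// that the presence or absence of any single record is hard to detect.

function laplaceNoise(scale) {
  // Inverse-CDF sampling from a Laplace(0, scale) distribution.
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function privateCount(records, predicate, epsilon) {
  const trueCount = records.filter(predicate).length;
  // A counting query has sensitivity 1: adding or removing one person changes
  // the count by at most 1, so the noise scale is 1 / epsilon.
  return trueCount + laplaceNoise(1 / epsilon);
}

// Smaller epsilon means more noise, and therefore stronger privacy.
const people = [{ age: 34 }, { age: 61 }, { age: 47 }];
console.log(privateCount(people, p => p.age > 40, 0.5));
```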

Got tips on digital privacy? Feel free to reach out.

Mixed Reality and Scientific Visualisation

This post is adapted from a talk I gave at the C3DIS conference in July 2017. 

In his work "Looking Into The Past", Jason E. Powell captured over 200 old photographs overlaid on roughly the perspective from which they were originally taken. His work took the internet by storm in 2013, and this post was partly inspired by it.

When interviewed by the BBC, Powell explained that he had set out to rephotograph some of the shots from the Library of Congress. He held up a photo to line up the angle, a spark of an idea came to his mind, and he took the shot while holding up the print. For one particular work, he wrote about how he felt about the now-missing 1950 Leader Theatre:

This photograph breaks my heart.

Washington, DC used to have so much character. But we don't have anything like the set of buildings in this original photograph anymore. And, what's more, we never will again. And I missed it! This project is the only way I can ever see the way Washington, DC used to look, back when you could build four incredible buildings like these next to each other.

It makes me very sad. I want to step into this photograph and go join those folks in line. I want to eat at the Acropolis Cafe. I want to visit the Gayety. And yet, the only thing left from this scene is the red brick building on the left. You cannot convince me that Washington, DC is better off by having the modern blah building taking up this space instead.

Through the lens of the camera, the streets were rephotographed in their historical context. I found this notion of rephotography fascinating: Powell's work brought the past into the present and re-encoded physical space.

Powell's work is, in my broad definition, remixing reality. This can be extended from past-to-present, to present-to-present, to future-to-present. Essentially, time is irrelevant; what glues the two spaces together is the narrative that makes meaning.

In this post I would like to examine mixed reality in a broad sense. I am intentionally blurring the boundary of what makes up mixed reality in order to piece together a bigger picture. More generally, I would like to discuss the nagging feeling we get when we first step into an alternative reality that feels unnatural.

From a typical marketing image of Teomirn, an application that teaches people to play the piano, we can dissect the atypical emotions it provokes. For instance, kids I know would probably jump onto the piano and try to play along, disrupting the experience in the process. This photo of a family is just a tad unemotional.

Cognitive dissonance happens when we encounter the unexpected. We take some time to adjust and recalibrate our expectations to match the experience. In Powell's work, overlaying the past onto the present creates a cognitive dissonance that provokes curiosity and understanding. For Teomirn, the virtual player is overlaid, but the affect is not quite the same.

Mixed reality applications are assistive by nature. Take Google Lens for example, it marries information retrieval (reverse image search on Google Maps) with the smartphone camera and display:

For the longest time, humans have learned how to create tools as an extension of themselves. It's how we took over the planet. When we think of the term "assistive technologies", we tend to associate it with aiding disability. Perhaps in the distant, utopian future, assistive technologies would become necessary enhancements. 

But would mixed reality replace reality? I previously wrote about the obstacles inherent in virtual reality, and I think the question misses the most important point. We can't escape our reality. We're vendor-locked, constantly logged in, and born into this environment. Humans are habitual creatures, and we might prefer the default reality after all.

Rather, I believe mixed reality should supplement reality. Just as other technologies—mobile browsers, for instance—have made our lives a little bit easier, they are not stand-in replacements.

The strength of mixed reality is in connecting our bodily sensory inputs and outputs, and in some cases enhancing them with other tools, like machine learning. Mixed reality by itself is only a platform: an empty shell ready to be worn, a coat whose buttons follow logical rules rather than hard constraints.

Mixed reality platforms come in many flavours. Here are some representative and popular examples; they are by no means exhaustive. The dance below was by Marpi, with Hien Hyunh dancing:

In their performance, Marpi connects Hien's hand movements with two virtual geometries projected behind. The projected lights create an additional energy, and without it, the performance is not the same. Marpi's visuals and Hien's movements together exaggerate what is perceived from the dance.

Another work by Hanley Weng marries Apple's CoreML, a machine learning library, with ARKit, an augmented reality framework, on the iPhone:

Weng feeds the camera stream to a machine learning model that attempts to identify what it sees. Upon recognition, the name is labelled in proximity to the object. The potential applications are broad and very exciting.
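
Weng's demo is written in Swift with CoreML and ARKit; as a rough browser-flavoured analogue (my own sketch, not his code), the same idea can be expressed with TensorFlow.js and its pre-trained MobileNet model:

```javascript
// A browser sketch of "point the camera at something and label what it sees",
// using TensorFlow.js MobileNet instead of CoreML/ARKit.
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function labelWhatTheCameraSees(video) {
  const model = await mobilenet.load();
  // classify() accepts a <video> element and returns [{ className, probability }, ...].
  const [best] = await model.classify(video);
  return best;
}

const video = document.querySelector('video');
navigator.mediaDevices.getUserMedia({ video: true }).then(async stream => {
  video.srcObject = stream;
  await video.play();
  const { className, probability } = await labelWhatTheCameraSees(video);
  console.log(`Looks like: ${className} (${(probability * 100).toFixed(0)}%)`);
});
```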

In Fragments, a game made for the Microsoft HoloLens, the player acts as a detective trying to figure out what happened at a murder scene. In the animation, we see the player inspecting a transit schedule on their desk. Their office has been transformed into a crime scene, reappropriated for the context at play.

Applying the concept of reappropriating a space to scientific visualisation, mixed reality platforms become immensely meaningful as a communication tool.

I bend Wikipedia's definition of scientific visualisation for my own use: scientific visualisation aims to graphically illustrate scientific data to enable scientists, and non-scientists, to better understand and glean insight from their data.

In other words, mixed reality platforms can act as knowledge translation: a bridge to scientific insight that might normally be inaccessible. However, the change in frame of mind and perspective is difficult to measure. We have yet to see what specific value mixed reality will bring to the table, but we're at the right time to explore it.

The above animation is a mockup I produced at the end of 2016. The map shows the Victoria-Tasmania domain in Australia, and the smoke is a crude rendering of a major January 2016 Tasmanian bushfire. The smoke drifted all the way to Victoria over 10 days, and emergency calls were logged from people complaining about the smoky air.

Fast-forward six months: working one day a week, I had integrated a range of historical data: wind vectors, smoke magnitudes across 16 elevations, satellite imagery, and ground observations from weather stations, along with playback controls. Thanks to the scientist Martin Cope, who provided me with the data.
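
The playback controls are conceptually simple. Here is a minimal sketch of the idea; the function and variable names are illustrative rather than the actual prototype code:

```javascript
// Step through hourly smoke-field time slices and hand each one to the renderer.
// `timeSlices` and `updateScene` stand in for the real data and draw call.
function startPlayback(timeSlices, updateScene, hoursPerSecond = 6) {
  let t = 0;
  let last = performance.now();
  function tick(now) {
    t = (t + ((now - last) / 1000) * hoursPerSecond) % timeSlices.length;
    last = now;
    updateScene(timeSlices[Math.floor(t)]); // draw the current time step
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}

// e.g. startPlayback(smokeFrames, frame => smokeLayer.setData(frame));
```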

A version not too dissimilar to the above prototype was brought to the International Biomass Burning Initiative Workshop in Boulder, Colorado. People who tried the demo loved where it was going, and I was happy to hear their responses. It's little more than a pat on the back, of course, but it was certainly encouraging for a solo developer working one day a week on this.

What I discovered through my personal experience was that mixed reality provides insight with mobility. It's one thing to display virtual content on a monitor, but nothing comes more naturally than walking around the model and checking it out up close, sideways, and at an angle. Mixed reality not only re-encodes our spaces, but also provides an avenue of discovery that befits the purpose of scientific visualisation.

Let's talk about some of the unresolved challenges in mixed reality. It is important to recognise that we are still in the early stages of what's coming, and to identify the gaps.

  1. High-end mixed reality still comes at a prohibitive cost of $3,000 or above. A Microsoft HoloLens will easily set you back that much, and if you invest in a tethered headset instead, you still need a powerful enough computer to run it. Oculus has announced a standalone headset for $200 in 2018, and I am confident that others will follow suit.
     
  2. Hardware design inherently constrains mixed reality platforms. One thing new developers overlook is the chipset and memory limitations of standalone (or, shall we say, console) headset development. For instance, the Microsoft HoloLens has only 640 MB of VRAM, unlike most standard GPU cards today, which come with 3-6 GB. As with point 1, I expect the limitation will slowly be lifted to match laptop specs in 3-5 years.
     
  3. Mixed reality interfaces have different affordances, and their own limitations. A lot of companies are hung up on haptic feedback, trying to simulate invisible forces that a virtual object cannot actually exert. They are missing the real problem: there is no physical object there in the first place. Instead, we saw the rise of virtual reality systems like the HTC Vive, which put controllers in our hands. Mixed reality will need to go beyond visual overlays and blend into our physical world with sensors rather than actuators, since sensors are far more flexible across use cases.

In summary, please allow me to indulge in another of Jason E. Powell's rephotographs, this time of Harper's Ferry.

Powell pulled no punches describing the drastic change in the landscape over 150 years. He wrote under this photograph:

None of the trees were there in 1865, most of the gravestones seemed to still be there, although the graveyard was overrun, there’s a house that isn’t there any longer, but you can see the hint of the church steeple poking over the house in the original shot. That’s still there. The canal is completely dry and there’s a tunnel through the mountain. The bridge doesn’t exist anymore, either. All in all, I’d say there isn’t much left from 1865 Harper’s Ferry, honestly.

No mixed reality technology as sophisticated as what we have today existed back in the 1960s, when virtual reality research was in its infancy. The bulky trackers are gone, replaced by ever smaller components that are increasingly mass produced.

Perhaps 50 years from now, the standard mixed reality platform will be so alien that today's devices will have disappeared without a trace. We will see them only by remixing the past.

Advice for first-time hackathon attendees

The following points are distilled from my experience in attending hackathons so far. I hope this will be useful to those who are new to the format.

  • What are some real problems you’d like to solve? For example, maybe public transport isn’t as good in some areas, or maybe the roads aren’t as predictable.
  • Who is your target community? Assuming you’re building a software project, these are the people you should keep in mind when designing.
  • Context? When and where do you see people using this?
  • Expertise? What kind of domain knowledge do you need? Can you get buy-ins from potential stakeholders? What about technical background?
  • Brainstorm wide and as much as you can. Many teams will arrive at the same idea initially, but with enough iterations and external input, you’ll come up with unique needs and solutions.
  • Focus on one function and implement it really well. The rest can be placeholders and don’t have to work with real-world data. The important part is to demonstrate the potential, and polishing takes a long time. So focus on one function.

Reality Mix-up: Science Applications

This talk was originally given at Melbourne Augmented Reality Meetup, hosted by CSIRO Data61. I wrote up the transcript while my mind was fresh.

If you haven't been to Melbourne Museum lately, check out CSIRAC, Australia's first computer, built in 1949. It was a significant investment that stayed in service for over 15 years; it used punch cards, and it could work through 1,000 times the number of calculations a single person could do in one day.

Punch cards were data, but easily replaceable: someone could (with patience) spot a mistake, replace a card, and feed it back into a machine. Once the data was verified and correct, it could be replicated and posted to other people, and they could run similar calculations.

Computer terminals were the next leap. You could type on a physical keyboard, and the computer would give you a response. This kind of instantaneous interactivity changed the way scientists work. What wasn't possible before suddenly became something that could be done in a much shorter period of time.

Above: Caves of Qud.

Computer scientists and programmers weren't the only people who embraced computers. Storytellers, filmmakers, and musicians alike went to learn programming, or collaborated with programmers. They created interactive experiences through digital media, and broke new ground.

Today we live in a world where photorealistic experiences render at interactive rates. Computing power continues to rise while becoming more affordable and more efficient. It's not just automation and interaction, but also the digitisation of processes and workflows.

Today we live in a world where motion capture technology from 20 years ago, which cost $100,000+, can be bought for less than $2,000 at home. Authoring and content creation tools like Unity are maturing, and there are many efforts in open source and commercial solutions building on each other. 

Improvisation became innovation. ReMoTe is one such synthesis: the remote worker's surroundings are streamed back to an instructor, who can project stereoscopic overlays, such as their hands captured through a Kinect, back to the worker on site. It is cheaper to collaborate remotely than to fly someone over. It's like a Skype call, but in 3D and right in front of you.

Zebedee is a hand-held LIDAR scanner (you'd still lug a trolley behind you) that can digitise an entire location simply by walking around it. The spinning laser continuously records a point cloud, and the spring-mounted head gives it plenty of resolution and reach.

I want to explore what mixed reality can bring to the table for scientists and researchers. The work above was done by Eleanor McMurtry, one of my students last summer. Given the data, we can quickly tell where the front door and the halls are, and leave each other notes. 3D data is much more intuitive than floor plans, and doesn't rely on existing knowledge of the place.
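
This is not how the Zebedee data was actually processed, but as a sketch of the general idea, a scanned point cloud can be displayed with three.js once it has been exported as a flat array of x, y, z positions:

```javascript
// Minimal three.js sketch: turn an array of scanned positions into a point cloud
// that you can orbit around and inspect.
import * as THREE from 'three';

function makePointCloud(positions /* Float32Array of x, y, z triples */) {
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  const material = new THREE.PointsMaterial({ size: 0.02, color: 0xffffff });
  return new THREE.Points(geometry, material);
}

const scene = new THREE.Scene();
scene.add(makePointCloud(new Float32Array([0, 0, 0, 1, 0.5, -2, -1, 2, 0.5])));
```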

My colleagues Matt Adcock and Stuart Anderson set up this insect photo station at Data61. They mounted the sample on a spinning disc and automated the whole photographing process. The result was 100+ photos of a single insect, used to reconstruct it in 3D and preserve the fragile sample for future use.

Many insects can now be viewed up close, at whatever angle feels natural to the interested scientist, without anyone having to worry about handling the sample. Better yet, measuring the length of the limbs and the size of the bodies could also be automated, saving even more time down the track.

CluckAR, developed by Choice Australia, is a consumer-facing mobile application. It recognises the egg cartons of Australian providers found at supermarkets, and it will show you how free range the chickens really are.

We are still exploring what mixed reality can bring for research, and we've learned a few lessons along the way. I wanted to share with you some of the lessons we learned, and tell you a bit more about where things are headed, especially for mixed reality on the web.

Mixed reality is a visualisation platform. We can bring existing workflows from the entertainment industry to data science and visual analytics. Real-time exploration of historic and new context is here, and the tools are becoming more accessible for modern developers and non-developers alike. For scientific visualisation, the added bonus is all the existing tools we can bring in to our research.

Mixed reality encourages physical exploration. There is something curious about being able to get up close to your dataset and look at it in the ways you are naturally inclined to look. It is sometimes difficult to get the angle you want with a mouse and keyboard combo, and nothing is more natural than your own body movement and your own eyes.

Mixed reality will be delivered across the web. Just as movies and music have already moved to streaming platforms, 3D content will also benefit from being streamed. For one, you no longer have to wait for a massive download to get things going: your browser downloads the things you can see first, and fills in objects as they become available.
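
As a sketch of that progressive-loading idea (the asset file names are placeholders, not a real project):

```javascript
// Start rendering immediately and add each model as its download completes,
// rather than blocking on one monolithic download.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

const assets = ['terrain.glb', 'buildings.glb', 'vegetation.glb']; // nearest content first
for (const url of assets) {
  loader.load(url, gltf => {
    scene.add(gltf.scene); // the world fills in progressively as objects arrive
  });
}
```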

There is much more to see in where the technology is headed. It is not likely that in 5 years we will all have contact lenses that deliver mixed reality magic. But the devices are maturing, tools are being shared and built upon, and web services are increasingly offered for others to create with. Right now, web demos like After The Flood show the technology is good enough to deliver immersive experiences.

We have people just as good as, if not better than, the average, regardless of the obstacles we face in government, internet speeds, and hardware. No matter where you live, as long as you have access to the tools to develop mixed reality technology, you have the opportunity to shape the industry.

Device manufacturers are still pushing hard, at least in my experience, to win over early-adopting developers. Mixed reality will not replace our existing reality, but it will serve as an enhancement to our lives, like Google Translate with its real-time text replacement. There's no app yet that we cannot live without, but many are quite handy in the right context.

I hope I have given you some glimpses of what we have worked on, where mixed reality is going in terms of existing technologies, and what may be coming in the future.

Guide to WebGL 2.0

View the presentation material here (feel free to skip signing in).

The next generation of computer graphics on the web, an open standard known as WebGL 2.0, has landed in Firefox, Chrome, and Opera, with other browsers fast approaching. This enables an additional layer of GPU-accelerated content. 

Our browsers are already GPU-accelerated; CSS3 Transform is one well-loved example. As designers, developers and creatives, we're empowered by the community that we work with, and the community creates tools to empower our work. So what does WebGL 2.0 mean for the web community?

In this talk, we will discuss the theory of modern design of the graphics rendering pipeline using WebGL as a framework, and go through example code to concretise the theory. Then, we will look at how ThreeJS uses WebGL 2.0, as well as new tools that might change the face of the 3D web for years to come.
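
In practice the first step is simple feature detection. Here is a minimal sketch of requesting a WebGL 2.0 context and falling back to WebGL 1.0:

```javascript
// Feature-detect WebGL 2.0 and fall back gracefully where it hasn't landed yet.
const canvas = document.createElement('canvas');
const gl2 = canvas.getContext('webgl2');
const gl = gl2 || canvas.getContext('webgl');

if (gl2) {
  // WebGL 2.0 brings features such as 3D textures, transform feedback,
  // and multiple render targets to the core API.
  console.log('Using WebGL 2.0:', gl2.getParameter(gl2.VERSION));
} else if (gl) {
  console.log('Falling back to WebGL 1.0:', gl.getParameter(gl.VERSION));
} else {
  console.log('No WebGL support in this browser.');
}
```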

This presentation was originally given at Computer Graphics on the Web in Melbourne. Video kindly provided by Common Code.

Maps Visualisation, Web Accessibility, and the Future of the 3D Web

I originally wrote this email response to an internal query, but I thought I'd post it publicly so that more people may benefit. Please leave a comment and share your thoughts with me. The article has been modified with minor revisions for a public audience.

Maps Visualisation

Check out the Mapzen Vector Tile Service, which can be queried as GeoJSON and transformed into other vector formats that OpenLayers, D3.js, and others can understand. There are also more and more vector tile clients, for instance Mapbox, Mapzen, Terria, and Cesium, and Leaflet has a plugin.

Like the recently announced deck.gl v4.0 release, these mapping services are native to the web and accelerated by WebGL, which opens up a lot of potential for computation-heavy visualisation.
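
As a rough sketch of what querying a vector tile as GeoJSON can look like (the tile URL shape and API key are illustrative; check the provider's documentation), rendered here with Leaflet:

```javascript
// Fetch one vector tile as GeoJSON and draw its layers with Leaflet (global `L`).
const map = L.map('map').setView([-37.81, 144.96], 14); // Melbourne

const tileUrl =
  'https://tile.mapzen.com/mapzen/vector/v1/all/14/14890/10120.json?api_key=YOUR_KEY';

fetch(tileUrl)
  .then(response => response.json())
  .then(tile => {
    // Each named layer in the tile (roads, buildings, water, ...) is a GeoJSON
    // FeatureCollection that Leaflet can render directly.
    for (const layerName of Object.keys(tile)) {
      L.geoJSON(tile[layerName]).addTo(map);
    }
  });
```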

Accessibility for the Web

There were some borderline interaction papers, but the more interesting outcomes came from the W4A Accessibility Hack. The delegate winner, Kieran Mesquita, created a Docker container for TAO (Sam would be proud). Normally it takes 1-2 hours of setup just to get TAO working, but now you can do it in 15 minutes.

TAO is an open source program that can assess WCAG accessibility conformance, and since all government agency websites have strict accessibility requirements, this will be really handy down the line.

Software Engineering

Terry Crowley wrote an excellent article called Education of a Programmer, which refers to Joel Spolsky’s “Law of Leaky Abstractions” and discusses managing software complexity. Both are long reads but totally worth the time.

Frederick Brooks Jr. wrote a journal article titled No Silver Bullet—Essence and Accidents of Software Engineering in 1986. He listed the essences (the difficulties inherent in the nature of software) and the accidents (the difficulties of producing it that are not inherent), and talked about how different solutions may each only solve a small part of the general problem, and he defines them well. The thirty-year-old article stands up well against the test of time, I think.

The Future of 3D/VR on the Web

Here are the factors to consider:

  • The web is fundamentally a publishing platform. Its greatest strength is delivering content across the globe (and eventually in space, no doubt). For 3D/VR, the ability to stream content is continuing to improve.
     
  • WebGL is the principal graphics rendering API for the web. Version 2.0 landed last month in three major desktop browsers.
     
  • WebVR is being standardised by the W3C (360 videos) and Khronos (OpenXR). Add to that the open source effort (OpenVR) to make peripherals available to any desktop machine, and I think we’ll see many more devices become available as graphics chips get more energy-efficient.
     
  • The leading company in this effort is AltspaceVR, who have seen 1,000+ concurrent users celebrating sports events and presidential campaigns. What is interesting about their technology is that it is built on A-Frame, an HTML-extension library, which means any of their virtual objects can be annotated to meet accessibility requirements. I’m super excited to see what they do to translate visuals to text, sound, or any other sensory format.
     
  • It is important to remember that Mixed Reality will continue to grow. This can be a virtual overlay on a transparent display, or composited over an opaque video feed.
     
  • We will see a few major paradigm shifts in the coming years:
  1. Smartphones will continue to overtake desktops and laptops, so development efforts will continue to spill over to mobile.
     
  2. Mouse and keyboard? Touch screens? Accelerometers? Remote controllers? There will be a standard interaction paradigm across these different inputs.
     
  3. Authoring 3D content will no longer be constrained to expert users of Maya, Blender, and the like, although these tools will continue to be the industry standard for high-quality content. TiltBrush, Medium, and SculptGL are the early drivers of low-barrier content creation.

What does it mean to browse a 3D/VR website vs a normal flatscreen website? Does that mean people will start strapping phones to their faces? I doubt it. 

People will start using phones in ways we never designed them for, because there will be incentives to repurpose a WebVR/WebAR-enabled phone for specific tasks. Where there is a need to be filled, be it attending a social event, caring for animal welfare, or sharing digital worlds, people have already begun.

The next big adjacent idea, to borrow the term from James McQuivey, will be contributing to the 3D Web ecosystem while being fit for purpose. It could be tool creation, enhancing the rendering pipeline, or a publishing platform. Whatever it is, we’re at the right moment for it.

WWW 2017

I was invited to be on a mixed reality panel at WWW2017 with Mark Pesce, Viveka Weiley, Stuart Anderson, and Kate Raynes-Goldie. The panel was well received and had a good turnout, following Mark Pesce’s keynote on Mixed Reality Service that morning.

As part of the conference, I attended the Trust Factory, and learned a lot about governance, policy, and research on trust and security on the web. The most notable item was probably the W3C Verifiable Claims work, a third-party certifiable process that proves some piece of information to be genuine. I also learned about distributed ledgers. By the way, did you know that Amazon checkouts do not go through a centralised choke point, so they have to deal with possible backorders?

I also attended a workshop held by Rossano Schifanella (who recommended I try out Mapzen; good idea), Bart Thomee, and David Shamma. (They are all ex-Yahoo! engineers who went their separate ways.) They delivered a crash course on the geography/cartography literature, geospatial analytics, and visualisation. I also met Martin Tomko at the workshop, who is doing very interesting work at Unimelb, and I hope to catch up over coffee with him one day.

For the rest of the conference I saw various industry and research talks. Some interesting ones covered setting up an internet service for people on Mars (fun fact: it takes 8–48 minutes to relay) and how Gmail uses inbox history to predict new emails and filter spam. You can see the entire WWW 2017 proceedings here.

Last but not least, the day before our panel, Nature published a climate forcing paper discussing a strong correlation between solar intensity and CO2 concentration in our atmosphere. We decided to run a 3-hour hackathon to produce a visualisation of the paper and explain the effect visually, all done in WebGL. I hope you like it.

Direction 2016

Above: Jacob Bijani & Pasquale D'Silva dressed up as Dracula, talking about animation and game development at Direction 2016.

This was my 7th year attending Web Directions, now known as Direction: a two-day conference (plus masterclasses) with inspiring design talks and passionate attendees. It's my personal favourite, and it's how my career in Sydney was launched.

As I did in the past, here is a series of speaker and MC portraits I took during the conference. Missed it? Don't worry, Ben Buchanan compiled a large list of notes for you. The ending keynote is also available as a post-speech transcript with slides, arranged by the amazing Maciej Ceglowski.

Special thanks to all the attendees, speakers, volunteers, venue staff, and conference organisers who made this event possible and awe-inspiring this year.

CityLab Melbourne Design Sprint

I was invited to be part of CityLab's design sprint in October 2016 to brainstorm and prototype a community engagement software solution with a group of 10.

Bringing together City of Melbourne employees with experts from creative agencies and the technology sector, we're moving at pace—from concept to prototype in 3.5 days—to test and trial new ideas and city services for the community.

Interaction Design for Education Designers


Listen to the talk or download the MP3:

You can apply design principles in your work, too.

I gave a talk at the Learning Analytics and Education Data Seminar, University of New South Wales. Half or more of the audience were educators and instructional designers in academia. While this talk was structured for them, the principles are applicable to other fields.

In this talk, "Design as Invitation to Interaction", I presented three barriers to designed objects, and used case studies as well as examples of good and bad design to show how they can be over come. They are:

  1. Mistimed, misplaced, misused
  2. It's dangerous to go alone!
  3. Technology is creepy

During the talk I referenced several video productions and documents I was involved in. You can follow along:

You can find a copy of my slides (PDF) below.

Moving Towards D3 v4

D3.js is a popular data visualisation framework for JavaScript and the web. In July 2016, Mike Bostock released the new version, v4. To celebrate the new release, I gave this talk at the CSIRO data science webinar. The content covers: what is D3? Why do people use data visualisation? What is it for? I show examples of recent work and point to a couple of resources. Then D3's enter and exit pattern is briefly outlined (a minimal sketch follows below). I also talk about the modularisation of D3 v4, as well as a couple of libraries that go with it.
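
For readers who have not seen it before, here is a minimal sketch of the enter/exit (general update) pattern as it looks in D3 v4, using the modular d3-selection package:

```javascript
// D3 v4 general update pattern: enter new elements, merge with updates, remove exits.
import { select } from 'd3-selection'; // v4 is modular: import only what you need

function render(data) {
  const items = select('ul').selectAll('li').data(data, d => d);

  items.enter().append('li') // new data points get new <li> elements
    .merge(items)            // v4 requires an explicit merge of enter + update
    .text(d => d);

  items.exit().remove();     // elements whose data has gone away are removed
}

render(['ants', 'bees', 'wasps']);
render(['bees', 'dragonflies']); // 'ants' and 'wasps' exit, 'dragonflies' enters
```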