Technologies designed and built that underpin ACMI’s new experiences

Simon Loffler, ACMI, Australia, Seb Chan, ACMI, Australia

Abstract

ACMI, the Australian Centre for the Moving Image, is the most visited museum of the moving image in the world. In 2019, we closed our doors to reshape our Federation Square building in order to become more public-facing and to house a major new permanent exhibition, The Story of the Moving Image. As you might imagine, we have a lot of moving images to show, and a lot of fascinating objects to tell people about, all of which can be overwhelming to some audiences. That's why we designed and built a system called The Lens. Every visitor to the museum can pick up a Lens, which they use to collect objects and media to watch and explore in their own time. The Lens depends on a network of hundreds of Raspberry Pi devices to display media and interact with visitors, all running open-sourced Python code. All these devices need to be robust and maintainable in order to survive the 10-year lifespan of the exhibition. In this paper, we'll give you a tour of the technology at ACMI, including our Internet-of-Things fleet and management tools, and XOS, the eXperience Operating System, which provides content and configuration to the devices.

Keywords: internet of things, python, raspberry pis, exhibition technology, operating system

Abstract 

Many museum collections are hidden in private physical and digital archives, unable to be accessed by the public. Over a five-year renewal project, ACMI set out to open its own collection online in a creative way, allowing visitors to collect not only an object on display in the physical public gallery, but also other objects from the ACMI collection that connect to it in a significant way. 

This paper will discuss the technical decisions we made to remove many of the barriers for visitors to use ACMI’s new Lens, and the analytics we gained from the first million object collections. 

Keywords: nfc, infrastructure, api, exhibition, devices, collection, user experience, digital transformation 

Introduction 

ACMI’s technical focus for the renewal project was to remove as many barriers as possible for our visitors to collect objects from the ACMI collection and take them home, so that they could explore them in depth from the comfort of their own homes. 

To achieve this ambitious goal, we had to shift most of the technology requirements off the visitor and onto the physical museum infrastructure. This meant making early architectural decisions around a museum devices strategy, and choosing a fast, flexible, and reliable remote development and deployment strategy to match. 

In this paper we outline the technology choices we made, how the pieces fit together, how we develop, deploy, update, and monitor all those pieces, and the analytics we collected from the first million objects our visitors collected.  

We close with some lessons we learnt along the way, and what is next on our technology roadmap. 

Background 

Smart hires by our majority female-run, state-funded museum enabled the swift technological transformation of ACMI. Our ambitious, compassionate CEO Katrina Sedgwick’s vision to put First Nations people at the heart of the renewal attracted equally progressive, compassionate, and driven project staff members, enabling creative and often heated decision-making to push us forward quickly and realise our shared goals. 

We placed the visitor first in every technical decision we made, verifying each step of our prototyping process with a group of museum visitors. 

While this process resulted in creating and re-creating a substantial part of early prototype software and hardware, it led to a stable foundation to build from as the project progressed. 

Throughout the technical and architectural co-design process we opted to follow industry best practices for our management style, software, hardware, and infrastructure. This added considerable time early in the project, when up to five developers were working on the codebase creating good structures and automated rules, but it enabled our products to be maintained by the two core ACMI software developers after the renewal project finished. 

Requirements 

Applying an agile management and development strategy to a creative technical solution meant that requirements often changed early in the project. This instability was felt across the organisation, with our management reassuring us to accept and live with the process as best we could until we had completed enough design cycles to determine the feature set to build for opening. 

Our base requirements included building a museum experience operating system that could enable remote control of a swarm of museum devices. This operating system would expose APIs for use by these devices and other interconnected systems. It would import data from external sources and APIs, normalising that data for use in our museum and on our public website. 

We wanted to use a single programming language (Python) across all projects and adopt the use of open-source software and hardware where possible. 

We aimed to purchase all technology from off-the-shelf vendors, so that replacement parts would be readily available across the 10-year lifecycle of the capital works renewal project. 

Design goals 

The design goals of our Experience, Product & Digital team across the renewal project were: 

  • Visitors have zero technology requirements. This removes many barriers to entry and aids the accessibility of our museum for all visitors. 
  • Open the ACMI collection to the public. Allowing our visitors to explore our entire collection, both during their visit, and at home after their visit. 
  • Automated deployments. Allowing our developers to focus on optimising our software for the visitor experience. 
  • Open-source our software and hardware. Allowing other museums, galleries, and the public to share our technical solutions and build products with us. 
  • Enable future data-led business decisions. The analytics from our anonymous data should enable the organisation to make better business decisions. 

Architecture 

The internet provided us with the inspiration for most of our architectural decisions. At the heart of our infrastructure sits a monolithic Django web-application, our experience operating system, XOS. 

XOS imports data from several sources, normalising it for our distributed network of museum devices. 

Museum devices that need to present information to visitors run their own small webservers and internet browsers. Interactions that our visitors have with these devices are sent back to XOS via its APIs. 
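
To make this concrete, a museum device talks to XOS roughly like the minimal sketch below: it fetches its configuration from an XOS API and posts visitor interactions back. The endpoint paths, field names, and URL are illustrative assumptions rather than our exact production API.

    import requests

    # Hypothetical XOS base URL, for illustration only.
    XOS_API = "https://xos.example.org/api"

    def fetch_label(label_id):
        """Fetch the digital label a device should display."""
        response = requests.get(f"{XOS_API}/labels/{label_id}/", timeout=5)
        response.raise_for_status()
        return response.json()

    def send_tap(lens_short_code, lens_reader_id):
        """Report a visitor's Lens tap back to XOS."""
        payload = {"lens_short_code": lens_short_code, "lens_reader_id": lens_reader_id}
        response = requests.post(f"{XOS_API}/taps/", json=payload, timeout=5)
        response.raise_for_status()
        return response.json()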

ACMI infrastructure diagram showing how XOS interacts with our technology.

All this infrastructure makes it possible for our visitors to interact with a single device, the Lens. 

The ACMI Lens tapping on a Lens Reader embedded in a museum label.

The Lens 

What is it?

Technically, it’s a recycled cardboard disk in the shape of a View-Master reel, with an NFC (Near Field Communication) NTAG213 chip and antenna sticker on the back of it. 

The chip gets powered up by the NFC radio waves produced by our Lens Readers, so it doesn’t need a battery of its own. The radio waves make the Lens come to life up to about 2cm away from the surface of the readers, within an area of roughly 4cm diameter. 

This means the entire process to collect an ACMI object, from the visitor’s perspective, is: 

  • pick up a Lens 
  • tap the glowing light of a Lens Reader 

It’s simple to understand and convey to large crowds of people, and it builds on actions we have all previously learnt from tapping our metro or credit cards. 

Why not an app? 

During our prototyping and development we explored using a mobile device to act as the Lens, purchasing NFC reader technology that would future-proof us to build that ability if it were attractive enough to visitors. 

We explored not only Android/iOS applications that reacted to being close to our NFC readers, but also adding a virtual Lens to Google/Apple Wallets, as you would an airline ticket or membership card. 

From initial research and early visitor testing it became clear how many extra barriers there were to getting a visitor to use their own mobile device. There were technical barriers, like a lack of storage space, battery life, or data allowance to even get the app onto the device, as well as social barriers such as a preference for anonymity, and a desire to focus on being present with both friends and the museum experience itself. 

It was also interesting to consider the role that a tangible artefact plays in strengthening the memory of a museum visit. (Ciolfi/McLoughlin, 2011) (1) 

Lens early designs and production

The Lens has many precedents in the museum field, and the concept of an NFC or RFID smart card is not new. The Lens emerged from very early sessions with ACMI staff, Tellart, a Dutch/US design firm, and David Hebblethwaite at Art of Fact, a NZ-based museum masterplanner, way back at the end of 2015. Back then the idea was to use a “story card”, integrated into a museum ticket, which would allow visitors to collect, remix, and watch content both in the museum and later at home.  

Drawing on CXO Seb Chan’s previous experience leading the Cooper Hewitt, Smithsonian Design Museum’s redevelopment (2011-2015) and the development of the Pen in partnership with Local Projects and Sistelnetworks, the early work on what was to become The Lens was always predicated on a device that functioned as a take-home souvenir. As Chan & Cope point out, this was the original intention at Cooper Hewitt, but a mix of cost and building-risk reasons took the Pen down a visitor-loan path instead, in which visitors borrowed an “active” Pen (NFC reader inside the pen) to scan NFC tags attached to museum labels. (Chan/Cope, 2015) (2) 

Throughout 2016 and 2017, ACMI continued to explore different concepts for The Lens, and in ACMI’s 2018 touring exhibition Wonderland we worked with Australian interactive design firm Sandpit to integrate NFC tags into a paper map that was a key part of the interactive experiences in the exhibition. The engagement rate of the map in the Wonderland post-visit website was enough to convince ACMI to follow that rabbit hole with the Lens. (Chan/Paterson, 2019) (3) Wonderland subsequently toured to Te Papa in New Zealand and ArtScience Museum in Singapore, with other global venues potentially to follow post-COVID. 

The appointment of Second Story as exhibition and experience designers for the ACMI Renewal accelerated development of the Lens, resulting in a disc form factor and the exploration of different transparent and translucent forms. In the production phase of the project, ACMI worked with Swinburne University’s Centre for Design Innovation to take the designs of Second Story and re-engineer them to be recyclable and reduce their environmental impact as a mass-produced, take-home, free device. 

An assembled ACMI Lens Reader.

Lens Readers 

What are they?

Our open-source Lens Readers act as distributed nodes across the museum that power up the Lens, and also transfer the Lens’ anonymised short code to our experience operating system, XOS. 

While designing and prototyping the hardware, our focus was primarily on speed and ease of use from a visitor’s perspective, and secondarily on reliability and replaceability from our ACMI AV team’s perspective. 

The three main components of our Lens Readers are: 

  • NFC reader 
  • RGB LED lights for visitor feedback 
  • Micro-computer to send the Tap data across the network 

For each of these components we had a preference for off-the-shelf hardware that would plug into a range of micro-computers of both ARM and x86 architectures. They also had to be programmable by open-source software so that we could freely modify the code to optimise it for our visitor’s experience. 

This choice made replacement as simple as unscrewing, unplugging, and replacing a single component, a task that could be performed by the majority of our ACMI staff, rather than having to send the entire device away to be de-soldered and repaired by a highly technical team. 

Our final hardware selection was: 

Our final software selection was: 

  • BalenaOS for remote operating system updates and software deployment (both ARM and x86) 
  • Python and CircuitPython for Lens reading and LED control 
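
To illustrate how those two choices fit together, the core loop of a Lens Reader looks something like the sketch below. It assumes a PN532-style NFC breakout on I2C and NeoPixel-style LEDs driven by the Adafruit CircuitPython libraries; the exact parts, pins, and tap handling in our production readers differ.

    import time
    import board
    import busio
    import neopixel
    from adafruit_pn532.i2c import PN532_I2C

    i2c = busio.I2C(board.SCL, board.SDA)
    pn532 = PN532_I2C(i2c)
    pn532.SAM_configuration()  # put the chip into card-reading mode

    pixels = neopixel.NeoPixel(board.D18, 12, brightness=0.3)
    GREEN, OFF = (0, 255, 0), (0, 0, 0)

    while True:
        uid = pn532.read_passive_target(timeout=0.5)  # bytes of the tag UID, or None
        if uid:
            short_code = uid.hex()
            print(f"Lens tapped: {short_code}")  # in production this tap is sent to XOS
            pixels.fill(GREEN)  # flash the LEDs to confirm the tap
            time.sleep(0.5)
            pixels.fill(OFF)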

Device strategy 

Why Raspberry Pis?

By removing as many technical barriers as possible for visitors, we pushed the technical problems onto our museum infrastructure. This created a requirement for hundreds of small computers running throughout the museum. 

Not only did this create a big cost problem, it also placed a huge emphasis on the devices being easy to develop on, maintain, purchase, and replace. 

The previous iteration of the museum contained a lot of BrightSign video players. We were keen to develop on their platform, but quickly found the development experience a big barrier: non-standard SSH and the not-quite-JavaScript BrightScript language. 

Our internal technology team were hesitant about our choice to experiment widely with Raspberry Pis because of their (somewhat) unfair labelling as just a hobby device that corrupts SD cards if not carefully shut down. 

This led us to build a demonstration desk in the initial stages of development that had prototype code running on samples of each of the devices we were planning to deploy to the gallery. Fortnightly staff demonstration presentations helped uncover many of the bugs in our code and allowed us to test these devices for their uptime and failure rates. 

We were pleasantly surprised when we benchmarked Raspberry Pis against an Intel NUC and a Dell Optiplex, finding that they performed perfectly well for all tasks apart from 4K video playback and some high-frame-rate interaction design. 

Our failure rates were low – two devices across two years, one of which may have been from user error, shorting out the pins against a metal frame. 

One of the biggest development time sinks was getting a window manager to open gracefully, without toolbars or other desktop popups, inside a Docker container on our chosen cloud deployment operating system, alongside VLC (our media player framework) and Chromium (our digital label display browser). Once solved, this allowed the same codebase to deploy across both Raspberry Pi ARM devices and larger x86 devices. 

In the middle of our prototyping phase, the Raspberry Pi 4 was released, which meant we got a free performance upgrade, as well as the ability to test how future-proofed our codebase was against new hardware. 

We only had to change one line in our Dockerfile to point at a 64-bit image, and our code ran as expected, confirming that the Raspberry Pi was a compelling choice. 

In the end we were able to power the following devices with Raspberry Pis: 

How do we deploy them?

With over 350 devices installed in some difficult to access places, our focus was on ease of remote deployment and updating. This focus led us to a cloud deployment operating system by Balena. 

Balena comes in two flavours: BalenaCloud, with a nice UI layer, or the self-hosted open-source option, OpenBalena. Because we were developing several distinct products and would have a reduced software development team after opening, we opted for BalenaCloud. 

This allowed us to deploy/replace a device with the following steps: 

  • Burn BalenaOS to an SD card (ARM) or USB stick (x86) 
  • Insert the SD card or USB stick 
  • Plug in ethernet and power on the device 

There is no post-install setup on the device itself. It automatically downloads the Docker containers with the software it needs to run and starts running with default settings. 

The device shows up in XOS via the Balena Cloud API. The last step for a curator or AV team member is to set the device configuration (museum object to be collected when someone taps on a Lens Reader, or Playlist of videos that a Media Player plays).  

This is done in XOS and sent via the Balena Cloud API as an environment variable to the device. The device automatically restarts the Docker containers that use that environment variable, and in about 10 seconds it is up and running again. 
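
In practice that means a device container simply reads its settings from the environment at startup, along the lines of the sketch below (the variable names are illustrative):

    import os

    # Configuration arrives as environment variables pushed from XOS via the
    # Balena Cloud API; when a variable changes, Balena restarts the container
    # and the new value is picked up here at startup.
    XOS_API_ENDPOINT = os.environ.get("XOS_API_ENDPOINT", "https://xos.example.org/api/")
    PLAYLIST_ID = os.environ.get("PLAYLIST_ID", "1")  # playlist for a media player
    LABEL_ID = os.environ.get("LABEL_ID")             # object collected by a Lens Reader

    def main():
        print(f"Starting with playlist {PLAYLIST_ID} and label {LABEL_ID}")
        # ...fetch the playlist or label from XOS_API_ENDPOINT and start the device loop...

    if __name__ == "__main__":
        main()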

How do we update the device’s software?

When our creative technologists want to build a new feature and deploy the new software to our Experimental, Staging, and Production devices, this is the process they follow: 

  • Clone the project from our GitHub repository 
  • Make a branch with the code additions 
  • Linting and testing runs via GitHub Actions 
  • When the Action passes, it pushes the branch/single Docker container to the Balena Experimental application 
  • A Pull Request is created for others to test that code by moving a device of their own to that Experimental application in the Balena Cloud UI 
  • Once that branch is merged to main, the GitHub Action builds the multi-container application (with monitoring) and pushes it to the Staging application 
  • The last step to roll out the update to the Production devices in the gallery is to manually run: balena push application_name 

It is a similar workflow to pushing updates to XOS and our public website, so the barrier to making changes is quite low. The Balena build and deploy process time depends on the size and complexity of our Docker applications, but it ranges from 2-5 minutes. 

How do we monitor them?

To monitor our few hundred devices, we use Nodel and Prometheus. 

  • Nodel – real-time health checks, power and rebooting controls 
  • Prometheus – historical data, including temperature, memory/CPU use, and playback states of media players 

The ACMI Nodel dashboard, showing the health of our museum devices.

The ACMI Grafana dashboard, showing time-series data from our museum devices.

Our AV and Visitor Experience (VX) staff use Nodel to monitor and fix problems as they arise. 

Our software developers use Prometheus to help debug code and guide our software to being more efficient. 

The link between these two services is XOS, which exposes an API with all the device data, including tags for the type of monitoring each device needs, e.g. climate, memory/CPU use, or the playback state of a media player. 

Nodel consumes this API and monitors device health via pings, allowing devices to be powered on and off via a calendar service, or manually rebooted if necessary. 

Prometheus consumes this API to build up a configuration of known devices. The Prometheus data is then graphed using Grafana, allowing us to produce time-based visualisations of exactly what state these devices are in across the museum. 
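
On the device side, exposing those time-series metrics can be as small as the sketch below, using the Python prometheus_client library; the metric names and scrape port are examples, not our exact configuration.

    import time
    from prometheus_client import Gauge, start_http_server

    cpu_temperature = Gauge("device_cpu_temperature_celsius", "SoC temperature")

    def read_cpu_temperature():
        """Read the Raspberry Pi SoC temperature from sysfs."""
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            return int(f.read().strip()) / 1000.0

    if __name__ == "__main__":
        start_http_server(9090)  # Prometheus scrapes metrics from this port
        while True:
            cpu_temperature.set(read_cpu_temperature())
            time.sleep(15)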

Prometheus Alertmanager allows us to send alerts to Slack based on limits we set for our data. These alerts help us correlate any errors in our code with device health charts to help debug the causes of failures and prevent them in the future. 

XOS and our devices all send their exception logs to Sentry, a cloud service that collects failure events, matches them with GitHub releases, and groups them into errors that can be linked to Trello cards or GitHub issues. 

Our failure workflow looks like this: 

  • VX staff notice a failure and try rebooting the device via Nodel, reporting on Slack if the failure continues 
  • Developers get a Slack notification of the device failure from Prometheus 
  • Developers check Sentry & Grafana for patterns leading to the failure 
  • Trello cards are created with user stories of the failure including information from our VX staff 
  • Cards turn into a GitHub issue if software development is needed 
  • Code passes our review process and acceptance testing on our staging infrastructure 
  • A new release is pushed, and relevant VX staff are notified on Slack to test the fix 

We also specify that any outside collaborators integrate their devices with Prometheus and Sentry. This allows us to see the entire data flow, from Vernon to XOS to devices to Lens taps, which has helped immensely in debugging exactly where the bugs have existed. 

A screenshot of the homepage of XOS, ACMI's museum experience operating system.

XOS 

What is it?

XOS is ACMI’s experience operating system, whose primary focus is to enable: 

Curators to: 

  • Import and edit digital object labels and descriptions from our collections database, and external services like TMDB, IGDB 
  • Set Lens Readers to collect those digital object labels 
  • Set the images and descriptions of interactive digital labels 
  • Set playlists of videos to be played by media players 
  • Set the digital label to be collected when a specific video in a playlist is being played 
  • Build constellations of museum objects for visitors to explore at the museum and at home 
  • View analytics data to help guide future exhibition decisions. 

Rights and lending team to: 

  • Record information about ACMI’s permission to exhibit borrowed content and objects 
  • Export up-to-date rights reports 

AV team members to: 

  • Upload and transcode videos into the right format for playing in the museum 
  • Add subtitles to videos 
  • Set device configurations, such as custom media player display outputs 
  • Update device configurations for failed devices 
  • Monitor device health 

VX staff to: 

  • Debug broken Lenses 

Management/executive members to: 

  • Watch analytics from the museum appear in real-time 
  • Print daily reports of dwell times, most collected objects and most created visitor experiences 

The XOS museum devices admin page.

Technical specifications

XOS is written in Python so we can use a single programming language across all our repositories. It makes use of the following frameworks and services: 

  • Django – for the admin interface 
  • Django Rest Framework – for XOS APIs 
  • RabbitMQ – for sending and receiving live data messages, e.g. media player and digital label syncing 
  • Celery – for background tasks like data imports, transcoding, and configuration syncing from Balena 
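
As an example, the Balena configuration sync mentioned above runs as a periodic Celery task along the lines of the sketch below; the endpoint handling, setting name, and record updating are simplified assumptions rather than our production code.

    import requests
    from celery import shared_task
    from django.conf import settings

    @shared_task
    def sync_balena_devices():
        """Fetch the device list from the Balena Cloud API and update XOS records."""
        response = requests.get(
            "https://api.balena-cloud.com/v6/device",
            headers={"Authorization": f"Bearer {settings.BALENA_API_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        for device in response.json().get("d", []):
            # update_or_create the matching device record in XOS (omitted here)
            pass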

XOS also acts as middleware for the following services: 

  • Vernon – import data from ACMI’s Collection database 
  • TMDB/IGDB – import Film/TV/Videogame data 
  • Balena – device configuration syncing 
  • Prometheus/Grafana – device monitoring, locations and tagging 
  • EBMS – calendar syncing 

How do we deploy it?

We run XOS on Azure servers and use the following services to deploy our infrastructure: 

  • Terraform – to store our infrastructure state in code 
  • Kubernetes – to deploy our Docker containers onto our infrastructure 
  • Docker – to build the XOS containers to deploy 
  • GitHub Actions – to test, lint, and trigger a deployment to our Staging and Production clusters 

Our Terraform scripts tell our Kubernetes infrastructure to scale up and down depending on load, to minimum and maximum resource limits which stay within our monthly budget for running our servers. 

Migrating away from Windows Server VMs to Terraform-based infrastructure-as-code took a huge amount of work. We co-designed the initial build with consultants from ThoughtWorks, pair-programming a lot of the initial code so we both absorbed the knowledge of the system. 

There were many months of teething problems, and DevOps continues to consume many hours of development time. 

But now that we have a stable infrastructure, Terraform allows us to: 

  • Make infrastructure changes that are reflected in our GitHub repositories, making it easy for multiple people across multiple teams to be able to update our infrastructure 
  • Spin up/down new clusters for testing experimental infrastructure quickly and repeatably 
  • Perform disaster recovery drills regularly so we can be sure we can recover from failures quickly 

Kubernetes allows us to: 

  • Have our CI/CD pipeline deploy our main branch to staging 
  • Have our production builds be triggered by tagged GitHub releases with close to zero downtime 
  • Manually deploy specific GitHub commits to staging to test pre-pull-request features 
  • Use kubectl to explore our infrastructure status, and debug the environment 

We chose to use Docker containers for both our local development and deployment of XOS so that we could be sure what we were seeing on our developer laptops while building new features would be as close as possible to what we’d see when it ran on our infrastructure. 

This was a dream to work with in the early stages when we had a big team of developers, removing the need for each of us to manage our own dependencies and services. But as the project has grown, and with it our Dockerfiles, the speed of recompiling and reloading has at times become unworkable. 

So even the dream of one environment to rule them all still has its own growth debt that needs to be paid regularly. 

The ACMI post-visit website allows you to explore and share your personal visit to ACMI, including everything you collected.

Post-visit 

What is it?

The ACMI post-visit website is where all the objects our visitors collected on their Lens are presented to them, helping to bridge the divide between the in-museum and online experience. (Patten, 2013) (4) 

On the back of every Lens is a unique six-character code used to log in to the post-visit experience. 

Once a visitor logs in using a magic-link email, the ACMI website becomes a diary of their own personal journey throughout the museum, linking the objects they collected to videos from the ACMI collection, essays, and related objects, films, TV shows, and videogames. 
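
A minimal sketch of that magic-link flow, using Django's signing utilities, is below; the helper functions, URL, sender address, and expiry are assumptions for illustration rather than the production implementation.

    from django.core import signing
    from django.core.mail import send_mail

    def send_magic_link(lens_short_code, email):
        """Email the visitor a signed link to their post-visit page."""
        token = signing.dumps({"lens": lens_short_code})
        link = f"https://museum-website.example.org/your-visit/?token={token}"
        send_mail(
            subject="Your ACMI visit",
            message=f"Explore everything you collected: {link}",
            from_email="noreply@example.org",
            recipient_list=[email],
        )

    def lens_from_token(token, max_age=60 * 60 * 24 * 30):
        """Return the Lens short code if the signed token is valid and unexpired."""
        return signing.loads(token, max_age=max_age)["lens"]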

Constellations of objects can be explored in a similar way to our in-gallery Constellation Tables, stepping through the connections between objects carefully curated by our ACMI team. 

These visits, connections, and visitor-generated experiences can all be shared with friends, to inspire a future generation with the wonder and beauty of film and videogames. 

The post-visit experience was built by our partners Liquorice, using a very similar stack to XOS so that maintenance and future features are easy to add. Liquorice used: 

  • Django – for the backend admin interface 
  • Wagtail – for the curatorial/marketing team’s essay writing backend 
  • Vue.js – for the post-visit website, enabling beautiful layouts and transitions 
  • Tailwind CSS – for a better developer experience styling the pages 

How do we deploy it?

The post-visit website is deployed onto ACMI infrastructure in the same way we deploy XOS. The infrastructure is written in Terraform to scale up and down automatically depending on traffic, with the codebase stored in ACMI’s GitHub repository. 

Liquorice and ACMI collaborated closely together to help guide and build the post-visit website and XOS API in parallel, so we could ensure an efficient path for ACMI’s collection data to flow from Vernon > XOS > ACMI website. 

Data pipelines, sources of truth

In an ideal world ACMI would love to have a single source of truth for all data. We got close to this ideal but have a few different places for different data types. 

  • Vernon – source of truth for collection data 
  • XOS – source of truth for device configuration data 
  • ACMI website – source of truth for curator essays 

This split enables our AV team to have a nice experience entering device configuration data, and our curatorial team a nice experience entering essays with social media embeds and videos. 

For longevity it may be that we roll device configuration and essays back into Vernon in the future, but for now this split makes sense to us. 

Our data pipeline now looks like: 

  • Vernon – object data is entered 
  • XOS – object data is imported from Vernon, re-shaped, and exposed as APIs for devices and the website 
  • ACMI website – object and Lens data is imported from XOS, with essays added 

The XOS Analytics dashboard, showing anonymous visitor data including dwell time, average taps per Lens, and popular objects collected.

Analytics 

What are we measuring?

Our first iteration of an XOS analytics dashboard focuses on the data our board members and curators are most interested in: 

  • Average dwell time (first Lens tap to last Lens tap) 
  • Number of tickets booked vs number of Lenses used 
  • Number of Lenses that tapped 1 object 
  • Number of Lenses that tapped more than 1 object 
  • Number of total unique Lens taps 
  • Average unique taps per Lens 
  • Top 10 objects tapped in the museum 
  • Top interactive experiences created by visitors 
  • Member cards used as Lenses 
  • Total Lenses activated 
  • Total unique post-visit magic-link emails delivered 
  • Trends for all of the above data over the last period selected 
  • Graph of this week versus last week’s total Lens taps 
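
As an example of how the simplest of these metrics is derived, average dwell time can be computed from raw tap records along the lines of the sketch below (the field names are illustrative):

    from collections import defaultdict
    from datetime import datetime

    def average_dwell_minutes(taps):
        """Average time between each Lens's first and last tap, in minutes."""
        times = defaultdict(list)
        for tap in taps:
            times[tap["lens_short_code"]].append(tap["tap_datetime"])
        dwells = [
            (max(stamps) - min(stamps)).total_seconds() / 60
            for stamps in times.values()
            if len(stamps) > 1  # a single tap gives no dwell time
        ]
        return sum(dwells) / len(dwells) if dwells else 0.0

    taps = [
        {"lens_short_code": "ABC123", "tap_datetime": datetime(2021, 4, 1, 10, 0)},
        {"lens_short_code": "ABC123", "tap_datetime": datetime(2021, 4, 1, 11, 30)},
    ]
    print(average_dwell_minutes(taps))  # 90.0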

How are we planning on using the data?

In early April 2021, ACMI visitors reached 1,000,000 unique Lens taps, so we have enough data to start looking into favourite hot spots in the museum, as well as least popular areas. This will give us some areas to run visitor experience research for the curators to validate whether objects need replacing or moving. We can also match our popular-areas data against visitor flow stories from our floor staff to help us shape future exhibition designs.

The ACMI Constellation Tables, built by Grumpy Sailor using XOS APIs, let visitors explore how the objects they collected during their visit relate to others in the ACMI Collection.

Internal vs third-party 

What did we build?

In our two-year renewal, ACMI staff worked closely with many external contractors. 

The overall exhibition and experience design was developed by US firm Second Story, now part of Sapient/Razorfish (2018-2021). This design work built upon a masterplan developed by ACMI and NZ museum masterplanners Art of Fact (2015-2017). 

After the design phases, the production work was undertaken by Australian firms working with ACMI. 

Third-party software and services we used

What we learnt 

Spreadsheet errors

One of the most difficult problems we faced in the lead-up to launch was the discovery of a truncated spreadsheet column given to us by our Lens NFC tag manufacturer. 

It’s an easy mistake to make: NFC UIDs have leading zeros, and if copied and pasted into an Excel spreadsheet where the column data format isn’t text, Excel will kindly interpret them as numbers, drop the leading zeros, and round them off. Because we had a spreadsheet of a few hundred thousand entries, it wasn’t obvious until a few pages into the sheet that there was a problem, so it was easily missed. 

We attempted to recover some of the truncated data by adding the leading zero and attempting to reverse the rounding as best we could, but the result was ~1% bad Lens UIDs. The worst part: they were randomly distributed throughout the boxes of Lenses delivered to us. 

We’ve handled this in software via XOS Lens UID validation rules, and purple LED light error responses presented to the visitor on a tap, but we apologise if you visit ACMI and find a broken Lens. 
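
The validation itself is straightforward; the sketch below shows the kind of rule we mean, assuming the 7-byte (14 hex character) UIDs of NTAG213 chips. The exact rules in XOS are a little more involved.

    import re

    UID_PATTERN = re.compile(r"^[0-9a-f]{14}$")  # 7-byte NTAG213 UID as lowercase hex

    def is_valid_lens_uid(uid):
        """Reject UIDs that lost leading zeros or digits in the spreadsheet."""
        return bool(UID_PATTERN.match(uid.lower()))

    print(is_valid_lens_uid("04a3f21b6c5d80"))  # True
    print(is_valid_lens_uid("4a3f21b6c5d80"))   # False: a leading zero was dropped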

DevOps for museums

Our team wanted to version control as much of our software as possible, which meant adopting Terraform at a very early stage of its development. This led to a lot of hours burned updating syntax and adopting new best practices as Terraform matured and our infrastructure grew. 

We also lost a lot of time becoming good-enough configuration admins of services like Elasticsearch and learning the right way to get auto-scaling resources (both up and out) to function seamlessly with zero downtime. 

Being able to easily spin up and down entire environments repeatedly gave us a huge advantage for disaster recovery and infrastructure efficiency tests, but as with all software, Terraform and Kubernetes have their own decay and debt associated. Operating system and security updates may be taken care of for us, but we still need to keep Kubernetes versions updated and Terraform plugin dependencies up to date. 

As we move into the next phase of our project, we’re handing over the DevOps responsibilities from our software team to our IT team, so stay tuned for ACMILabs blog posts on how they’ve experienced the move to infrastructure-as-code. 

If we had our time again, would we have used a SaaS provider like Heroku? It’s highly likely. 

First outage

We had our first outage after 63 days of uptime – a complete infrastructure rebuild thanks to Azure thinking our infrastructure was in a failed state due to a missing heartbeat. 

The good news is that it rebuilt itself with about 7 minutes of downtime without us having to touch a thing (thankfully the database uses Azure-hosted PostgreSQL). 

The bad news is that the outage highlighted a bug in our Lens Reader code: it didn’t save and re-queue taps when XOS returned a 5xx response, so we lost most of the Lens taps in the short time it took our web servers to go from rebuilding to ready to return a response. 
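
The fix was conceptually simple: keep taps in a local queue and retry on server errors rather than dropping them. A sketch of that pattern is below; the endpoint and payload are illustrative.

    import time
    from queue import Queue

    import requests

    tap_queue = Queue()

    def post_tap(tap):
        """Send one tap to XOS, raising on any server error so it can be retried."""
        response = requests.post("https://xos.example.org/api/taps/", json=tap, timeout=5)
        if response.status_code >= 500:
            raise requests.HTTPError(f"XOS returned {response.status_code}")
        response.raise_for_status()

    def worker():
        while True:
            tap = tap_queue.get()
            try:
                post_tap(tap)
            except requests.RequestException:
                tap_queue.put(tap)  # re-queue the tap instead of dropping it
                time.sleep(5)       # back off before trying the next one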

Hardware failures

It’s still early days for our production hardware, but throughout our two-year development we’ve had a very small number of failures. Given the number of Raspberry Pis we’ve deployed, we budgeted for a few dozen failures a year, but so far, we’ve only seen one confirmed hardware failure, and another likely due to shorting out the pins on a metal case. 

We also had one LED strip partly fail to produce the full colour range. 

Watch ACMILabs blog posts for updates as time goes on. 

Conclusion 

By focusing on the visitor experience and ease of deployment and replacement of our technology, ACMI has produced a rich renewed museum experience upon a reliable, sustainable and extensible infrastructure. 

Visitors have embraced the Lens, providing us with a huge amount of anonymous data to guide our business decisions into the future. 

Our process of iterative co-design, prototyping and testing is one we will refine and use again. 

What’s next? 

Open APIs

We love digging through our data, and we’d really like to share it with you all. The next phase of our development is to work on open public XOS APIs. These might include: 

  • /api/taps/ – our anonymous Lens taps API 
  • /api/constellations/ – curated ACMI objects and why they relate to each other 
  • /api/works/ – our ACMI object collection 
  • /api/creators/ – the people who created ACMI collection objects 
  • /api/labels/ – digital museum labels 
  • /api/videos/ – the ACMI video collection 
  • /api/images/ – the ACMI image collection 
  • /api/analytics/ – XOS analytics API 
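
Once these are live, exploring them should be as simple as the sketch below; the host and response shape are assumptions, since the endpoints are still in development.

    import requests

    # Hypothetical public XOS host; the open APIs described above are not live yet.
    response = requests.get("https://xos.example.org/api/works/", timeout=10)
    response.raise_for_status()
    for work in response.json().get("results", []):
        print(work.get("id"), work.get("title"))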

If you can think of other ACMI data sources you’d like to play with, drop us a line. 

Data driven usability studies

Our Experience, Product & Digital team are planning usability studies to help guide museum updates, and future exhibition design. If you or your institution would like to take part in these, please get in touch.

Acknowledgements 

The ACMI team

Many ACMI staff members were involved in the planning of ACMI’s renewal project before the first software developer got involved, but here are the people who directly played a huge part in the technical solutions: 

  • Seb Chan – CXO – chief experience officer 
  • Greg Turner – CTO – chief technology officer (fixed term position for ACMI Renewal) 
  • Lucie Paterson – head of experience, product & digital 
  • Francesco Ramigni – manager, software development and cloud services 
  • Katarina Bogut – project manager (fixed term position for ACMI Renewal) 
  • Pip Shea – UX designer 
  • Matt Millikan – senior writer & editor 
  • Linda Connolly – Vernon specialist & bug hunter 
  • Ali Haberfield – creative technologist 
  • Andrew Serong – creative technologist 
  • Benjamin Laird – creative technologist (fixed term position for ACMI Renewal) 
  • David Amores – creative technologist (fixed term position for ACMI Renewal) 
  • Sam Maher – creative technologist 
  • Simon Loffler – creative technologist 

The external teams

We wouldn’t have been able to develop such an immersive experience without the help of our partners: 

  • Second Story – exhibition and experience design  
  • Lumicom – lens reader production, physical installation, and power control 
  • Grumpy Sailor – interactive production and software development 
  • Mosster – interactive production and software development 
  • Liquorice – website design and development 
  • Boojum – creative technology software development 
  • Swinburne University’s Centre for Design Innovation – lens production and manufacture 

References 

  1. Luigina Ciolfi & Marc McLoughlin. (2011) Physical Keys to Digital Memories: Reflecting on the Role of Tangible Artefacts in “Reminisce”. Museums and the Web 2011 
  2. Sebastian Chan & Aaron Cope. (2015) Strategies against architecture: interactive media and transformative technology at Cooper Hewitt. MW2015: Museums and the Web 2015 
  3. Sebastian Chan & Lucie Paterson. (2019) End-to-end Experience Design: Lessons For All from the NFC-Enhanced Lost Map of Wonderland. MW19: MW 2019 
  4. Dave Patten. (2013) Bridging the divide between the online and in-museum experience. Museums and the Web 2013 

Cite as:
Loffler, Simon and Chan, Seb. "Technologies designed and built that underpin ACMI’s new experiences." MW21: MW 2021. Published April 23, 2021. Consulted .
https://mw21.museweb.net/paper/technologies-designed-and-built-that-underpin-acmis-new-experiences/