Iteration, Adaptation, & Motivation: Outcomes of the Mapping Historic Skies Crowdsourcing Effort at the Adler Planetarium

Jessica BrodeFrank, University of London, UK; Samantha Blickhan, Zooniverse, USA

Abstract

As Mapping Historic Skies (a collaboration between Zooniverse.org and the Adler Planetarium) nears completion, the Adler Collections and Zooniverse teams look forward to presenting their findings: how the project has helped the Adler sort through collection pieces to establish a front-facing constellation database, how volunteers have engaged with the project, how audiences expanded from the gallery interactive to the online project, volunteer motivations and project design, and the future projects inspired by the MHS Talk boards. In the session we will discuss the successes and challenges of creating a collections-based collaborative crowdsourcing project, how to incorporate results from crowdsourcing projects into front-facing usable products, and specific lessons learned from MHS.

Keywords: application, interactive, outcomes, crowdsourcing, design, engagement

Authors: Jessica BrodeFrank, Dr. Samantha Blickhan, Becky Rother, and Dr. L. Clifton Johnson

At the 2019 Museums and the Web Conference in Boston, MA, members of the Collections and Zooniverse teams at Chicago’s Adler Planetarium participated in an App Crit for their collaborative project, Mapping Historic Skies (MHS). (As defined on the Museums and the Web website, an App Crit is a session type in which “Recent apps – iOS and Android, touch tables, kiosks and bespoke hardware (as distinct from websites including mobile sites) – are critiqued by an expert panel of peer reviewers. Everyone learns from the process and takes away tips that can be applied to other apps.” https://mw18.mwconf.org/session/app-crit/) The MHS project invited visitors to the Adler Planetarium to use an interactive app to help Adler staff process images of historical star maps (Blickhan, BrodeFrank, and Rother, MuseWeb 2019). The ultimate goal of this project is to create a database of constellation depictions in art from the Adler collections, allowing users to browse centuries of depictions for any constellation. The project was built using Zooniverse, the world’s largest platform for online crowdsourced research, and the MHS interactive (featured onsite at the Adler) was hosted on the Zooniverse mobile application. An online component on Zooniverse.org included an additional workflow: once images had been segmented into individual constellations via the Adler interactive, online volunteers were asked to identify those constellations by following a decision tree that guided them through the identification process without requiring any previous knowledge of constellations or astronomy.
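
The decision tree’s actual questions are part of the project’s workflow content and are not reproduced in this paper, but the underlying structure is simple to sketch. In the minimal Python illustration below, with invented questions and branches, each answer leads either to a follow-up question or to a constellation name:

```python
from dataclasses import dataclass, field

# A toy decision-tree node for guided identification. The questions,
# answers, and names below are invented for illustration; they are not
# the actual content of the MHS Identify workflow.
@dataclass
class Node:
    question: str
    # Each answer maps to either a follow-up Node or a final name (str).
    branches: dict = field(default_factory=dict)

tree = Node("What kind of figure do you see?", {
    "Animal": Node("Does it have wings?", {"Yes": "Cygnus", "No": "Ursa Major"}),
    "Human": Node("Is the figure holding an object?", {"Yes": "Hercules", "No": "Cepheus"}),
})

def identify(node, ask):
    """Walk the tree, calling `ask(question)` for each answer,
    until a leaf (a constellation name) is reached."""
    while isinstance(node, Node):
        node = node.branches[ask(node.question)]
    return node

# Example: a volunteer who answers "Human", then "No", lands on Cepheus.
answers = iter(["Human", "No"])
print(identify(tree, lambda q: next(answers)))  # Cepheus
```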

In November 2019 the app debuted in a new exhibition at the Adler, Chicago’s Night Sky, and our concept of an interactive crowdsourcing project designed for an in-exhibit experience came to fruition. When the Adler closed in March 2020 due to COVID-19 safety concerns, the project pivoted so that both workflows were available online. As stated in previous publications, the online-only approach resulted in higher engagement rates compared with the in-gallery interactive, which led our team to consider volunteer motivations for participating in crowdsourcing projects in person as opposed to online (BrodeFrank et al., Theory and Practice).

As MHS nears completion, we present our findings here: how the project has helped the Adler sort through collection pieces to establish a front-facing constellation database, how volunteers have engaged with the project, how audiences expanded from the gallery interactive to the online project, volunteer motivations and project design, and the future projects inspired by MHS. We will discuss the successes and challenges of creating a collections-based collaborative crowdsourcing project, some best practices for incorporating results from crowdsourcing projects into front-facing usable products, as well as specific lessons learned from the Mapping Historic Skies project.

 

“Mapping Historic Skies” Onsite Experience vs. Zooniverse.org Experience

Planning for the Mapping Historic Skies (MHS) project began in 2018; it was the first collaboration between the Adler Planetarium’s Collections and Zooniverse teams. As the capabilities and reach of the Zooniverse platform have grown to include cultural heritage and digital humanities projects, the Adler Collections team has sought ways to engage members of the public with curatorial and collections work via the Adler’s collection of rare books, celestial cartography, and three-dimensional objects. As the Adler planned to open Chicago’s Night Sky, the MHS project was born, looking to provide guests with a unique experience. The team began planning an in-gallery interactive that encouraged guests to participate in the creation of a collections database showing how constellations have been represented in various artistic formats, reflecting different cultural contexts and time periods. Evolving from an NEH prototyping grant project, Digital Historic Skies (NEH Grant #HD-51957-14), MHS was conceived around these twin goals of producing a database and creating an engaging experience for guests, and was centered on the capabilities of the Zooniverse.org platform (BrodeFrank et al., Theory and Practice).

A large proportion of the planning for MHS focused on how to take an online experience like Zooniverse and make it an interactive experience in a museum gallery. The onsite interactive ran only from November 2019 to mid-March 2020, when museum closure due to COVID-19 moved the project online. This pivot, however, illuminated certain advantages of the online-only workflows. Looking at the segmentation workflow from launch to closure (MHS launched onsite on November 19, 2019, and the Adler closed to the public on March 15, 2020), we saw that only 7,000 classifications were made onsite, while 26,000 were made online during the same period (BrodeFrank, Blickhan, et al., Museological Review: (Re)visiting Museums). Roughly 60,000 guests came through the Adler in that time, suggesting less engagement from our onsite audience than from the group of 7,250+ dedicated classifiers on Zooniverse.

Though we had originally planned to hold back a portion of the data to run onsite when a reopening could happen, these statistics and the uptick in volunteer engagement during the early months of the pandemic closure convinced our team to move forward with completing the project online. For the first four weeks after launch, the MHS project received about 1,400 classifications a day, tapering off to around 500 classifications per day and holding steady for the next three months. After the closure we saw classifications steadily increase, with peak days of over 2,300 classifications; MHS retained an average daily classification rate of around 1,000 through mid-July. This uptick in participation is linked to a Zooniverse-wide increase in classification rates by a factor of 2 to 5 during the early months of the pandemic, as people looked for online activities (Johnson, AAS237, “Recent Growth” graph), and in particular online educational activities, with MHS and Zooniverse being included on lists published by the National Endowment for the Humanities (https://edsitement.neh.gov/teachers-guides/digital-humanities-and-online-education) and on public radio programs (https://will.illinois.edu/21stshow/story/virtual-museums-to-explore-from-home), respectively. Not only did these participation rates assure the team that continuing the project virtually was the correct choice, but the increased number of volunteers allowed us to lower retirement limits that had initially been set higher for onsite users (for fear that guests might play with the interactive without understanding the tasks, as observed during early testing). The increased engagement and reduced retirement limits accelerated the project, moving the projected finish date from December 2021–February 2022 to March 2021.

 

Talk Boards: Volunteer Engagement

A unique feature of Zooniverse is the ‘Talk’ message board system, included with each project on the platform. Since its early days, Zooniverse has incorporated message boards into projects in order to support direct engagement between the research team and the volunteer community. ‘Talk’ was also explicitly designed to let participants flag unusual or rare objects they came across while classifying, enabling ‘serendipitous discovery’. This is an acknowledgement that, though volunteers may initially engage with a dataset with a specific task or instruction in mind, platforms which support public volunteer engagement need a built-in way for volunteers to point out additional information to researchers running the project, ask questions, or share interesting moments from their experience on the platform (Blickhan et al., XXIXth IAU General Assembly).

Since the project’s launch in November 2019, more than 7,000 Zooniverse volunteers have posted 3,600 comments in over 2,000 discussion threads. The MHS Talk boards are split into themed groups, including: a Notes board, for posts about individual images in the project; Constellation Chat, for questions/discussion about constellations; Troubleshooting, for help or bug reporting; and Introduce Yourself, for volunteers to share information about who they are and how they came to participate in the project.

The subject-specific Talk board (Notes) is by far the most popular; volunteers have the option to go directly to this Talk board after they submit a classification on MHS, and the posts typically contain questions about what a constellation is or isn’t, reports of any image issues that need to be fixed by the project team, and commentary on the quality of art in the images.

The Talk boards also helped our team to refine the project workflows based on volunteer feedback. Early in the project the team began to notice that the hashtag #checkthebox was trending on the Notes board. The increased frequency of this hashtag led us to examine the workflow setup, and we realized that the identification workflow was missing a way for volunteers to indicate problems with data aggregation outcomes from the segmentation workflow. The identification workflow dataset is populated by the cropped images produced by aggregating the segmentation workflow results. Depending on the depictions within the parent (pre-crop) image, sometimes the aggregate cropped images were too large (including multiple constellations, causing confusion about which to identify) or too small (cutting off the constellation images). 

Zooniverse supports iteration and edits to active projects, so to help solve the #checkthebox issues, we added a question to the Identify workflow: “If you think the box drawn around the constellation needs to be reviewed (#checkthebox), select one of the options below. If you think the box is fine, leave the question unanswered and submit your classification.” Volunteers were given two possible answers: “box needs adjustment to better match constellation (too big or too small)” and “box includes multiple constellations.” Once images were retired, we were able to query the data export to find the images that had been flagged via this question, and the project team could examine the image file and make any necessary changes.
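
As a rough illustration of that query step: Zooniverse classification exports are CSV files whose annotations column holds a JSON list of task/value pairs, so flagged subjects can be collected with a short script. The task label "T3" and the file name below are placeholders; the real values depend on the project’s workflow configuration.

```python
import json
import pandas as pd

# Load the Zooniverse classification export for the Identify workflow.
df = pd.read_csv("mapping-historic-skies-classifications.csv")

FLAG_ANSWERS = {
    "box needs adjustment to better match constellation (too big or too small)",
    "box includes multiple constellations",
}

flagged_subjects = set()
for _, row in df.iterrows():
    # Each row's annotations are a JSON list of {"task": ..., "value": ...}.
    for task in json.loads(row["annotations"]):
        if task.get("task") == "T3" and task.get("value") in FLAG_ANSWERS:
            flagged_subjects.add(row["subject_ids"])

print(f"{len(flagged_subjects)} subjects flagged for box review")
```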

The MHS Talk board posts have also helped to benefit the entire Zooniverse platform. A series of Talk posts late in the project alerted us to the fact that ‘empty’ images were appearing in the Identify workflow, featuring small volunteer-generated annotations without any constellations to identify. Analysis of this issue showed that there was a problem with drawing annotations submitted via the Android mobile app, a bug which could impact any Zooniverse project that used drawing tools. Feedback from MHS volunteers identified individual classifications affected by the bug, allowing the project team to remove the ‘empty’ images from the Identify workflow and allowing the Zooniverse team to find and fix the underlying problem.

Another issue brought to our attention via the Talk boards came from volunteers who were uncertain how to record constellation names when multiple languages were involved. The Identify workflow opens with the question, “Do you know the name of the constellation shown?” Volunteers who respond “Yes” can write the constellation name in a free-text field. Responses to this question included a mix of Latin constellation names, English constellation names, and occasionally names written in French, Italian, or German, because a subset of the images in the workflow included captions in those languages. Volunteers were often unsure whether to transcribe the name they saw next to the constellation in the image, or to write the name given in the Field Guide based on the International Astronomical Union’s (IAU) standard nomenclature (https://www.iau.org/public/themes/constellations/).

To resolve this confusion, we posted guidance on the Talk boards and in the project FAQ and Field Guide to clarify which names we sought, and we provided alternative names within this guidance to help volunteers who might not be familiar with the IAU naming conventions. We also added a quality assurance step to the data post-processing to standardize the spelling and name used for each identified constellation before embedding the names into the cropped images and adding them to the database.
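
That standardization step boils down to a lookup from observed spellings and translated captions to IAU names. A minimal sketch, with an illustrative (far from exhaustive) mapping:

```python
# Map volunteer-supplied spellings and translated captions to IAU names.
# The alternate names below are illustrative examples, not the project's
# complete lookup table.
IAU_NAME_MAP = {
    "bootes": "Boötes",
    "le bouvier": "Boötes",  # French caption
    "hydre": "Hydra",        # French
    "cygne": "Cygnus",       # French
    "swan": "Cygnus",        # English common name
}

def standardize(name: str) -> str:
    """Return the IAU standard form of a volunteer-supplied name,
    falling back to a tidied version of the input if no match exists."""
    cleaned = name.strip().lower()
    return IAU_NAME_MAP.get(cleaned, name.strip().title())

print(standardize("  le bouvier "))  # Boötes
```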

Volunteer feedback on the Talk boards also helped us to expand the Field Guide. When we launched the project, the Field Guide included the 88 constellations recognized by the IAU for the purpose of identifying and cataloging celestial objects. After we launched the Identification workflow, we quickly noted that the Talk boards were filling with questions about additional constellations that were not included in this Field Guide, and therefore were not listed as options in the identification workflow. We began to see hashtags like #obsolete and #defunct signaling constellations that are no longer considered official. Similarly, we saw that artistic renderings of certain constellations, such as Perseus holding Medusa’s head or Hercules holding a lion pelt, were being cropped so that the held objects were mistakenly identified as separate constellations.

As these outlier constellations were identified, we were able to add them to the Field Guide as well as to the Identify workflow, and to flag completed images which may have been misidentified due to an incomplete set of options. The Identify workflow now offers 104 possible constellations, including many obsolete ones. We have also added to the Field Guide descriptions of constellations that may include additional pieces (e.g., the Medusa head being a part of Perseus, rather than its own constellation). The Talk boards have helped our team learn about the breadth of constellation depiction styles, as well as how these styles can affect volunteer interpretation, and we have updated our Field Guide descriptions in response. It is important to note that it would have taken us much longer to troubleshoot this issue had we not had a direct line of communication with the MHS volunteers.

 

Lessons from the Early Data Exports

With each iteration of the project we have seen a direct effect on the resulting data. In early results, the major issues were with depictions of three-dimensional objects (such as globes), depictions from Islamic sources, and depictions of obsolete constellations. As we updated the workflows and Field Guide to be more inclusive of the constellations featured in the project, and focused on daily check-ins on the Talk boards, we were able to communicate these changes directly to volunteers. This showed participants not only how to deal with confusing or unfamiliar subjects, but also that their feedback was directly shaping the project structure. This feedback-iteration cycle produced higher-quality results, which in turn allowed us to lower the retirement limits. When we launched the project, 50 people needed to classify an image before it was considered complete and removed from the active workflow. After the feedback-iteration cycle began, we were able to lower that number to 20 in October 2020, applying the new limit to our fourth through twelfth subject sets.

As retirement limits have been lowered for each workflow, we have also been able to track the level of volunteer consensus, as well as the accuracy of results. The data reduction code, written by one of the authors (Johnson), uses a combination of custom Python code and the zooniverse/aggregations-for-caesar software package (https://doi.org/10.5281/zenodo.2562860), and includes a consensus score as part of its output: the general level of agreement among volunteers on a given task. The consensus score has helped guide the project team’s quality assurance (QA). Across the five subject sets that have already been exported from the project—3,558 identified, cropped images—the consensus score has averaged around 0.608 (out of 1). Despite the retirement limit changing from 50 views for the first three subject sets (2,372 images) to 20 for the last two (1,186 images), the consensus score did not change appreciably. Because there was no substantial drop in data quality when we lowered the retirement limit, we chose to keep it at 20 for the remainder of the project.
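
The exact computation lives in the aggregations-for-caesar pipeline, but the intuition behind a consensus score can be sketched in a few lines, assuming a plurality-agreement definition (the fraction of volunteers whose answer matches the most common answer):

```python
from collections import Counter

def consensus_score(votes):
    """Fraction of volunteers agreeing with the plurality answer.
    `votes` is the list of constellation names submitted for one
    cropped image; 1.0 means unanimous agreement."""
    if not votes:
        return 0.0
    _, top_count = Counter(votes).most_common(1)[0]
    return top_count / len(votes)

# Example: 14 of 20 volunteers agree on Cepheus -> 0.7
votes = ["Cepheus"] * 14 + ["Bootes"] * 4 + ["Draco"] * 2
print(consensus_score(votes))  # 0.7
```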

A frequently cited reason for reluctance to incorporate crowdsourcing into research projects is concern about the quality of results. For example, if one aim of crowdsourcing is to save research teams time, it does not help if the team then has to spend substantial effort on quality assurance. The consensus scores produced by the post-processing aggregation step were a helpful metric for the quality assurance process. As we processed the 3,558 cropped images from the first five subject sets, we began to see that the majority of incorrectly identified constellations had a consensus score below 0.4. Upon further investigation, we realized the majority of the incorrect identifications involved half a dozen constellations: Cepheus, Bootes, Hydra, Draco, Aquila, and Cygnus. These six constellations accounted for 93% of the constellations found to be incorrectly identified, with over 50% of those being a misidentification of a given constellation as Cepheus.
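
In practice, this meant the reduced exports could be triaged automatically before manual review. A sketch of that triage, assuming hypothetical file and column names (consensus_score, identified_name) for the reduced export:

```python
import pandas as pd

# Hypothetical reduced export; real file and column names will differ.
reduced = pd.read_csv("mhs_identify_reductions.csv")

# Most misidentifications in the first five subject sets fell below
# a consensus score of 0.4, so flag those images for manual review.
needs_review = reduced[reduced["consensus_score"] < 0.4]

# Prioritize the six constellations that accounted for 93% of errors.
PROBLEM_NAMES = ["Cepheus", "Bootes", "Hydra", "Draco", "Aquila", "Cygnus"]
priority = needs_review[needs_review["identified_name"].isin(PROBLEM_NAMES)]

print(f"{len(needs_review)} flagged; {len(priority)} involve known problem constellations")
```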

Looking at these ‘problem’ constellations, and in particular at the misidentification of Cepheus, an interesting issue arose. As the team continued their QA process—and the majority of misidentified images continued to be attributed to Cepheus—we realized this was due to the way that volunteers had accidentally cropped the parent image to cut off parts of the constellation. As previously discussed, there were some issues in the segmentation workflow with volunteers cropping out individual pieces of constellations, typically due to lack of familiarity with what constituted the ‘whole’ constellation—this was largely solved by adding more information and examples to our Field Guide. However, even with the additional information, some of these ‘partial’ constellation images were still sent to the Identify workflow, often resulting in images of disembodied body parts: arms, legs, backsides, torsos. The majority of these body parts were identified as Cepheus. We believe this was due to the fact that, of all the humanoid male constellations to choose from, Cepheus was the only option for a male not holding an object. 

Cepheus, the mythological husband of Cassiopeia and father of Andromeda, is typically depicted as a man wearing a crown. He is sometimes—but, crucially, not always—holding a scepter. As our volunteers were faced with identifying body parts, we believe they followed the instructions and guidance to the best of their ability, which often resulted in Cepheus. This was an interesting discovery, and one we are also still considering as we plan the creation of the database. As the segmented images have been exported from the project, paired with their identification results, we have followed a particular process in terms of data management: 

  1. Embed the image files with the object accession/call number for their parent object, as well as with the identified constellation name. At this stage we also flag any incorrect identifications and change them to the correct ones. 
  2. Rename the image files to include the prefix “COL_MHS_” before the accession/call number of their parent image. This ensures not only that we can sort all the cropped images generated by the project, but also that we can tell which cropped image came from which object in the collection. 
  3. Add files to the Piction Digital Asset Management System (DAMS). We chose to ingest all the cropped images to keep a record of what was produced in the project. 
  4. Finally, we manually add the cropped images to individual workspaces (folders) within Piction, sorting them by constellation—all the Hercules images are in a ‘Hercules’ folder, &c. (A scripting sketch of steps 2 and 4 follows this list.)
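
Steps 2 and 4 lend themselves to scripting. A minimal sketch, assuming each cropped image arrives paired with its parent object’s accession number and identified constellation name (the paths and function names here are illustrative, not the team’s actual tooling):

```python
from pathlib import Path
import shutil

DAMS_STAGING = Path("piction_staging")

def process_crop(image_path: Path, accession_number: str, constellation: str) -> Path:
    """Rename a cropped image with the COL_MHS_ prefix (step 2) and
    sort it into a per-constellation folder (step 4)."""
    # Step 2: the prefix groups project crops together, and the accession
    # number ties each crop back to its parent object in the collection.
    new_name = f"COL_MHS_{accession_number}_{image_path.name}"

    # Step 4: one folder per constellation, e.g. piction_staging/Hercules/.
    dest_dir = DAMS_STAGING / constellation
    dest_dir.mkdir(parents=True, exist_ok=True)
    destination = dest_dir / new_name
    shutil.copy2(image_path, destination)
    return destination
```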

So far, this final stage has involved the most QA. Of the 3,554 cropped images created thus far, 3,500 have been sorted into the database folders, with roughly 1.5% left out. These 54 image files were deemed unusable, as they contain only a small portion of a given constellation. The majority are humanoid constellations represented only by single body parts (often labeled as Cepheus, as described above). As we continue to process the data and add more files to the DAMS, and subsequently to the individual constellation workspaces, we are developing criteria for what should be included in them. At present, we are considering issues with bad segmentation, duplicate images (e.g., the Adler collections include multiple versions of the same book), and low-resolution images.

As we work to process these exports from the Zooniverse platform, we have been impressed with the success of the project. Very few constellations have been misidentified—as of writing, the Identification results have a 98% accuracy rate—and those that have been incorrectly named typically trace back to an issue with the segmentation workflow. The remaining data complications, in particular the disembodied misidentifications of Cepheus, will serve as lessons for future projects: in particular, the value of beta testing with a wider group, and of examining crop exports earlier so the design of the instructions can be iterated sooner. These lessons include front-loading requisite constellation information to ensure volunteers segment parent images without cutting off parts of constellations, as well as writing more effective Tutorials for identifying unfamiliar or non-standard constellations.

 

Crowdsourcing Collections: MHS Results and a Future Database

As we consider the outcomes of the project, we hope to use our experiences to recommend best practices to other teams considering incorporating crowdsourcing into their digital museum portfolio. One aspect of crowdsourcing that is often overlooked is the amount of staff time and effort needed to run a successful, healthy project. Our Adler-Zooniverse team members cautioned us early on to reserve staff effort for project ‘care and feeding’, particularly for monitoring the Talk boards, and indeed the Talk boards were one of the most engaging and influential parts of the project. Talk monitoring involved not only answering questions and noting issues, but also encouraging volunteers’ continued engagement with and commitment to the project. Even with the recommendations from the Adler-Zooniverse team in mind, the Collections team had to invest more time than originally anticipated to monitor Talk: what started as a twice-weekly check-in became a daily check-in at the height of the project. As we look to future projects, the Collections team has used this experience to better budget time for this crucial part of the project process.

Similarly, we have learned firsthand to budget additional time for data export, aggregation, and processing. Because MHS uses two workflows, it requires twice the data processing time. As we look to future projects, preparing for aggregation (and ease of aggregation) is now a larger concern. We have also learned that QA will likely take more time than anticipated, and we are considering how to incorporate ‘internal QA’ steps earlier in future projects, using early beta testing to test export and QA processes. We were not able to anticipate how many segmented, identified images this project would generate; we knew we had approximately 4,000 original images, some of which contained up to 80 individual constellations. The 3,554 individual cropped images we have processed thus far (from the first five subject sets, out of 12 total) took about 70 hours to embed, rename, upload, and sort. At a similar rate of roughly 14 hours per subject set, the remaining seven subject sets will take an additional 100 hours or so to process.

We also want to recommend early interaction with data exports, in order to rethink and reconceptualize the desired final output. While processing the results of the first three subject sets we discovered a number of early issues (noted above). Because we found them early, we were able to iterate on the project’s task structure in order to produce high-quality, useful results for inclusion in the desired constellation database. 

Our next steps include continuing to add the identified images to the Piction DAMS once they are processed and have passed QA. Upon the completion of the MHS project, we will send an email newsletter to the more than 7,250 volunteers who have participated to let them know how their contributions have helped the Adler—not only creating individual cropped and identified images, but also bringing us closer to our goal of a searchable database. As the files are uploaded to the DAMS and sorted by constellation, we hope to create a ‘microsite’ within the Adler’s website, with hyperlinks for all 107 individual constellations leading to the corresponding Piction workspaces. This will allow anyone to browse the different depictions of each individual constellation. Once this microsite is built and functioning, we will also share it with the volunteers (via the project ‘Results’ page and newsletter) so they can see and interact with the fruits of their labor.

The Adler is also excited to share with our volunteers that a series of coloring books will be created based on the MHS constellation images. A large percentage of volunteer Talk comments centered on the visual choices or artistic styles of the constellation depictions (ranging from appreciation to confusion to humor), and the Collections team saw an opportunity to use this collection to further engage audiences and showcase Adler collections. We were able to incorporate Talk comments directly into the internal project proposal, demonstrating to the Adler executive team the volunteers’ interest in the visual arts and the strong appeal the constellation collection has for expert and non-expert audiences alike. The Adler has approved a run of seven constellation coloring books, themed around constellation type, depiction style, or narrative context (for example, animal constellations, defunct constellations, zodiac constellations, etc.). Each of these books will feature between 10 and 15 constellations, with two to four versions of a given constellation to show variation and evolution of design based on culture, location, and time period. These books will allow us to share more specific information on the origins of these constellations and the cultural and artistic influence of their depictions, as well as quotes from the MHS Talk boards from users who helped identify defunct/obsolete constellations or who engaged volunteers and team members in discussions of the visual aspects of the constellations.

Work on the coloring books will begin in March 2021, and we plan to send the first two versions as PDFs to the MHS volunteers by May 2021 to thank them for their participation in the project and their contributions to the Adler Collections department. We hope this helps to solidify our relationship with these volunteers and encourages them to continue to engage with Adler projects on Zooniverse, as well as with the Zooniverse platform in general. The Collections team in particular has seen the MHS project as substantial proof that running crowdsourcing projects on Zooniverse is not just a way to ease the team’s workload, but also an effective tool for public engagement.

 

Lessons from MHS – The Future of Crowdsourcing with the Adler Planetarium’s Collection 

A major lesson the Collections team learned from Mapping Historic Skies was the reach and engagement that come from using a crowdsourcing platform like Zooniverse. By bringing our collections to Zooniverse, we were able to tap into its built-in community of more than 2 million volunteers. The project has had more than 120,000 unique pageviews since it launched, and more than 7,250 registered Zooniverse volunteers have contributed: an audience who may never have otherwise sought out content from the Adler. This project taught us the importance of connecting with existing communities, and to think of crowdsourcing not just as a tool for data processing but as a new and active form of engagement.

As we process the project results, it is important to note that the Collections team might have been able to crop the 4,000+ constellation depictions, identify the constellations, and create this database of individual constellations in roughly the same amount of time, and with lower overhead, than it took to build, launch, and run MHS. However, doing so would have cost us the opportunity to connect our content with thousands of new volunteers, as well as the valuable lessons we learned from MHS about the appeal of the collection as an artistic resource. In many ways the process proved even more important than the results: it solidified the Collections team’s relationship with the Adler-Zooniverse team and opened up more possibilities for future Zooniverse projects centered on our collections.

With MHS set to finish by March 2021, we began designing a second Zooniverse-hosted Collections project in Summer 2020. With an eye toward continuing to connect volunteers to the real work done in cultural heritage institutions, the second project brings volunteers into the world of cataloging. Tag Along with Adler, launching on Zooniverse in early 2021, is a metadata tagging project that aims to bring volunteer-generated tags and terms into the Adler’s cataloging system to improve the searchability and findability of collections online. In part due to the success of the constellation images in MHS, the Adler has again chosen to focus on our visual arts collections, spanning the rare book library, archival photograph collection, and works on paper collections. The new project will feature just over 1,100 of these visual arts objects, and we hope to further expose the Zooniverse community to what the Adler Collections have to offer, as well as to introduce the community to the importance of language in online searches.

MHS taught us the importance of a clear project goal, assistive workflows for non-expert volunteers, engaging data sets, and a regular staff presence on the Talk boards. We have taken these lessons into consideration while planning the second project’s workflows, goals, staff time allotment, and project teams. Tag Along with Adler will again feature two distinct workflows: one focused on verifying tags created by two different AI models, and the other focused on adding volunteer-generated tags. The project’s About text, Help text, and Field Guide were designed to state clearly that the goal is to compile volunteer tags: there are therefore no ‘wrong’ answers, though certain stylistic requirements are necessary for ease of data aggregation. The project messaging clearly states why these tags are being collected, why the Adler wants to expand our cataloging data to be more consistent with user needs, and how the results will benefit both the institution and the volunteers.

We learned from MHS that transparency is key to attracting an engaged base of volunteers, and that it is critical to have staff present on the Talk boards to maintain and encourage further engagement with the project. With this in mind, the Tag Along with Adler work plan includes extra Adler staff members, not only for beta testing the project but also for checking the Talk boards daily. We have also included additional time for data processing. We are excited to see how the Zooniverse community will engage with this project, and in what ways we will iterate on the original workflow designs based on volunteer feedback.

As the impact of MHS continues to spark new projects, such as coloring books, databases, and additional crowdsourcing efforts on Zooniverse, we are considering how the Adler Collections team can incorporate crowdsourcing projects as a central piece of departmental infrastructure. As we wrap up MHS and launch Tag Along with Adler, it is clear that these collections-based crowdsourcing projects best engage volunteers and produce impactful results when they are treated as prioritized institutional projects. By viewing these projects as mission-critical work that engages communities and sparks connections, we can justify allocating the resources, staff time, and timelines needed to create engaging content for users and usable end products for staff, volunteers, and the public.

 

Works Cited:

Blickhan, BrodeFrank, and Rother. “Crowdsourcing Knowledge: Interactive Learning with Mapping Historic Skies.” MuseWeb 2019. https://mw19.mwconf.org/paper/crowdsourcingknowledge-interactive-learning-with-mapping-historic-skies

 

Blickhan et al. “Transforming Research (and Public Engagement) Through Citizen Science.” Astronomy in Focus, Vol. 1, XXIXth IAU General Assembly, 2018, pp. 2-3.

 

BrodeFrank, Blickhan, et al. “A Place for Everything: Adapting Content Across Platforms and Audiences – A Series of Case Studies at the Adler Planetarium.” Museological Review, (Re)visiting Museums. Vol. 25, 2021, p. 10.  

 

BrodeFrank et al. “Crowdsourcing Knowledge for Representation: Interactive Learning and Engagement with Collections Using Zooniverse’s Mapping Historic Skies.” Theory and Practice, the Emerging Museum Professionals Journal. Vol. 3, 2020. https://static1.squarespace.com/static/57866b6debbd1aedaf92e668/t/5ee64e72a5cd7a304ca5e01c/1592151674493/2020TP_BrodeFrank.pdf    

 

Johnson, L.C. “Unlocking Your Data Through People-Powered Research with the Zooniverse.” Poster presented at AAS237, 2020. https://aas237-aas.ipostersessions.com/Default.aspx?s=7B-53-B5-C4-AB-4A-11-06-A9-C2-A2-AF-7A-0B-2A-BD

Museums and the Web. “Session Type: App Crit.” MuseWeb 2018. https://mw18.mwconf.org/session/app-crit/


Cite as:
BrodeFrank, Jessica and Blickhan, Samantha. "Iteration, Adaptation, & Motivation: Outcomes of the Mapping Historic Skies Crowdsourcing Effort at the Adler Planetarium." MW21: MW 2021. Published January 29, 2021.
https://mw21.museweb.net/paper/iteration-adaptation-motivation-outcomes-of-the-mapping-historic-skies-crowdsourcing-effort-at-the-adler-planetarium/