Topic modelling in the archives

There seems to be a lot of topic modelling going on at the moment. And why not? Projects like Mining the Dispatch are demonstrating the possibilities. Tools like Mallet are making it easy. And generous DHers like Ted Underwood and Scott Weingart are doing a great job explaining what it is and how it works.

I’ve talked briefly about using topic modelling to explore digitised newspapers, something that the Mapping Texts project has also been investigating. But I’ve also been following with interest Chad Black’s use of algorithmic techniques, including topic modelling, to look for local variations within the legal system of the early modern Spanish empire.

As part of the Invisible Australians project, Kate and I are exploring the bureaucracy of the White Australia Policy. In particular, we’re interested in the interaction between policy and practice, between the highly-centralised bureaucracy and the activities of individual port officials. Like Chad, we’re interested in mapping local variations — to try and understand the bureaucracy from the point of view of an individual forced to live within its restrictions.

I recently gave a presentation about the project at Digital Humanities Australasia (post coming soon!), and in preparation I decided to try a few topic modelling experiments. They were very simple, but I was impressed by the possibilities for exploring archival systems.

The problem I started with was this. The workings of the White Australia Policy are well documented in records held by the National Archives of Australia. Some series within the archives relate specifically to the operations of the policy — such as those containing many thousands of CEDTs (Certificates Exempting from the Dictation Test). But there are also general correspondence series created by the customs offices in each state, as well as by the Commonwealth Department of External Affairs, which administered the Immigration Restriction Act (responsibility later passed to the Department of Home and Territories and its successors). These general correspondence series are important, because they often include details of difficult or controversial cases — those that required a policy judgment, or prompted a change in existing practices. But how do you find relevant files within series that can contain large numbers of items?

Series A1, for example, is a correspondence series created by the Department of External Affairs. It contains more than 60,000 items. Past research tells us that amongst these 60,000 files are records of important policy discussions relating to White Australia. But these files tend to be labelled with the names of the people involved, so unless you know the names in advance they can be difficult to find.

Mitchell Whitelaw’s A1 Explorer, part of the Visible Archive project, lets you explore the contents of Series A1 in an easy and engaging way. But while the A1 Explorer provides new opportunities for discovery, it doesn’t offer the fine-grained analysis we need to sift out the files we’re after. And so… topic modelling.

The process was pretty simple. While I can dip into my bag of screen-scrapers to harvest series directly from the NAA’s RecordSearch database, there was already an XML dump of A1 available from data.gov.au. So I extracted the basic file metadata from the XML and wrote the identifiers and titles out to a text file, one item per line. Following the instructions on the Mallet website, I then loaded this file into Mallet:

/Applications/Mallet/bin/mallet import-file --input ./A1.txt --output A1.mallet --keep-sequence --remove-stopwords
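
For the extraction step itself, a few lines of Python will do. Here is a minimal sketch (the element names are my assumptions, so check them against the actual structure of the data.gov.au dump):

```python
import xml.etree.ElementTree as ET

# Extract identifiers and titles from the XML dump of Series A1.
# The element names ('Item', 'Barcode', 'Title') are assumptions --
# check them against the actual structure of the dump.
tree = ET.parse('A1.xml')

with open('A1.txt', 'w') as out:
    for item in tree.getroot().iter('Item'):
        barcode = item.findtext('Barcode', default='').strip()
        title = item.findtext('Title', default='').replace('\n', ' ').strip()
        if barcode and title:
            # Mallet's import-file expects each line as: name label text
            out.write('{} A1 {}\n'.format(barcode, title))
```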

Then it was just a matter of firing up the topic modeller:

/Applications/Mallet/bin/mallet train-topics --input ./A1.mallet --output-state ./A1.gz --output-doc-topics ./A1-topics.txt --output-topic-keys ./A1-keys.txt --num-topics 40

Again, I just followed the examples on the Mallet site.

Once it had finished, I opened up A1-keys.txt to browse the ‘topics’ Mallet had found. The results were intriguing. There are a large number of applications for naturalisation in A1, so it’s no surprise that ‘naturalisation’ figures prominently in a number of the topics. What was more interesting was the way Mallet had grouped the naturalisation files. For example:

naturalization christian hans hansen jensen petersen andersen nielsen larsen christensen johannes jens niels pedersen andreas johansen martin jorgensen

and

naturalisation certificate giuseppe salvatore frank la leo samios spina sorbello leonardo fisher natale patane torrisi barbagallo luka rossi ross

Based on the co-occurrence of names within the file titles, Mallet had created groupings that roughly reflected the ethnic origins of applicants. It makes sense when you think about what Mallet is doing, but I still found it pretty amazing.

Mallet also found clusters around the major activities of the department, such as the administration of the territories. But of most interest to us was:

1 0.55539 passport ah student exemption students lee wong chinese young deserter education sing wing chong readmission son hing chin wife

The Chinese names alongside words such as ‘readmission’ and ‘wife’ suggested that this topic revolved around the administration of the White Australia Policy. This was easy to test. A1-topics.txt contains a list of every file in the series along with its weighting for each of the topics. I wasn’t sure what a reasonable cut-off value was for assessing the weightings, but after a bit of trial and error I settled on 0.7. I then extracted the identifiers of every file that had a weighting greater than 0.7 for this topic, and used the identifiers to build a simple web page that Kate and I could browse. I also included links back to RecordSearch so we could explore further.
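
Here is a rough sketch of that filtering step in Python. It assumes the older Mallet doc-topics format of alternating topic/proportion pairs, and the RecordSearch URL pattern is a guess, so adjust both as needed:

```python
TARGET_TOPIC = 1   # the 'passport ah student exemption...' topic
THRESHOLD = 0.7

matches = []
with open('A1-topics.txt') as f:
    for line in f:
        if line.startswith('#'):  # skip the header row
            continue
        parts = line.split()
        name = parts[1]  # the instance name -- here, the item identifier
        # Older Mallet versions list alternating topic/proportion pairs,
        # sorted by proportion; newer versions use one column per topic.
        # Adjust the parsing to suit your version.
        pairs = parts[2:]
        weights = {int(pairs[i]): float(pairs[i + 1])
                   for i in range(0, len(pairs) - 1, 2)}
        if weights.get(TARGET_TOPIC, 0.0) > THRESHOLD:
            matches.append(name)

# A bare-bones HTML page linking each item back to RecordSearch
# (the URL pattern is an assumption -- check against the live site).
with open('matches.html', 'w') as out:
    for barcode in matches:
        out.write('<p><a href="http://recordsearch.naa.gov.au/'
                  'SearchNRetrieve/Interface/DetailsReports/ItemDetail.aspx'
                  '?Barcode={0}">{0}</a></p>\n'.format(barcode))
```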

Browse the full list

It’s a pretty impressive result. Instead of fumbling with the uncertainties of keyword searches, we now have a list of more than 1,300 files that are clearly of relevance to Invisible Australians. There are a few false positives, and there are likely to be other files that we’ll have missed altogether, but now we have a much clearer picture of the types of files that are included and how they are described.

And that was just my first attempt, simply using the default settings. I’m now starting to play around with some of Mallet’s configuration options to see what sort of difference they make. I’m also keen to try out Gensim, a topic modelling package for Python.

I’m really excited about the possibilities of these sorts of tools for analysing the contents of archival descriptive systems, something I mentioned in my Digital Humanities Australasia paper. Much more to come on this, I suspect…

the real face of white australia

In many of the presentations I’ve given in recent times I’ve managed to include a question raised by Tim Hitchcock in his chapter in The Virtual Representation of the Past. Tim asks:

What changes when we examine the world through the collected fragments of knowledge that we can recover about a single person, reorganised as a biographical narrative, rather than as part of an archival system?

The idea of turning archival systems on their head to expose the people rather than the bureaucracy is what motivates Kate Bagnall and me in our attempts to make the Invisible Australians project a reality.

Invisible Australians aims to liberate the lives of those who suffered under the restrictions of the White Australia Policy from the rich archival holdings of the National Archives of Australia and elsewhere.

We always knew that the portrait photographs, included on a range of government documents, would provide a compelling perspective on these lives, but we weren’t quite sure how we were going to extract them. Up until last weekend, I’d assumed that we’d develop a crowdsourcing tool that contributors would use to mark-up the photos.

Now I’m not so sure.

In the space of a couple of days I’ve extracted over 7,000 photographs and built an application to browse them — here is the real face of White Australia.

How did I do it? Paul Hagon, at the National Library of Australia, gave a presentation last year in which he explored the possibilities of facial detection in developing access to photographic collections. The idea lodged in my brain somewhere, and a few days ago I started poking around to see how practical it might be for Invisible Australians.

It didn’t take long to find a Python script that used the OpenCV library to detect faces in photographs. I tried the script on a few of the NAA documents and was impressed — there were a few false positives, but the faces were being found!

So then the excitement kicked in. I modified the script so that instead of just finding the coordinates of faces it would enlarge the selected area by 50px on each side and then crop the image. This did a great job of extracting the portraits. I also tweaked a few of the settings to try and reduce the number of false positives. Eventually, I developed a two-pass system that repeated the detection process after the image had been cropped and its contrast adjusted. This seemed to weed out a few more errors. You can find the code on GitHub.
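
The actual two-pass script is in the GitHub repo; as a rough illustration of the core idea, here is a minimal sketch of a single detection pass using OpenCV’s Haar cascade classifier (written against the modern cv2 bindings, not the original code):

```python
import cv2

# OpenCV ships with pre-trained Haar cascade classifiers; the path to
# the frontal-face XML will depend on your installation.
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def extract_faces(image_path, margin=50):
    """Detect faces in a page image and crop each one out with a margin."""
    image = cv2.imread(image_path)
    if image is None:  # unreadable or missing file
        return []
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:
        # Enlarge the detected region by `margin` pixels on each side,
        # clamped to the edges of the page image.
        x1, y1 = max(x - margin, 0), max(y - margin, 0)
        x2 = min(x + w + margin, image.shape[1])
        y2 = min(y + h + margin, image.shape[0])
        crops.append(image[y1:y2, x1:x2])
    return crops
```

A second pass would then re-run the detector over each contrast-adjusted crop and discard any candidates that no longer register as faces.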

Once the script was working I had to assemble the documents. I already had a basic harvester that would retrieve both the file metadata and digitised images for any series in the NAA database. Acting on Kate’s advice, I pointed it at series ST84/1 and downloaded 12,502 page images.

All I then had to do was loop the facial detection script over the images. Simple! The only problem was that my 3-year-old laptop wasn’t quite up to the task. As its CPU temperature rose and rose, I was forced to employ a special high-tech cooling system.

Keeping my laptop alive...

But after running for several hours, my faithful old laptop finally worked its way through all the documents. The result was a directory full of 11,170 cropped images.

The results

There were still quite a lot of false positives, so I simply worked my way through the files, manually deleting the errors. I ended up with 7,247 photos of people. That’s a strike rate of nearly 65%, which seems pretty good given that the classifier doing the actual facial detection was probably trained on conventional photographs rather than on the mixed-format documents I was feeding it.

Then it was just a matter of building a web app to display the portraits. I used Django for the backend work of managing the metadata and delivering the content, while the interface was built using a combination of Isotope, Infinite Scroll and FancyBox.
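
The model layer behind something like this can be very simple. Here is a minimal sketch of the kind of Django model that could drive the browser (the field names are my guesses, not the project’s actual schema):

```python
from django.db import models

class Portrait(models.Model):
    # Field names here are illustrative, not the project's actual schema.
    barcode = models.CharField(max_length=20)         # RecordSearch item barcode
    page = models.IntegerField()                      # page within the file
    image = models.ImageField(upload_to='portraits')  # the cropped face
    source = models.ImageField(upload_to='pages')     # the full document image

    def recordsearch_url(self):
        # Link back to the item in RecordSearch (URL pattern is an assumption).
        return ('http://recordsearch.naa.gov.au/SearchNRetrieve/Interface/'
                'DetailsReports/ItemDetail.aspx?Barcode={}'.format(self.barcode))
```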

It’s important to note that the portraits provide a way of exploring the records themselves. If you click on a face you see a copy of the document from which the photo was extracted. A link is provided to examine the full context of the image in RecordSearch. This is not just an exhibition, it’s a finding aid.

What next? There are many more of these documents to be harvested and processed (and many more still yet to be digitised). I will be adding more series as I can (though I might have to wait until I can afford a new computer!). I’d also like to explore the possibilities of facial or object detection a bit more. Could I train my own classifier? Could I detect handprints, or even classify the type of form?

In the meantime, I think our experimental browser helps us to understand why the Invisible Australians project is so important — you look at their faces and you simply want to know more. Who are they? What were their lives like?

UPDATE: For more on the photos and the issues they raise, see Kate Bagnall’s posts over at the Tiger’s Mouth.

Hacking a research project

Amongst the holdings of the National Archives of Australia are some of the most visually arresting documents you’ll see — thousands and thousands of forms from the early decades of the twentieth century, each with a portrait photograph and palm print, each documenting the movements of a non-white resident. Along with many other certificates, regulations, correspondence and case files, these forms are part of the massive bureaucratic legacy of the White Australia Policy.

These certificates allowed non-white Australians travelling overseas to re-enter the country. NAA: ST84/1, 1906/21-30

But these are more than just interesting looking pieces of paper, they are snapshots of people’s lives. The forms capture data about an individual’s place of birth, physical characteristics and more. Over time a person might have submitted several of these forms, so by bringing them together we could trace their history, we could map their journeys — we could even watch them age.

The system which sought to render non-whites invisible has captured and preserved the outlines of their lives. By extracting and linking this data we could build a picture of another Australia, an Australia in which non-white residents lived, loved, struggled and succeeded, despite the impositions of a repressive regime.

I talked about these records at the AAHC conference last year, inspired in part by Tim Hitchcock’s chapter in The Virtual Representation of the Past. Hitchcock argues that technology can allow us to restructure archives, looking beyond institutional hierarchies to the lives of the individuals contained within:

What changes when we examine the world through the collected fragments of knowledge that we can recover about a single person, reorganised as a biographical narrative, rather than as part of an archival system?

I don’t know, but I’d like to find out.

During my AAHC talk, Dave Lester suggested that the extraction of data from these forms might make a good crowdsourcing project. It’s a great idea. As you can see, the data is generally well-structured and legible, so it should be possible to construct a simple series of forms that would allow volunteers to transcribe the data. The next stage would be to try and match identities across forms. That’s more complicated, but projects such as Tim Hitchcock’s London Lives show how users can construct identities by connecting a range of historical documents.

Then there are connections to resources outside of the archives — photographs, local histories, newspapers, genealogies, cemetery registers and more. By keeping our system open and extensible, and by working with others to help them expose their information in standard ways, it should be possible to develop the framework for an evolving mesh of biographical data.

So, how do we get started? This is the point when you usually have to start thinking about money — how can I fund this? In Australia that generally means a journey into the arcane world of the Australian Research Council. The ARC suffers from all the problems of a peer-reviewed system, but added to this is a rather antiquated notion of what research is.

In the rules covering each of the main schemes it’s clearly stated that the ‘compilation of data’ and the ‘development of research aids or tools’ are not supported. I spend part of my life working for the Australian National Data Service, an organisation that seeks to highlight how the sharing and reuse of data can open up new research possibilities. The ARC, however, seems to think that data has little value beyond its original research context.

Of course you can still mount a case for such activities. Applicants for a ‘Discovery’ grant can argue that data creation is integral to their project and provide details of the ‘specific research questions to be addressed’. But what if you don’t yet know what the questions are? Part of the point of a project such as this is to try and find out what questions we are able to ask. Until we start to compile, link and explore the data, the ‘specific research questions’ will be little more than convenient fictions, dreamt up to satisfy the prodding of peer reviewers.

Tom Scheinfeldt wrote a fantastic blog post recently, responding to concerns about the failure of many digital humanities projects to make arguments or answer questions. Drawing examples from the history of science, Tom argues:

we need to make room for both kinds of digital humanities, the kind that seeks to make arguments and answer questions now and the kind that builds tools and resources with questions in mind, but only in the back of its mind and only for later. We need time to experiment and even… time to play.

The ARC does not fund play.

You might imagine that the ARC’s infrastructure funding scheme would offer more hope for a project such as this. And yes, there are many worthy projects involving databases and online tools that have been supported in this way (and I have benefited from some of them!). But it seems that in the minds of research funders infrastructure is always BIG. Grants start at $150,000, and applications are expected to involve multiple institutional partners. Projects have to be scaled up to fit the ARC’s definition of infrastructure, often resulting in complex, lumbering, long-term projects whose products are out of date by the time of their release.

There is no room in our current infrastructure models for agile, innovative, user-focused digital toolmakers seeking small amounts to experiment with apps, prototypes, datasets or visualisations. I often look with envy upon the US National Endowment for the Humanities Digital Humanities Start-Up Grants.

In any case, neither I nor my partner in this endeavour, Kate Bagnall (@baibi), is currently in an academic position, so our chances of gaining any sort of research funding are next to none. We have the expertise — Kate has spent many years researching Australian-Chinese families and knows the records back-to-front, while I just can’t help playing with biographical data — but is that enough? How can you mount an ongoing research project without institutional support, research funding and the various badges and signifiers of academic authority?

I don’t know that either, but I have some ideas.

Mrs Ah Yin Pak Chong. NAA: ST84/1, 1907/321-330

I didn’t manage to get a contribution together for Dan Cohen and Tom Scheinfeldt’s crowdsourced-in-a-week book, Hacking the Academy, but watching the process from afar I did begin to wonder about how we might hack the way we build and run major research projects. This is what I have in mind:

  • To strip down the large, lumbering beasts and design projects that are modular and opportunistic — able to grow quickly when resources allow, to bolt on related projects, to absorb existing tools.
  • To follow the data freely across technological and institutional boundaries, developing open networks that invite participation and use.
  • To develop a floating pool of collaborators, both inside and outside of academia, who are able to come and go, contributing whatever and whenever they can.
  • To make everything public, accessible and standards-compliant, so that even if the project stalls it could be picked up and developed by someone else.

Most of all I just want to be able to do it. I don’t want to second-guess the ARC. I don’t want to spend months negotiating with potential partners or begging for an institutional home. I want to build, experiment and play. I want to make a start.

So that’s what we’re going to do.

We have a topic, plenty of raw materials, some basic principles and the beginnings of a plan. We even have a name — Invisible Australians: Living under the White Australia Policy.

As the project develops, I’ll be blogging here about some of the technical stuff, while Kate will be exploring the content over at the tiger’s mouth. I hope to have a prototype of the transcription tool ready to demo at THATCamp Canberra, while Kate is already at work putting together guides on using the records and developing an Omeka site that follows a number of Chinese-Australian families through the archives.

Can we hack together a major research project? Let’s find out.