Imani Danielle Mosley

music historian | digital & public humanist | bassoonist

A Notre Dame of the Future: Video Games, Digital Mapping, and Sonic Space

[Before I begin, if you’re interested in my thoughts on Notre Dame, musicology, and history as weapon, take a look at my Medium post]

By now, you’re probably aware of the fire at Notre-Dame de Paris on Monday. But what you might not be aware of are the plans to rebuild. It’s important to note that Notre Dame, like many Gothic cathedrals, has a history of ruin and rebuilding. Medieval parts of the church co-existed with additions and replacements from later centuries. There’s no reason to believe that Notre Dame will not be repaired, especially with the outpouring of donations committed to the reconstruction. What I’m concerned about is how that rebuilding will come about, the role that digital humanities plays in it, and what, perhaps, we have lost forever.

Notre Dame has been thoroughly documented, and there are three major entities that have not only mapped the space but may be crucial in its rebuilding: Ubisoft’s Assassin’s Creed Unity, Disney’s The Hunchback of Notre Dame, and the digital mapping project Mapping Gothic France. An unlikely trio if ever there were one, but all three may hold keys to a restored Notre Dame.

Notre Dame from Assassin’s Creed Unity, gameplay from Emil4Gaming

Assassin’s Creed Unity is the eighth installment in Ubisoft’s franchise. Released in 2014, it is set in eighteenth-century Paris during the French Revolution, with Notre Dame at its heart. In order to replicate the cathedral, level artist Caroline Miousse spent two years studying and modeling it: “80% of my time was spent on Notre Dame.” Those familiar with the game know its attention to detail in replicating historical places, but the wealth of records of Paris’s city plans, as well as of Notre Dame itself, gave the artists more material than most to work with in creating what they called “a better Paris.” Now, people are looking to Ubisoft and its digital mapping to help with reconstruction, a reconstruction grounded in the archives. Ubisoft has also pledged €500,000 (around $565,000) toward the rebuilding and is making Unity free to download for the next week. So if you want to see all of that work for yourself, check it out.

Disney has also pledged $5 million toward the rebuilding to help “the Hunchback’s home.” Like Ubisoft, Disney undertook a large mapping project in order to faithfully recreate Notre Dame for its 1996 animated film adaptation of Victor Hugo’s novel. Disney had a team in Paris that created all of the large-scale animations of Notre Dame, spending time both inside and outside the cathedral and covering all angles, hidden and visible. (For more on just how much study the animators devoted to Notre Dame, turn on the audio commentary on the DVD version.)

But what may be the most influential project for the rebuilding is Mapping Gothic France, out of Columbia University and Vassar College. Led by art historian Andrew Tallon, the project painstakingly mapped Notre Dame, along with other sites in France, using laser scanning. (Sadly, Tallon died last year.) This project is one of many arising out of spatial humanities, a growing digital humanities subfield that uses various forms of digital mapping. One such example is Visualizing Venice, a mapping project from the art history department at Duke University that creates digital models of past Venices using archival material. As these projects multiply, we see in Notre Dame a perhaps unintended use for them: not only to look into the past but to preserve the past into the future.

These projects provide optimism for the future of Notre Dame, spurred on by various technologies. But they are concerned with the architectural and structural elements of the cathedral. And just as cathedrals are a “symphony in stone,” as Hugo described Notre Dame, they are also places of immense sonic richness. And there is a fear that the acoustics of Notre Dame that inspired Léonin and Pérotin and helped bring about Notre Dame polyphony will never be recovered. In Michael Cuthbert’s piece for the Los Angeles Times, he writes about how we “still lack the technologies to re-create the sound of historic spaces that have been transformed, or destroyed.”

Historical acoustics and their influence on the music created in spaces are not discussed nearly as much as historical building. And Notre Dame, like St Mark’s in Venice, had an acoustic that engendered a particular type of musical composition. Polyphony, as it is understood in Western art music, is intimately tied up with acoustics; a space’s acoustics dictate how intelligibly voices singing different musical lines travel to our ears. And acoustics are affected by bodies, materials, temperature, all the things that make spaces so unique and also make them so difficult to recreate. My own work on English churches and acoustics confronts these same issues and struggles to provide definitive answers.

My hope is that just as the Notre Dame fire has shown how incredibly useful spatial humanities and mapping can be, it will also raise awareness around addressing the sonic as well as the architectural. We musicologists, especially, sometimes forget about sound (a strange thing to say, I know), but the history of sonic spaces is integral to our understanding of the creation of music. We talk about not having the ears of past listeners, which is true; we never will. But those spaces give us acoustical cues to the performative and resonant life of the past. And here, now, we have a timely reminder of what can happen when we do not prioritize acoustics. I grieve the loss of Notre Dame’s sound, a sound I experienced when I first went in 2010. It was with Notre Dame that I first started recording the insides of churches (bells, organs, and so on), something that I’ve continued to this day. My hope is that the new Notre Dame will contribute to its 800-year sonic history in ways that I can’t begin to fathom, and that in some distant future, someone will be able to hear with my ears.
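(A small technical coda for the curious: the physics behind “bodies, materials, temperature” can be sketched with Sabine’s classic reverberation formula, RT60 ≈ 0.161·V/A, the standard first approximation. Every figure below is an illustrative guess, not a measurement of Notre Dame; the point is only to show why swapping materials or adding bodies changes how long a space rings.)

```python
# Sabine's approximation for reverberation time: RT60 = 0.161 * V / A,
# where V is room volume (m^3) and A is total absorption (surface area
# times absorption coefficient, summed over surfaces).
# All numbers below are illustrative guesses, not measurements.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

bare_stone = [(40000, 0.04)]                 # reflective limestone shell
with_bodies = [(40000, 0.04), (1000, 0.45)]  # add an absorptive congregation

print(rt60_sabine(80000, bare_stone))   # about 8 s: a very "wet" space
print(rt60_sabine(80000, with_bodies))  # about 6.3 s: same walls, new bodies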

Breaking the Doodle: Bach, AI, and our quest for the human

Before I start this post, I just want to address where I've been. Since my last post, I finished my dissertation and will be defending it next week (!!!). Needless to say, that took all of my focus and effort. I've tried as much as I could to post interesting things over at @humanistmachine, but I'm hoping that with this post, I will be able to get back into the regular swing of things. Thank you to everyone who has written to tell me how much they have enjoyed my writing; it is always noted and appreciated.

———

Bach.

Our Eternal (musical) Father in Heaven. My birthday twin (happy birthday to us!). Loathed by every musician who has to breathe in order to perform. Every scientist & mathematician's favorite composer. 269 years after his death, we are still as enthralled with him as ever, convinced that hidden within his compositions are the secrets of life, death, and the universe. I will admit my deep love for Bach before I continue, to make my position clear: I have his family crest tattooed on my right arm. So my birthday is often spent thinking about Bach a fair deal (something I hated as a child; it's my birthday too, damnit). Today was no exception. I woke up this morning to news of today's Google doodle in honor of Herr Bach: an interactive game of sorts, created using machine learning, in which you could input a soprano melody line, crank up the machine, and watch as it generated some good old-fashioned Bachian eighteenth-century counterpoint.

In that moment, I had no idea that I would spend the entirety of my day talking about it but, honestly, I should have known better.

Today’s Google Doodle, in honor of Bach’s 334th birthday

My copy of the Bach-Riemenschneider, no house is complete without it

The doodle is adorable. It even comes with a Wendy Carlos Switched-on-Bach-style version that is equally adorable (though sadly there's no mention of that seminal work and its history). And on its face, it's pretty remarkable. The start-up graphics tell you how the doodle was created and that 306 of Bach's compositions were used as a dataset…not bad (only Papa Bach can give us those kinds of data points, right?). But like the constant pedants we are, my friends — music theorists, musicologists, performers, etc. — immediately saw the doodle as a challenge, with the intention of breaking it. What does that look like, exactly? My friend and colleague (who teaches counterpoint) plugged in the chorale tune "Erhalt uns, Herr, bei deinem Wort" (number 72 in the Bach-Riemenschneider book of 371 harmonized chorales, if you're following along) to get a two-measure result that he then graded (he was grading counterpoint homework anyway). Another friend put in her own soprano line, nothing crazy, but the result ended strangely, with an F-sharp in the soprano sounding at the same time as an F-natural in the bass. Other friends put in well-known tunes for their soprano melody lines ("All Star," "Baby Shark," and so on) while theorists on social media dug in their heels, complaining about the game's inability to handle cross-relations, part-writing, and other staples of contrapuntal writing.
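That soprano-against-bass clash is easy to mechanize, for anyone who wants to try breaking the doodle more systematically. Below is a toy Python check for simultaneous cross-relations, two voices sounding the same letter name with different accidentals at once; the pitch-spelling convention is my own, not the doodle's, and real analysis would need far more context.

```python
# Toy check for simultaneous cross-relations: two voices sounding the same
# letter name with different accidentals at once (e.g. F#5 against F3).
# Pitch strings like "F#4" are my own minimal convention, not the doodle's.

from itertools import combinations

def parse(pitch):                       # "F#4" -> ("F", "#"); "F3" -> ("F", "")
    letter, rest = pitch[0], pitch[1:]
    accidental = rest.rstrip("0123456789")
    return letter, accidental

def cross_relations(chord):
    """chord: list of spelled pitches sounding together, one per voice."""
    clashes = []
    for a, b in combinations(chord, 2):
        (la, acc_a), (lb, acc_b) = parse(a), parse(b)
        if la == lb and acc_a != acc_b:
            clashes.append((a, b))
    return clashes

print(cross_relations(["F#5", "D5", "A3", "F3"]))  # [('F#5', 'F3')]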

Counterpoint analysis graded, courtesy of Google and Douglas Buchanan

This post isn't really about whether or not machine learning did a good job in this exercise. I'm not a theorist, and though I'm trained well enough to talk extensively about contrapuntal writing and when it is and is not successful, I'm not quite concerned with that. My hat is off to the team at Google Magenta (a project with which I'm familiar), who used TensorFlow.js and other tools to create this doodle. If you have read my previous posts, you know that this type of project is similar to the kind of work I was doing at NPR, and I know what goes into creating something like this. 306 compositions is a relatively small dataset, generally speaking (in my opinion, not nearly enough for supervised learning), and I think that it has provided a lot of people with some enjoyment today.

What I am interested in is the AI community's, dare I say it, obsession with Bach. Because I believe that the continued use of Bach as a jumping-off point for artificial intelligence reveals some very important things, namely how we think about Bach, how we think about the compositional process, and AI's ethical relationship to art and even humanity. This doodle is not the first example of using neural networks and machine learning to replicate Bach. An article in MIT Technology Review titled "Deep-Learning Machine Listens to Bach, Then Writes Its Own Music in the Same Style" asks readers to imagine trying to distinguish music written by Bach from music written by a neural network trained in Bach's style. The neural network, called DeepBach, is the work of Gaetan Hadjeres and Francois Pachet at the Sony Computer Science Laboratories in Paris. Like the Google Magenta team, they started with 352 of Bach's chorales, but in order to expand the dataset they transposed them, and the various permutations yielded a dataset of 2,503 chorales. Their result, unsurprisingly, is far more nuanced than that of the Google doodle. And in their findings, participants (who included professional musicians, music students, and the like) judged the AI-generated music to be by Bach about 50% of the time, a very good number for such a project. While the research behind this is interesting, the text in the article sneakily gives away why Bach is the subject of such fascination: "These compositions have attracted computer scientists because the process of producing them is step-like and algorithmic. But doing this well is also hard because of the delicate interplay between harmony and melody."
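The transposition trick is simple enough to sketch. Here is a rough Python rendering of the idea: shift each chorale to every key that keeps all four voices inside plausible ranges. The MIDI range limits are my own assumptions; the DeepBach paper's exact constraints surely differ.

```python
# Sketch of dataset augmentation by transposition: generate every shift of a
# chorale that keeps all voices inside assumed vocal ranges. The range limits
# (C2 bass floor, A5 soprano ceiling) are illustrative assumptions.

LOWEST, HIGHEST = 36, 81   # assumed MIDI floor and ceiling across all voices

def transpositions(chorale):
    """chorale: list of voices, each a list of MIDI note numbers."""
    notes = [n for voice in chorale for n in voice]
    lo, hi = min(notes), max(notes)
    return [
        [[n + semitones for n in voice] for voice in chorale]
        for semitones in range(LOWEST - lo, HIGHEST - hi + 1)
    ]

toy = [[67, 69, 71], [62, 64, 65], [55, 57, 59], [43, 45, 47]]  # SATB fragment
print(len(transpositions(toy)))  # 18 shifts keep this fragment in range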

Won’t you please think of the kittens?

Let's start with the first part of this thought. In a conversation I had today with a data scientist at Duke and our music librarian, I used the term "formulaic," a term I instantly regretted. I was not trying to say that Bach was formulaic in his compositions (something our librarian clocked me on immediately), but that this understanding of counterpoint as rule-driven is problematic. Outside of musicology (and I mean that with a capital M, including theory and analysis), and in some places within our discipline as well, there persists this idea of the mathematical Bach, the orderly Bach, the Bach who embodied the centuries-old rules of contrapuntal writing (if you're a music major, then you've seen the Bach parallel-fifths kitten meme). Unsurprisingly, it is always more complex than that. Yes, from Gradus ad Parnassum onward, we music students learn the dos and don'ts of contrapuntal writing at our peril. What's good for Fux is good for us, nein? But as I told our young data scientist today, this is and has always been a baseline (oh god, sorry for that pun) from which we can see and appreciate the creativity involved in the composition of all things. I had one counterpoint teacher in college who often said that you can't break the rules if you don't know the rules. And Bach was a rule-breaker. There's no space here to get into the dizzying landscape that was Bach's compositional process. But it is important to place that brilliance alongside Bach's very real, very mortal life as a working composer. Even with this context, the mathematical Bach persists, enough to connect at least his chorale compositions to the modern idea of algorithms.
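Since the kitten meme is about exactly one such rule, here is what that rule looks like as code: a deliberately naive Python check for consecutive perfect fifths between two voices. Real part-writing analysis (hidden fifths, voice crossing, spelling) is far fussier, which is rather the point about rules being only a baseline.

```python
# Naive parallel-fifths detector: flag any two consecutive sonorities where
# both voices move and the interval between them stays a perfect fifth
# (mod 12, so compound fifths count too; a real checker would be stricter).

def parallel_fifths(upper, lower):
    """upper, lower: equal-length lists of MIDI note numbers."""
    hits = []
    for i in range(len(upper) - 1):
        now = (upper[i] - lower[i]) % 12
        nxt = (upper[i + 1] - lower[i + 1]) % 12
        moved = upper[i] != upper[i + 1] or lower[i] != lower[i + 1]
        if now == 7 and nxt == 7 and moved:
            hits.append(i)
    return hits

print(parallel_fifths([67, 69], [60, 62]))  # [0]: C-G moving to D-A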

But what is to be said for melody and harmony as the second part posits? In Forkel's biography of Bach, he writes extensively on Bach's harmonic writing. In the following passage, he outlines how freedom is an essential quality of Bach's compositional process:

In such an interweaving of various melodies which are all so singing that each may, and really does, appear in its turn as the upper part, Bach's harmony consists … In his compositions in four parts, you may sometime even leave out the upper and lower part, and still hear in the two middle parts an intelligible and pleasing music. But to produce such harmony, in which the single parts must be in the highest degree flexible and yielding towards each other if they are all to have a free and fluent melody, Bach made use of quite peculiar means, which had not been taught in the treatises of musical instruction in those times, but with which his great genius inspired him.

Leaving the genius part at the end alone for now, this description of Bach's contrapuntal writing is as far away from step-like and algorithmic as possible. It is not quite as plug-and-play as the doodle might suggest. And this description of Bach comes from the beginning of the nineteenth century (contributing to the resurgence in Bach scholarship); it is not some twenty-first-century portrait. Perhaps the persistence of the mathematical Bach stems from Douglas Hofstadter's popular work Gödel, Escher, Bach, though the book itself is more a study of cognition than anything else. Rahul Siddharthan, in his article on Bach and mathematics, explains the appeal of Bach to scientists as follows:

The music of Bach seems to have a particularly wide following among scientifically or mathematically minded people … One could point to several reasons for this — the highly regular structures, the complex counterpoint, the use of simple themes to construct elaborate structures.

While we've already tackled some of these points earlier, this, I think, gets to the heart of the issue. How do we explain what music is, what it does, why it exists? Evolutionary biologists have been struggling with some of those questions for a long time (not to mention philosophers and mathematicians). There is a deep desire within humans to connect with pattern-making and repetition; we know this. And with an art as non-representational as music, coming from who knows where, the mere presence of patterns gives us a grounding, a place to start from, a way to understand the non-understandable. And isn't that at the heart of artificial intelligence? The question has always been whether or not AI can re-present the most human aspects of our intelligence: creating art or music or humor. All of the other stuff is relatively simple by comparison. Language is just networks, and networks are the computer's bread and butter (though deep learning has shown us that sometimes writing is hard, too). But to replicate the work of Picasso or Pollock? Bach or Mahler? That's the good stuff, that's the threshold. So with Bach, supposedly, you get the sublime ethereality of a brilliant composer (very human) with the math of a calculator. And this provides some really interesting questions and ethical debates for us to ponder. Does this desire to re-create art through AI act as another way for us to understand the unknowns of our own human mind? What purpose does this exercise serve? It can't merely be a digital party trick; it has to be at the service of something greater. I'm no philosopher, so I won't attempt these larger questions, but as we approach (or are in) the posthumanist age, I think there is a desire, nay a need, to know more than ever about what makes us human. Using AI is perhaps the long way round, but it's a desire with which I can empathize.

This is all to say that I doubt Bach will stop being the go-to for AI-generated music anytime soon. I personally would love to see the rationale behind a Mahler dataset or a Sibelius dataset, or really any composer to whom these ideas of rigidity are not so stubbornly attached (in truth, Schubert lieder could provide some fantastic data points). I think it would raise more questions than answers, more problems than solutions. But that's okay, because it's one of the things that makes art, art.

"hello? housekeeping!"

This is just a short post to note that HitM is e x p a n d i n g !

We've added a new twitter account at @humanistmachine to serve a few different purposes: yes, it will aggregate all of my posts here, but it won't be just one of those twitter feeds; I plan on talking to all of you! Twitter is where I hear about a lot of new developments around technology, musicology, digital humanities, and everything in-between, & I want this twitter account to be a discursive space for all of that. Plus, as it takes a while to craft these posts, I'd like the dialogue to continue in the in-between times. Honestly, there's so much going on that there needs to be some kind of catch-all, you know?

We're also a channel on Apple News! So if you digest your news that way, go on and add us! 

That's it for now…when we return: taxonomy, biases, & why is music so hard to talk about?

machines care if you listen

A lot has happened since my last entry. The questions, the conundrums…they grow. On Twitter we laugh, we talk, we use the emoji of a pensive face, raised brow, and a hand up to its chin (you know the one: 🤔), and we catch the eye of Amazon’s taxonomist (#nometadatanofuture). But something that lets me know I’m on the right track with all of this is the steady stream of articles about machine learning, metadata, and music.

Pitchfork’s article from last week, “How Smart Speakers Are Changing the Way We Listen to Music,” asks what happens when we remove the friction from listening:

Though any streaming platform user is deeply familiar with mood and activity-geared playlists, the frictionless domestic landscape of voice commanded speakers has led to a surge in such requests. “When people say, ‘Alexa, play me happy music,’ that’s something we never saw typed into our app, but we start to see happening a lot through the voice environment,” Redington explains.

The article goes on to talk about how music streaming services determine what “happy” is (metadata) and, briefly, how that happens (machine learning! human curation!), but what it doesn’t do is discuss what it means to have machines (and humans) decide for the rest of us what happy is, or if and how it manifests itself in a song. I find the whole idea incredibly invasive, even more so now, after doing algorithm testing.

The first algorithm I tested was “tonal/atonal,” an algorithm supposedly designed to tell us if a song is…I don’t know. If the song has a key center? Perhaps. But this seems beyond useless, as the majority of music would be classified in some way as “tonal.” In explaining this to my co-workers, I invoked the most typically musicological definition of atonality that I could, accompanied by a little Pierrot Lunaire. But NPR's music library is not made up solely of Schoenberg and Berg. I do not know what music went into the algorithm's original model, so I have no idea on what it was trained. Regardless, it had a very hard time distinguishing bad singing from the absence of a tonal center. But Machine Learning!, right? And of course, there was non-Western music in my dataset, which raised no end of problems. My parameters were wide for NPR's sake, but I couldn't help thinking that the sheer existence of this algorithm highlights a human (Western) bias around how we supposedly listen.
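For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of probe I mean, using Essentia’s key detector; its key-strength output is a crude proxy for how strongly a track implies a key center. This is not the tonal/atonal classifier itself (that relies on pretrained models), just the nearest hand-rollable stand-in, and the file path is hypothetical.

```python
# Minimal probe: use Essentia's key detector and treat its "strength" output
# as a rough proxy for how strongly a track implies a key center.
# Not the actual tonal/atonal classifier; "track.wav" is a placeholder path.

from essentia.standard import MonoLoader, KeyExtractor

audio = MonoLoader(filename="track.wav")()
key, scale, strength = KeyExtractor()(audio)

# Pierrot Lunaire should score low here; a pop track high. Out-of-tune
# singing can also score low, which is exactly the confusion I observed.
print(key, scale, round(float(strength), 2))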

This is all far away from the “happy” algorithm that Eric Harvey mentions in his Pitchfork article (note: I will be assessing that algorithm next week), but all of these things are interconnected. We are letting machines make “decisions” about what something is, as if that could be determined beyond a few clear examples (BPM, meter, etc.), and in doing so, this is reshaping the way we listen, whether we know it or not. I myself have smart speakers (three Echo Dots scattered throughout my house), but as in all other circumstances, my listening is curated and decided solely by myself, meaning no asking Alexa for “happy” music or anything of that ilk (though that will be a fun experiment). This hearkens back to Anna Kijas’ keynote in my last post. At the moment, I’m writing about programmer bias and canon replication in streaming music: what happens when I ask Alexa to play music like Debussy, or “radiant” classical music?

What you hear is no longer in your control; your listening experience has become frictionless, for lack of a better term. I think, subconsciously, many classical music listeners feel this (without, possibly, knowing just what it is they feel), because at the end of the day, classical music listeners are collectors and curators. However, I don’t think people see this lack of friction as a problem about which to be concerned. I do. Digital streaming is the way we consume music now (I made a point of using the word “consume,” which comes both from capitalist terminology and from eating and devouring), and if we don’t answer these questions and address these issues now, they may become impossible to rectify.
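To see the gap between those “clear examples” and a mood tag, compare what it takes to compute one. Tempo falls out of the signal itself; below is a sketch with Essentia’s beat tracker (again, the file path is a placeholder).

```python
# Tempo is one of the "clear examples": it is a property of the waveform.
# Sketch using Essentia's RhythmExtractor2013; "track.wav" is a placeholder.

from essentia.standard import MonoLoader, RhythmExtractor2013

audio = MonoLoader(filename="track.wav")()
bpm, beats, confidence, _, _ = RhythmExtractor2013(method="multifeature")(audio)

# A BPM of 128 is a fact about the signal; "happy" is a claim about a
# listener. Conflating the two kinds of metadata is where the trouble starts.
print(round(float(bpm), 1), round(float(confidence), 2))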

My next big project is addressing NPR’s classical music metadata and library organization system and that’s a doozy.  I already have a lot to say, but I will go into it more next time. ❋

Exspiravit ex machina

Getting this started has proved more difficult than initially envisioned; who knows why. I say this because I have been completely overtaken by this work and the questions that have arisen from it, so, naturally, writing about it should be easy, right?

Right.

Let's start with a little background (or perhaps quite a bit of it). When I accepted this internship at NPR, I had no idea, truly, what I was in for. I've spent the last two years immersed in digital humanities and librarianship, realizing that this space was perfect for me. It was a weird combination of all the things I love; it addressed the many and myriad questions I had about being a scholar in the future, whatever that means; and it allowed me to focus on the things at which I'm really quite good: workflows, information management, metadata, academic technology, and so on. These were things that, over the years, I've noticed musicology as a discipline had little interest in (something that made no sense to me then and still doesn't), and this outlet was one I badly needed. So when I saw the call for interns and read the description, I knew it was something I needed to do. I had no idea just how much that decision would change my life.

Well that was dramatic.

My internship began and I learned about my daily tasks, things I was already aware of, such as ingesting new promotional music into our in-house database. After a few weeks of general intern training and practice working with our systems, I moved on to the real meat: music information retrieval (MIR) mood tagging. Now I had to do some research, you know, real scholarly stuff first. I read published papers by those foremost in the field, such as J. Stephen Downie, and read the documentation provided by Essentia, the algorithm library we use to assess our tracks. Scintillating stuff that led to the deepest of rabbit warrens. It was here that I learned about digital musicology, a term I had heard but with which I had not engaged. I am still learning quite a bit about it, but what I have gleaned so far is that…there are not a lot of musicologists involved in digital musicology. That might sound odd to you; it sounded odd to me at first. Let's spend a little time on that, shall we?

Digital musicology, along with music information retrieval, touches a number of fields: music theory, acoustics, music cognition/perception, music psychology, programming, music science, library studies, and more. I suggest the Frontiers in Digital Humanities page linked above for more information. And to be fair, there are musicologists engaged in this work, asking really interesting questions. But when MIR is put to work in the real world, say, in music streaming services, the specific tools that musicologists have as humanists are shelved in favor of the tools of theorists and programmers.

Enter yours truly.

In the process of undertaking a massive assessment of several MIR algorithms, I found myself asking lots of epistemological questions — humanist questions — that seemed unanswered. What is tonality, and how do we define it for an algorithm? What should be included in our training models? What biases are represented and replicated in our algorithms? Musicologist Anna Kijas touches on these things beautifully in her Medium post, taken from her keynote at the very recent Music Encoding Conference (at which my project supervisor was present), and I highly suggest reading it. I will get into all of the problems I have faced and am facing, but this is already quite long for a blog post. (I know, it's my blog, I can do what I like. Point taken.) Plus, I feel it's only fair to give those problems the space they deserve.