About Me

I'm a human being who can't decide whether they're a music scholar or a software developer. (Edit in April 2020: I figured it out; more to come later.)

Owner of This Present Mind, working on behalf of Répertoire International de Littérature Musicale (RILM) and others.

Research Interests

Topics I'm interested in and have published or might publish about.

Multidimensional algorithms for music information retrieval.

Edit April 2020: This is still very cool, but it's no longer my thing.

I believe there is a fundamental disjunction between human and computer music analysis algorithms.

Humans (music theorists) develop rules-based classification systems to retrieve musical data, then invariably complicate the system with a wide range of intuitive features that are never systematically determined. This is effective for humans: given a handful of explicit rules, we're able to bootstrap our subconscious pattern-matching skills and learn to classify musical features efficiently. A fully formalized music theory would be useless either because of its vagueness or because of its complexity.

Computers use machine learning to develop their own classification systems for MIR. Because scholars want to achieve high effectiveness ratings for their classification algorithms, the computers are given far fewer musical characteristics than are formalized for use by humans. Yet it should be the responsibility of computers to sift through a tremendous range of musical features, discovering the rules that humans use subconsciously.

It's not the fault of scholars that we're still at this basic level. MIR researchers face the same obstacle as music perception researchers: nobody knows how to make a profit from this knowledge yet. Give Google an arbitrary photo and it will tell you what's in the photo, who is in the photo, where the photo was taken, and so on. I know it will be possible for a computer to classify formal function in music from an audio clip alone--as soon as it's profitable.
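To make the "far fewer features" point concrete, here is a minimal sketch of the kind of classifier this describes: a nearest-neighbour model handed only three numbers per excerpt. All feature values, labels, and excerpts below are invented for illustration; the point is how little information the computer is given compared with what a theorist actually uses.

```python
import math

# Hypothetical training data: each excerpt reduced to just three
# hand-picked features (tempo in BPM, mean pitch as a MIDI number,
# note density in onsets per second). Labels are invented examples.
TRAINING = [
    ((120, 72, 4.0), "dance"),
    ((60, 55, 1.5), "chorale"),
    ((140, 80, 6.0), "dance"),
    ((72, 60, 2.0), "chorale"),
]

def classify(features):
    """1-nearest-neighbour: give the excerpt the label of its closest neighbour."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(TRAINING, key=lambda example: distance(features, example[0]))
    return best[1]

print(classify((130, 76, 5.0)))  # lands nearest the "dance" excerpts
```

Three numbers per excerpt is far less than the feature vocabulary any theorist works with; the argument above is that the computer should instead be sifting a much larger feature space and discovering the rules humans apply subconsciously.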

Computer-specific strategies to manipulate music documents.

Edit April 2020: This is still very cool, but it's no longer my thing.

In some respects, computers have revolutionized the way people manipulate and handle data. This is true for some musicians (like those who use LilyPond), but the majority of musicians are happy with the "pencil on paper" interactions provided by mainstream music notation applications. I can sympathize with the approach taken by these mainstream applications: it's easy to think of situations where change in a widely-used application has been met with hostility by a significant proportion of its users.

Understandable though it is, we're missing out! If we recontextualize musical scores as data, the vast array of data visualization strategies are unlocked for musicians. Common music notation is optimized for certain musical characteristics; if we choose different characteristics, we might plot timbral change over time, for example. And it doesn't end at visualization--we can feed data back into the music document, changing timbral balances in ways that would be tedious on a conventional score. This would be a tremendous advantage for composers and analysts alike.
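As a sketch of what "score as data" could look like, here is one way to tabulate timbral change over time. The event format and the per-instrument "brightness" weights are entirely made up for illustration; the point is that once the score is plain data, a question common notation can't easily answer falls out of a few lines.

```python
# Hypothetical score data: (onset time in seconds, instrument) events,
# plus an invented per-instrument "brightness" weight.
EVENTS = [
    (0.0, "cello"), (0.5, "cello"), (1.0, "flute"),
    (1.5, "flute"), (2.0, "trumpet"), (2.5, "trumpet"),
]
BRIGHTNESS = {"cello": 0.2, "flute": 0.6, "trumpet": 0.9}

def brightness_over_time(events, window=1.0):
    """Average brightness of the events falling in each time window."""
    series = {}
    for onset, instrument in events:
        bucket = int(onset // window)
        series.setdefault(bucket, []).append(BRIGHTNESS[instrument])
    return {bucket: sum(vals) / len(vals)
            for bucket, vals in sorted(series.items())}

print(brightness_over_time(EVENTS))
# one value per second of music: the texture brightens as flute and trumpet enter
```

The same table could be fed into any plotting library, or fed back the other way: rewrite the events to rebalance the timbre, then regenerate the score.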


The characteristics of experiences.

Edit April 2020: This is still my thing.

We ourselves are subjects, and we experience everything as subjects. Everything we experience, everything we know, everything we think, is subjective; that is, it is known as part of the process of being a subject.

If we are unique people, we are unique subjects. If all our experiences happen through our own subjectivities, then our (perception of) reality is absolutely not shared with anyone else.

How can that be true? Humans certainly share experiences. If we're not literally sharing the same subjective experience, then what is the common element? How can people say they've listened to the same piece of music if the "piece of music" was created independently in each listener's head as a biological reaction to variations in air pressure? I have some partial answers, but I need to read more!

Other Interests

Topics I'm unlikely to publish about, but still interested in.

I live under the final approach for two runways at Canada's busiest airport; it's not that I "get to" look at airplanes, but that I'm constantly reminded of them! Airplanes are fascinating machines, though, and I've learned a lot about engineering from the way airplanes are designed and the way accidents are investigated.
Intercultural existence.
How a large portion of Toronto's population lives their daily lives.
Issues of translating concepts, and how it is that certain languages tend to be used for certain things. Who gets to control a language, why, and what they do with it. How dominant languages are rarely the "best" by any measure. Note that all these issues exist for both natural and programming languages.