Programme

9.00 – 9.20   Registration

9.20 – 9.30   Welcome

9.30 – 10.30   Paper session 1: Computers as music composers

10.30 – 11.00   Coffee

11.00 – 12.30   Workshop session 1: Systems for enhancing human composition

12.30 – 14.00   Lunch (several eateries are available in the vicinity of the university)

14.00 – 15.00   Paper session 2: Computer simulation of societies of music agents

15.00 – 16.00   Keynote lecture

16.00 – 16.30   Coffee

16.30 – 17.15   Workshop session 2: Computers as creative learning/analysis tools

17.15 – 18.00   Poster session

18.00   Conference dinner 

 


 

Abstracts

On Computer-Aided Musical Creativity. Eduardo Miranda

Every now and then I find myself with an earworm stuck in my mind’s ear. Sometimes these are recognizable tunes, rhythms, timbres, sound effects or orchestral passages from music that I have heard before. However, I am often unable to identify them clearly. It seems that most of my earworms are distortions of pieces of music that I might have heard before, or evoke only particular aspects of them.

Never mind that Mozart might have been able to write down entire pieces of music from memory after hearing them only once. The truth is that I probably am rather incompetent at retrieving music from my memory accurately. Is this an anomaly of my brain? Or is it something that a composer’s mind is supposed to do?

I am convinced that the means by which these earworms emerge in my mind’s ear are manifestations of some form of musical creativity. And so I have been developing my own ways to harness this creative process. An approach that has proven to be rather effective is to program computers to generate musical materials for my pieces. My interaction with these materials flushes all sorts of earworms out of my brain. These often evolve and mingle with new musical ideas, transformations, variations, and so forth. In this sense, the computer is a great tool for my creative process because I can program it to generate shedloads of musical materials for me to interact with.

Mind Pieces is the first large-scale symphonic composition where I made extensive use of computer-generated materials to compose entire movements. This talk will introduce some of the algorithms that generated musical materials for Mind Pieces and demonstrate how I engaged with these materials to compose it.

 

Artificial Worlds and the Simulation of Music Evolution. Marcelo Gimenes

In our research, music is seen as a process that results from the (inter)actions of various agents – musicians, listeners, etc. – and, therefore, as an adaptive dynamic system, for which computer modelling and simulation are especially useful. As a natural corollary, the artificial agent paradigm is helpful for studying the emergence and evolution of music styles, most notably because agent-based systems are well suited to simulating social interactions. In addition, agents can represent units of operation around which a myriad of issues can be observed, such as perceptive and cognitive models, the ability to communicate, to make decisions and to take actions in order to achieve specific goals.

This paper introduces the overall architecture of CoMA (Autonomous Musical Communities), an agent-based interactive music system designed to study music evolution in virtual environments. The musical style of a particular agent is defined as a complex object (memory) that stores a set of musical patterns (defined by parameters such as information related to pitch, rhythm, etc., and the relationships between them) and corresponds to the musical knowledge that an agent accumulates during its lifetime, according to its musical ontogenesis. Agents evolve their musical styles from the pieces with which they interact.
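
A speculative sketch of this idea in code: an agent’s memory is treated as a weighted store of musical patterns that grows as the agent interacts with pieces. The Agent class, the interval-based pattern extraction and the update rule below are invented for illustration and are not CoMA’s actual data structures.

    # Hypothetical illustration of an agent accumulating a "musical style"
    # as weighted patterns (here: pitch intervals) from the pieces it hears.
    from collections import Counter

    class Agent:
        def __init__(self, name):
            self.name = name
            self.memory = Counter()              # pattern -> accumulated weight

        def listen(self, piece):
            # extract simple patterns: consecutive pitch intervals (in semitones)
            for a, b in zip(piece, piece[1:]):
                self.memory[b - a] += 1

        def style(self, top=3):
            return self.memory.most_common(top)  # the agent's dominant patterns

    agent = Agent("agent_1")
    agent.listen([60, 62, 64, 62, 60, 67, 65, 64])   # MIDI pitches of piece 1
    agent.listen([60, 64, 67, 72, 67, 64, 60])       # MIDI pitches of piece 2
    print(agent.name, "favours intervals:", agent.style())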

In addition to that, CoMA’s five global models (decision making, perceptive, cognitive, generative and life-cycle) and two general principles (independence of execution and autonomy of the agents) will be described. Independence of execution means that the system can be run without the need for any user intervention. Autonomy, on the other hand, refers to the fact that the agents have the ability to make motivated decisions.

 

Computer Simulation of Musical Evolution: A Lesson from Whales. Steven Jan

Simulating musical creativity using computers needs more than the ability to devise elegant computational implementations of sophisticated algorithms. It requires, firstly, a definition of what might be regarded as music; and, secondly, an understanding of the nature of music – including its recursive-hierarchic structure and the mechanisms by which it is transmitted within cultural groups. To understand these issues it is fruitful to compare human music with analogous phenomena in other areas of the animal kingdom. Whalesong, specifically that of the humpback (Megaptera novaeangliae), possesses many similarities to human music (and, indeed, to birdsong). Using a memetic perspective, this paper compares the “music” of humpbacks with that of humans, and aims to identify a number of shared characteristics. These commonalities appear to arise partly from certain constraints of perception and cognition (and thus they determine an aspect of the environment within which the musemes (musical memes) constituting whale and human music are replicated), and partly from the social nature of musemic transmission. The paper argues that Universal-Darwinian forces give rise to commonalities of structure in phenomena we might regard as “music”, irrespective of the animal group – primates, cetaceans, birds – within which it occurs. The paper concludes by considering the extent to which whalesong might be regarded as creative, by invoking certain criteria used to determine this attribute in human music.

 

Generating Time: Rhythmic Perception, Prediction and Production with Recurrent Neural Networks. Andrew Lambert, Tillman Weyde and Newton Armstrong

In the quest for a convincing musical agent that performs in real time alongside human performers, the issues surrounding expressively timed rhythm must be addressed. Automatically following rhythm by beat tracking is by no means a solved problem, especially when dealing with varying tempo and expressive timing. When generating rhythms, existing interactive systems have ignored the pulse entirely, or fixed a tempo after some time spent listening to input. We take the view that time is a generative property to be utilised by the system.

This paper presents a connectionist machine learning approach to expressive rhythm generation, based on cognitive and neurological models. We detail a multi-layered recurrent neural network combining two complementary network models as hidden layers within one system.

The first layer is a Gradient Frequency Neural Network (GFNN), a network of nonlinear oscillators which acts as an entraining and learning resonant filter to an audio signal. GFNNs model the perception of metrical structures in the stimulus by resonating nonlinearly to the inherent periodicities within the signal, adding frequency information, and creating a hierarchy of strong and weak periods.

The GFNN resonances are then used as inputs to a second layer, a Long Short-Term Memory Recurrent Neural Network (LSTM). The LSTM learns the long-term temporal structures present in the GFNN’s output, namely the metrical structure implicit within it. From these inferences, the LSTM predicts when the next rhythmic event is likely to occur.
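
As a rough illustration of the resonance idea (not the authors’ code), the sketch below drives a bank of Hopf-style nonlinear oscillators with a periodic pulse train; oscillators whose frequencies match periodicities in the input grow in amplitude, and those amplitudes are the kind of features that would then be fed to the LSTM. The parameters and the Euler integration are simplifying assumptions.

    import numpy as np

    fs = 100.0                                   # control rate in Hz (assumed)
    t = np.arange(0, 8, 1.0 / fs)
    pulse = (np.sin(2 * np.pi * 2.0 * t) > 0.99).astype(float)   # ~2 Hz rhythm

    freqs = np.logspace(np.log2(0.5), np.log2(8.0), 48, base=2)  # oscillator bank
    z = np.full(len(freqs), 0.01 + 0j)           # oscillator states
    alpha, beta = -0.01, -1.0                    # Hopf parameters (illustrative)
    amps = np.zeros((len(t), len(freqs)))

    for i, x in enumerate(pulse):
        dz = z * (alpha + 1j * 2 * np.pi * freqs + beta * np.abs(z) ** 2) + x
        z = z + dz / fs                          # Euler step
        amps[i] = np.abs(z)

    # 'amps' is a time-frequency resonance pattern; its rows would serve as
    # input features for an LSTM that predicts the next rhythmic event.
    print(freqs[amps[-1].argmax()], "Hz resonates most strongly")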

We train the system on a dataset selected for its expressive timing qualities and evaluate the system on its ability to predict rhythmic events. These predictions can be used to produce new rhythms, forming a generative model. As such, the GFNN-LSTM model has the potential to be used in a real-time interactive system, following and generating expressive rhythmic structures.

 

The Infinite Jazz Ballad. Tom Parkinson

Whilst writing a collection of jazz studies, it occurred to me that I was following a set of normative and limited harmonic rules that could potentially be encoded. In order to codify these norms, I analysed hundreds of jazz standards (that could be performed as rubato ballads). The data collected was limited to the harmonic relationship between one chord and the next: that is, what the two chords are and the distance between them.

From this, I have made generative algorithms that work probabilistically to create a potentially infinite chain of chordal pairings, with each chord essentially building a new and hermetic harmonic relationship with the next. It is my objective to create, as simply as possible, computer-generated jazz. Slow-moving, tempo-less music appears to be the most convincing approach. Faster music in a steady tempo would require longer chains of harmonic relationships, as well as addressing other formal, phrasing, orchestrational and rhythmical elements, thus requiring significantly (prohibitively) more complex data and coding to achieve perceptual generic credibility.
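
A minimal sketch, assuming a first-order probabilistic model, of the kind of chord chain described above; the chord names and transition counts are invented for illustration rather than drawn from the analysed standards.

    import random

    # hypothetical transition counts: chord -> {next chord: count}
    transitions = {
        "Cmaj7":  {"Am7": 8, "Fmaj7": 5, "Dm7": 4},
        "Am7":    {"Dm7": 9, "Fmaj7": 3},
        "Dm7":    {"G7": 12, "Abmaj7": 1},
        "G7":     {"Cmaj7": 10, "Em7b5": 2},
        "Fmaj7":  {"Em7b5": 4, "Dm7": 5, "G7": 3},
        "Em7b5":  {"A7": 6},
        "A7":     {"Dm7": 7},
        "Abmaj7": {"G7": 3},
    }

    def next_chord(current):
        options = transitions[current]
        chords = list(options)
        weights = [options[c] for c in chords]
        return random.choices(chords, weights=weights, k=1)[0]

    chord = "Cmaj7"
    for _ in range(16):                  # an "infinite" ballad, truncated here
        print(chord, end="  ")
        chord = next_chord(chord)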

In keeping with the desire for reductive simplicity, I have restricted the orchestration to broken piano chords plus occasional (ostensibly randomly timed) soft hits of a riveted ride cymbal and single breathy trumpet notes. These elements are sampled and then selected via SuperCollider.

Much of the experimentation with jazz and algorithmic composition to date has been focussed on expanding harmonic, metric, rhythmic and melodic language and/or providing an enhanced improvisatory context (George Lewis, for example). With this project, my intention is to experimentally create music that sounds instantly familiar.

 

Music Learning and Production with Long Short-Term Memory Networks. Florian Colombo, Alex Seeholzer and Wulfram Gerstner

At the base of creativity lies the experience of the artist, as the repertoire from which he draws inspiration for his creation. The artist’s brain in turn serves as the machinery that both stores prior experience and leads to the creation of novelty. In the context of musical creation, a large part of prior experience can be seen as the musical pieces “known” to the composer – stored in his memory as the connections of the neural networks that are his brain.

Here we investigate how a simplified neural network model for music learning can be coaxed into the creation of musical pieces, drawing on a repertoire of previously learned musical sequences. We employ the long short-term memory (LSTM) network architecture, which has been shown to be particularly efficient in learning complex sequential activations with long time lags, properties inherent to sequences of musical notes. To be able to learn and replay a larger body of musical patterns in LSTM networks, we devised a multi-layer approach, effectively separating the slow time scales of musical production into a second network which “directs” the activity of the faster network. This allows the learning and autonomous reproduction of a large body of musical sequences, which we demonstrate on extracts of the Bach cello suites.
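
As a pointer to the general mechanism (not the authors’ implementation), a next-token LSTM can be trained on sequences of note tokens to predict the following token; the vocabulary size, layer sizes and toy data below are assumptions, and the slow “director” network described above would sit on top of such a fast layer.

    import numpy as np
    from tensorflow import keras

    vocab = 64       # hypothetical number of distinct note/duration tokens
    seq_len = 32

    model = keras.Sequential([
        keras.layers.Embedding(vocab, 32),
        keras.layers.LSTM(128, return_sequences=True),   # fast "note-level" layer
        keras.layers.Dense(vocab, activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

    # toy data standing in for tokenised extracts of the Bach cello suites
    x = np.random.randint(0, vocab, size=(256, seq_len))
    y = np.roll(x, -1, axis=1)                           # predict the next token
    model.fit(x, y, epochs=1, verbose=0)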

Musical creativity is introduced in our network by coaxing the slow “director” network into outputs that are unfamiliar to the fast LSTM network. This leads to explorations of the musical sequence space and the creation of novel musical pieces, which still fall into the space of familiar production rules. The proposed architecture can thus subserve the creative exploration of a learnt body of musical knowledge. We are currently investigating the capacity of such networks for storage, retrieval and creation of larger musical bodies, including across several musical styles.

 

‘Multidimensional Interstice’ as Compositional ‘Form’: Exploring the Interactive Compositional Procedure in Interstice. Alannah Halay

In this presentation, I discuss my current composition Interstice, the ‘form’ of which relies on the participation of an interactive audience via the internet and an iPhone ‘app’. Due to a specific type of interactivity, Interstice’s timeline is complex, and its participatory element results in the roles of ‘performer’ and ‘audience’ being indistinguishable. Interstice treats the audience as additional compositional ‘objects’ amongst which perceived ‘meaning’ and ‘function’ interact within a ‘frame’ that is multidimensional and not definitive. As such, my composition allows the chain of stages in a typical compositional process to be radically rearranged.

Whilst demonstrating how Interstice works, I discuss that, for me, the nature of compositional ‘form’ is multidimensional and interstitial. It comprises a continuously shifting definition where each characterisation appears inherent regardless of this constantly transforming quality. A potential reason for this versatile behaviour is the multidimensional nature of ‘time’, which I explain includes simultaneous linear and nonlinear characteristics.

Another possible reason is the various perspectives applied by society. I elaborate on this theory by questioning the substance of compositional ‘form’ and demonstrating how answers to this query are governed by the perceivers’ viewpoints and the composition’s social context. I explain that compositional ‘form’ is a communal event that most likely relies on the formation of ‘meaning’ within the ‘space’ between a composition’s constituent ‘objects’ and its perceivers. In my opinion, this establishment of ‘meaning’ is governed by the way the compositional ‘form’, and its occupied ‘space’, is ‘framed’ within its social setting; however, in the case of Interstice, defining this ‘space’ and ‘frame’ is not straightforward.

 

Color Tab: An Educational Mobile Music Application. Michelle Kirk

My app, Color Tab, is a new method of tab for the guitar. I chose to develop an educational application because the educational process is undergoing a significant change and mobile devices are becoming more useful. Where the teacher was once the main source of information on the subject being taught, there are now many sources available from which students can learn. One way educators can keep up with students’ demand for knowledge is to bring technology into instruction by incorporating mobile devices, such as cell phones and tablets, and their applications into the educational process.

The categories I selected to focus my application on are music and education, with a specific focus on the guitar. The application focuses on a form of sheet music called tablature, which is essentially a simplified version of sheet music for guitar. As a guitarist, I am familiar with tab and with its inability to indicate the duration of notes. Color Tab, the new method of tab I developed, uses a color-coordinated system that takes tablature writing a step further by allowing musicians to notate rhythm and note duration, which is not possible with the current system. Version 1.0 of the application was formatted for a classical or six-string guitar. Version 2.0 includes an additional compose area for a bass or four-string guitar. Future versions, or additional applications, are planned to include compose areas for other string instruments that can also use Color Tab, such as the banjo, ukulele and sitar, as well as 7-, 8-, 9-, 10-, 11-, 12-, 13- and 18-string guitars. The variation will be in the number of lines in the tablature.
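
A purely hypothetical sketch of the underlying idea: if each color in the tablature encodes a duration, a tab entry can carry rhythm as well as string and fret. The color-to-duration mapping below is invented and is not Color Tab’s actual scheme.

    # invented color-to-duration mapping (fractions of a whole note)
    COLOR_DURATIONS = {
        "red": 1.0,        # whole note
        "orange": 0.5,     # half note
        "yellow": 0.25,    # quarter note
        "green": 0.125,    # eighth note
    }

    # (string, fret, color) entries for a short riff on a six-string guitar
    riff = [(6, 3, "yellow"), (5, 5, "yellow"), (4, 5, "green"), (4, 7, "orange")]

    for string, fret, color in riff:
        beats = COLOR_DURATIONS[color] * 4       # express the duration in beats
        print(f"string {string}, fret {fret}: {color} -> {beats} beat(s)")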

As an application, Color Tab can be a useful tool in the educational process both inside and outside the classroom. As a mobile application it is not limited to traditional educational settings. Instructors may be able to implement it in situations where there is limited time to teach students how to play the guitar. Online students may be able not only to learn how to play with this new method, but also to write their own compositions. It may even become a mainstream method, which would be the ultimate outcome for the Color Tab educational mobile music application.

 

Batera: Drummer Agent with Style Learning and Interpolation. Rafael Valle and Adrian Freed

Ubiquitous in most DAWs and available in online versions, programmable drum step sequencers have been around since 1972, when Eko released the ComputeRhythm. In our previous research at CNMAT, these drum step sequencers have been expanded to include onset probabilities [1] and rhythmic expressivity [2].

In this work, we address the challenge of building a drum agent capable of learning styles and interpolating between them during performance, taking into account rhythmic expressivity, musical sections and phrases, and drum patterns.

We describe a hierarchical solution that learns, from MIDI drum tracks and annotated data, a drum agent in the form of three probabilistic finite state automata (PFSA). In decreasing hierarchical order (a minimal sketch of this hierarchy follows the list):

  • The first automaton operates on the sections of the piece;
  • The second automaton operates on the phrases and licks;
  • The third automaton learns the drum patterns, including the drum pieces used and their respective onset probabilities given time-step and actuator. Each newly learned drum pattern is used to update existing patterns or create new ones, depending on the angular similarity between new and existing patterns.
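
An invented, illustrative sketch of such a three-level hierarchy (not the Batera implementation): a section automaton chooses the next section, a phrase automaton chooses a phrase within it, and a pattern automaton emits onsets from per-step probabilities. All tables below are made up.

    import random

    section_fsa = {"verse":  {"verse": 0.5, "chorus": 0.5},
                   "chorus": {"verse": 0.7, "chorus": 0.3}}
    phrase_fsa = {"verse":  ["groove_a", "groove_a", "fill_1"],
                  "chorus": ["groove_b", "fill_2"]}
    # onset probability per 16th-note step, per drum piece
    patterns = {"groove_a": {"kick":  [0.9, 0.1, 0.2, 0.1] * 4,
                             "snare": [0.0, 0.0, 0.9, 0.1] * 4},
                "groove_b": {"kick":  [0.9, 0.3, 0.6, 0.3] * 4,
                             "snare": [0.1, 0.1, 0.9, 0.2] * 4},
                "fill_1":   {"snare": [0.5] * 16},
                "fill_2":   {"snare": [0.2, 0.4, 0.6, 0.8] * 4}}

    def weighted_choice(dist):
        states, probs = zip(*dist.items())
        return random.choices(states, weights=probs, k=1)[0]

    section = "verse"
    for _ in range(4):                                   # generate four phrases
        phrase = random.choice(phrase_fsa[section])
        hits = {piece: [step for step, p in enumerate(probs) if random.random() < p]
                for piece, probs in patterns[phrase].items()}
        print(section, phrase, hits)
        section = weighted_choice(section_fsa[section])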

Based on our Control Improvisation approach [3], we design specifications to guarantee that the improvisation and learning process satisfy some desirable properties. For example, from a performer perspective, it is extremely unlikely that a kick drum will be played with the hands or that multiple sequential hits on the same piece will be done with one hand only.

During playback, we interpolate between drum agents by projecting them onto a 2D space using CNMAT’s Radial Basis Function Interpolator, which outputs weights for each drum agent’s PFSA. Sound is generated by reading drum samples, and cross-synthesis is used to provide an interpolation between the instruments of each style. We provide examples in which we learn and interpolate between drum kit and tabla examples from pieces commonly associated with rock and Hindustani music.

[1] Vijay Iyer, Jeff Bilmes, Matt Wright, and David Wessel. A novel representation for rhythmic structure. In Proceedings of the 23rd International Computer Music Conference, pages 97–100. International Computer Music Association, 1997.

[2] Yotam Mann, Jeff Lubow, and Adrian Freed. The tactus: a tangible, rhythmic grid interface using found-objects. NIME 2009, 2009.

[3] Alexandre Donzé, Rafael Valle, Ilge Akkaya, Sophie Libkind, Sanjit A. Seshia, and David Wessel. Machine Improvisation with Formal Specifications. In Proceedings of the 40th International Computer Music Conference (ICMC), 2014.

 

PopSketcher: a Framework for Generating Sketches of Pop Songs. Valerio Velardo and Mauro Vallati

Automatic music composition is a fascinating subfield of computational creativity which deals with the design and development of creative systems able to generate music. The vast majority of systems developed so far focus on small subtasks of music composition, such as monophonic melody generation, harmonisation of melodic lines, accompaniment and orchestration. On the other hand, systems capable of creating complete polyphonic compositions are rare. Two exceptions are IAMUS — developed at the University of Malaga — and EMI, designed by David Cope. The former composes contemporary music for ensemble and orchestra, while the latter generates classical pieces in the style of a given composer. Unfortunately, details of the approaches exploited by these two systems are unknown. Moreover, EMI and IAMUS are both focused on art music. Indeed, the generation of fully-shaped pop songs is underrepresented in musical metacreation.

In this work, we fill this gap by proposing PopSketcher, a framework designed for simulating a human approach to sketching pop songs. PopSketcher generates the basic constituents of pop songs: form, harmony and leading melody. Fundamental song sections (e.g., chorus, verse, bridge) are organised according to a 3-level hierarchical structure and are used to generate harmonic and melodic structures. The use of mixed top-down (form) and bottom-up (harmony and melody) strategies guarantees the overall musical coherence of the songs produced, which is usually missing in automatically generated music.
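
A speculative sketch of how such a mixed strategy might look in code: a top-down pass fixes the sequence of sections, and a bottom-up pass fills each section with a chord progression. The section grammar and progressions are invented for illustration and are not PopSketcher’s actual rules.

    import random

    FORM = ["intro", "verse", "chorus", "verse", "chorus", "bridge", "chorus"]
    SECTION_BARS = {"intro": 4, "verse": 8, "chorus": 8, "bridge": 4}
    PROGRESSIONS = {
        "intro":  [["C", "C", "F", "G"]],
        "verse":  [["C", "G", "Am", "F"], ["Am", "F", "C", "G"]],
        "chorus": [["F", "C", "G", "Am"], ["C", "F", "G", "C"]],
        "bridge": [["Dm", "G", "Em", "Am"]],
    }

    song = []
    for section in FORM:                                  # top-down: fix the form
        bars = SECTION_BARS[section]
        chords = random.choice(PROGRESSIONS[section])     # bottom-up: fill harmony
        song.append((section, (chords * (bars // len(chords) + 1))[:bars]))

    for section, progression in song:
        print(f"{section:7s} {' '.join(progression)}")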

PopSketcher represents one of the first steps towards the emergent notion of creativity as a service. The proposed system can help songwriters in the compositional process by suggesting novel musical ideas. At the same time, PopSketcher can serve as the basis for a complete pop song generator, which could provide people with a continuous flow of original music, overcoming the issue of resorting to traditional distribution channels like record labels.

 

A Novel Music Constraint Programming System: the PWGL Libraries Cluster Engine and Cluster Rules. Torsten Anders and Örjan Sandred

Background

In computer-aided composition, composers delegate subtasks of their composition process to software that creates music. Typically, composers develop their own composition software, so that they can closely control the resulting music.

Some composers are experienced software developers used to standard tools of the trade, but many are not. Visual languages are therefore widely used in computer music: they offer a low floor (users easily get started), but also a reasonably high ceiling (highly complex programs are possible).

The programming paradigm of constraint programming has been attractive for decades for modelling music composition, because formally it is similar to traditional music theory. Users of constraint-based composition systems simply describe conditions for the rhythm, melody, harmony and so forth of musical results by modular and declarative constraints, and a solver automatically finds music that meets all constraints. Many music constraint systems have been developed: [1] provides a comprehensive overview in the context of other artificial intelligence methods, and [2] surveys the field in detail.
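
As a toy illustration of this declarative workflow (not Cluster Engine itself), the sketch below states a few melodic constraints and lets a brute-force solver find every five-note melody over a C major scale that satisfies all of them.

    from itertools import product

    C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]          # MIDI pitches

    def stepwise(melody):                 # no leap larger than a major third
        return all(abs(a - b) <= 4 for a, b in zip(melody, melody[1:]))

    def no_repeats(melody):               # no immediate repetition
        return all(a != b for a, b in zip(melody, melody[1:]))

    def ends_on_tonic(melody):
        return melody[-1] in (60, 72)

    constraints = [stepwise, no_repeats, ends_on_tonic]

    solutions = [m for m in product(C_MAJOR, repeat=5)
                 if all(rule(m) for rule in constraints)]
    print(len(solutions), "melodies satisfy all constraints; first:", solutions[0])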

Different systems have been designed for different purposes. Some systems are easy to learn, but support only a highly limited range of music constraint problems. Such systems either generate only sequences of events, for example plain sequences of pitches, chords or note values (e.g., the systems Situation and OMClouds), or can only search for pitches of a pre-composed rhythmic structure (e.g., Score-PMC). Other systems are very flexible in the range of music theories they support, but are designed for experienced computer programmers, which excludes many composers (e.g., Strasheela).

Contribution

This workshop demonstrates a music constraint system that offers a user-friendly visual programming interface suitable for rapid development, and at the same time allows for a large range of constraint problems, including complex polyphonic problems. This system consists of the two PWGL libraries Cluster Engine and Cluster Rules.

The system offers a high ceiling. The solver Cluster Engine is similar to PWMC [3], but Cluster Engine solves complex polyphonic problems far more efficiently than PWMC, in particular problems where both the temporal and pitch structure are constrained. The improved efficiency makes many more advanced problems solvable in practice.

Integrated in a visual programming environment, the system offers a low floor for the novice. Cluster Rules further lowers the floor: it provides a library of ready-made compositional constraints for Cluster Engine for controlling rhythm, melody, harmony and counterpoint.

The system is stylistically neutral. Beginners can customise the constraints of Cluster Rules by arguments, and freely combine them in their own music theories. More experienced users can define their own constraints by visual programming with Cluster Engine. Power-users can also use textual programming for more concise definitions.

This workshop introduces music constraint programming in general with hands-on examples, and highlights what Cluster Engine and Cluster Rules bring to the field.

References

[1] J. D. Fernández and F. Vico, “AI Methods in Algorithmic Composition: A Comprehensive Survey,” Journal of Artificial Intelligence Research, vol. 48, pp. 513-582, 2013.
[2] T. Anders and E. R. Miranda, “Constraint Programming Systems for Modeling Music Theories and Composition,” ACM Computing Surveys, vol. 43, no. 4, pp. 30:1-30:38, 2011, article 30.
[3] Ö. Sandred, “PWMC, a Constraint-Solving System for Generating Music Scores,” Computer Music Journal, vol. 34, no. 2, pp. 8-24, Jun. 2010.

 

Virtual Score Construction with the Abjad API for Formalized Score Control. Trevor Bača, Josiah Oberholtzer and Jeffrey Treviño

Abjad (www.projectabjad.org) is a Python API designed to let users model, visualize and interrogate the elements of music notation as classes, factories and other object-oriented constructs. Abjad is neither a stand-alone application nor a programming language but instead an object-oriented extension to the widely-used Python programming language. Abjad equips Python with a computational model of music notation that makes it possible for users to virtualize the work of building sketches, compositional materials and complete scores; the goal of the system is a heuristic interactivity between musical thought, formal model and publication-quality music notation.
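
A brief sketch of this object-oriented workflow, assuming a recent Abjad release and an installed LilyPond binary for rendering; exact calls may differ between Abjad versions.

    import abjad

    # build a container of notes from LilyPond-style note names
    staff = abjad.Staff("c'8 d'8 e'8 f'8 g'4 f'4")

    # interrogate the score objects programmatically
    for note in staff:
        print(note.written_pitch, note.written_duration)

    # render the virtual score as engraved notation (requires LilyPond)
    abjad.show(staff)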

The 45-minute workshop proposed here will take participants through three different real-world uses of Abjad. The workshop will first explain what Abjad is, how Abjad is installed and how participants simulate notes, rests, chords, tuplets and the other elements of music notation in an object-oriented way. The workshop will then demonstrate a type of bottom-up construction possible with Abjad that allows for programming basics like iteration, encapsulation and interrogation to be used in a musically meaningful way. The workshop will culminate in an example of rhythmic construction in Abjad, designed to introduce users to a powerful way of making a virtual model of the timing of complex musical events. The workshop will end with a question-and-answer period designed specifically for participants to ask questions about the ways that elements of their own research might benefit from modeling and exploration in Abjad. At the conclusion of the workshop, participants will have a solid understanding of what Abjad is, where to get it, how to install it, and how to create simple and aggregate objects, all visualized as music notation.

A draft of a forthcoming article on Abjad accepted for presentation at TENOR2015 (at IRCAM) is available here:
https://github.com/Abjad/tenor2015/blob/master/abjad.pdf

 

An interactive simulation of John Chowning’s creative environment for the composition of Stria (1977). Michael Clarke, Frédéric Dufeu and Peter Manning

The TaCEM project (Technology and Creativity in Electroacoustic Music), funded by the AHRC for a duration of 30 months (2012-2015) and based at the University of Huddersfield and Durham University, investigates the relationship between technological innovation and new creative potential for composers. Alongside contextual and historical research and musicological analysis, a significant part of the project seeks to recreate in software the digital environments with which pioneering composers of the field created their works. This software, which will be freely available, enables its users to engage aurally with the techniques originally employed by the composers, and enhances the understanding of their creative potential and of their relationship with the aesthetic orientations that eventually led to the actual musical works.

In this workshop, we present our approach in the context of one of our eight case studies: John Chowning’s Stria (1977). This work for quadraphonic tape was created with frequency modulation digital synthesis, a technique invented by Chowning himself, and reverberation in Max Mathews’ Music 10 language, driven by parameters generated with an algorithm written by Chowning in SAIL (Stanford Artificial Intelligence Language). On the basis of several sources, including some provided by the composer, we have been able to recreate the synthesis engine and the algorithm used to generate the whole set of compositional data and sound materials for the 33 events building up the piece. By simulating Stria’s entire composing environment, our interactive software enables its users to experiment with the time, pitch and spatialisation parameters of the algorithm in order to build their own musical events and to evaluate both the possible creative outcomes of such a compositional system and the relevance of Chowning’s decisions in creating a work that appears today as emblematic of the electroacoustic repertoire.
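
As a pointer to the underlying technique, the sketch below computes a basic two-operator frequency modulation signal with a time-varying modulation index; Stria’s actual synthesis engine and its golden-ratio-based parameter structures are considerably more elaborate.

    import numpy as np

    sr = 44100                                    # sample rate
    dur = 2.0                                     # duration in seconds
    t = np.arange(int(sr * dur)) / sr

    fc, fm = 440.0, 271.8       # carrier and modulator frequencies (illustrative)
    index = np.linspace(4.0, 0.0, t.size)         # decaying modulation index
    amp = np.exp(-2.0 * t)                        # amplitude envelope

    y = amp * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
    # 'y' can then be written to a sound file, e.g. with scipy.io.wavfile.write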