Blog

More work in progress on RITE – pics

February 21, 2013
News
Rite

[Images: rite-reh02, script]

From the second rehearsal for RITE: pianists Sinae Lee, Fionnuala Ward, Beth Jerem and Marlon Bordas Gonzalez trying out the fx units, and my script for the piece so far.

RITE, for four performers, four pianos, and four effects units: Friday 8 March, 7.30pm, Royal Conservatoire of Scotland. Also featuring works by Steve Reich and Vera Stanojevic, and improvisations by Alistair MacDonald, Anto Pett, and Anne-Liis Poll.

Quartz Composer video sampler for WSWEK

January 27, 2013
News, Tech
Wswek

Here’s a screenshot of something I’ve been working on today for ‘Why Scotland, Why East Kilbride’:

[Image: wswek-samp01]

Regular readers of this blog (ahem) will recall that this all started with a piece of music I heard in a dream, for a double rock band plus a big squad of French horns. Well, the double rock band is doable, but the French horns were going to be impractical for the gig.

So, I’m building a sort of video sampler. There will be video clips of each of the four chords the horns play, with four different variations of each chord. MIDI notes will then be used to trigger a clip of the correct chord. This would have been easy to do using Jitter in Max… so I had to do it the hard way and try to build it in Quartz Composer!

I’m getting there, ish. The visual tools in QC are amazing, but the program structuring and logic are kind of Turing-machine basic. Like, to combine four numbers I had to use a combination of three OR gates, and to build a toggle switch it’s a counter followed by the count taken modulo 2. And it’s crashy and buggy, and the documentation is rudimentary. And I’m going to have to edit the video super carefully: if the clips aren’t exactly 2400 ms long, my programming will break.

But… it might just work.
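In case it’s useful to anyone (or to future me), here’s the triggering logic I’m aiming for, sketched in SuperCollider rather than QC – the note-to-chord mapping is a placeholder, and cycling round-robin through the four variations is just one possibility:

[sourcecode]
// hypothetical sketch of the clip-triggering logic only: MIDI note in,
// one of 16 clips (4 chords x 4 variations) out – the real thing is a QC patch
(
MIDIIn.connectAll;

~chordForNote = { |note| note % 4 }; // placeholder: which of the 4 chords this note triggers
~variation = 0;                      // steps through the 4 variations of each chord

MIDIdef.noteOn(\clipTrigger, { |vel, note|
	var chord = ~chordForNote.(note);
	var clip = (chord * 4) + ~variation; // index 0..15 into the bank of clips
	~variation = (~variation + 1) % 4;   // counter plus modulo, same trick as the QC toggle
	("play clip %".format(clip)).postln; // stand-in for actually starting a video clip
});
)
[/sourcecode]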

...

First rehearsal for 'Why Scotland, Why East Kilbride'

January 19, 2013
News
Wswek

We’ve just had the first rehearsal for ‘Why Scotland, Why East Kilbride’, which is a new piece I’m putting together for Cryptic Nights, 4 & 5 July of this year. This piece is a fantasy: a reimagining of my own East Kilbride childhood, expressed through the medium of my alter ego Edward ‘Teddy’ Edwards, unsung hero of early British electronica.

The musical starting point for this piece was a dream I had, where I heard a fragment of music for what seemed to be a double rock band plus an orchestral horn section. Tonight was the first jam session with the band I’ve put together. This is made up of people who don’t really do this sort of thing! I’ve got two ex-students of mine, Aimée Laws and Nikki Donaldson (from Edinburgh College and the Conservatoire respectively) on drums and keyboards. My second drummer is legendary percussionist/composer Steve Forman, a colleague of mine from the Conservatoire, while on bass I’ve got Steve Ford, who I’m also collaborating with on a sci-art sonification project. Rounding out tonight’s lineup was Bill Whitmer, aka williwaw, a great ukulele player, whose solo improvisations to film loops are part of my inspiration for this project.

I played guitar tonight, although in the final thing I think I’ll probably be swapping between electronic noise-makers and a video sampler, which I’m going to use to represent the horn players. So, we had four chords and… we jammed.

Interesting. When I was a teenager, the idea that one day I would play rhythm guitar in a space rock band seemed like some sort of unattainable dream. Now it seems all I need to do is book a rehearsal room and go for it. Musically, it’s a ridiculously easy thing to do. However, I do now have some work to do to structure the whole show: in the end, 25 minutes of semi-inept rock jamming just sounds like, er, 25 minutes of semi-inept rock jamming!

I have some ideas. I’ve tried putting the recording of our best jam together with the video, and it’s got something going for it. Not going to make that public for now, though – it would spoil the surprise…

...

Work in progress - RITE

November 27, 2012
News
Rite

I’m working on three pieces of music at the moment. Well, that is, I’m supposed to be working on three pieces of music at the moment – today I actually managed to get some work done on two of them, at any rate.

The big project I have on the go is Why Scotland, Why East Kilbride, a music theatre piece for Cryptic Nights, CCA Glasgow 4 & 5 July next year. It’s for double rock band with video projection, including, I’ve just decided today, a ‘digital video mellotron’, which I’m going to use to deliver the 16 French horn parts needed for the piece. However, the work I’ve done on that piece today is boring and administrative…

More interestingly, I’ve turned this evening to a new work in progress called RITE, for four performers, four pianos, and four effects units. The centenary of the Rite of Spring is coming up next year, and I’ve been invited to write a piece, hence the deliberately-dodgy all caps title. To quote my current draft of the programme note:

The title of the piece is an acronym: it might also be spelled R.I.T.E. Perhaps it stands for this:

Recursive Invariant Transform (Electronic) = RITE

Or possibly:

RITE is terrible ecronym.

The four effects units are going to be used as feeding-back no-input noise-makers, operated by the pianists themselves. I’ve spent some happy hours surfing eBay for obsolete fx units and mixers: my latest purchase is a lovely old ART FXR Elite II which you can hear on the clip below. For the pianistic material, I’m mulling over those little bits of the piece which have always stuck in my ear, and kind of rewriting them as if from memory. Today I’ve been reworking the little scale for two flutes at the very end of the piece, just eighteen notes, which have already generated several minutes of material.

This clip is a kind of meditative recomposition of the initial two chords from the ‘Mystic Circles of the Young Girls’, combined with recursive fx stuff:

[audio src="https://tedthetrumpet.files.wordpress.com/2008/09/rite_sketch_01.mp3"][/audio]


...

Sonifying IR spectroscopy data – rhythm

July 20, 2012
News

For today’s exciting episode, I’m using the simplified spectrum data from glycine to generate a sound, and then looping round the same data to play the sound in a kind of ‘rhythm’:

[sourcecode]
(
~name = "glycine";
~path = Document.current.dir.asString++"/"++ ~name ++".csv";
f = CSVFileReader.readInterpret(~path);

f = ((f.flop[1] * -1) + 1).normalize; // invert the transmittance data (we want the troughs as peaks) and scale to 0..1

// these three lines remove plateaus in the data,
// otherwise we get missing 'peaks' in the analysis
f = (f*100).asInteger;
f = f.differentiate.removeEvery([0]).integrate;
f = f/100;

~peaksIndices = f.differentiate.sign.findAll([1,-1]); // find indices where a rise is followed by a fall

g = Array.fill(f.size, 0);

~peaksIndices.do { |i| g[i] = f[i] }; // Daniel's line

~amps = g;

// [f,~amps].plot(~name, Rect(840,0,600,450));

~freqs = (36..128).resamp1(f.size).midicps; // one pitch per data point, spread evenly between MIDI notes 36 and 128, converted to Hz

SynthDef(\glycine, { | gate=1, amp |
	var env, sig;
	sig = Klank.ar(`[~freqs, ~amps, nil], PinkNoise.ar(amp/100));
	env = EnvGen.kr(Env.perc, gate, doneAction: 2);
	Out.ar(0, Pan2.ar(sig, 0, env))
}).add;

Pbind(	\instrument, \glycine,
		\amp, Pseq(~amps, 4).collect { |amp| if(amp > 0) {amp} {Rest}}, // zero-amplitude points become rests, so only the peaks sound
		\dur, 0.02,
).play;
)

[/sourcecode]

[audio src="https://tedthetrumpet.files.wordpress.com/2008/09/glycinerhythm.mp3"][/audio]


In other news, my collaborator Steve has been having Ideas. Watch this space.

...

Sonifying IR spectroscopy data - finding peaks

July 19, 2012
News

Emperor Joseph II: Well, I mean occasionally it seems to have, how shall one say? [he stops in difficulty; turning to Orsini-Rosenberg] How shall one say, Director?
Orsini-Rosenberg: Too many notes, Your Majesty?
Emperor Joseph II: [to Mozart] Exactly. Very well put. Too many notes.
– From Amadeus (1984)

It occurred to me after a while that my previous attempts at dealing with these sets of data were running into a ‘too many notes’ problem: 784 resonators at once is always likely to sound like noise! What one would like to be able to do is focus in on the visible ‘peaks’:

This is something a human can do quite intuitively: in fact, I seem to dimly remember that, many years ago, when I worked with HPLC (High Performance Liquid Chromatography) data at Schweppes, there was a pencil and paper method we used to estimate the height and width of a peak, and thus determine the concentration of a compound by calculating the area under the graph.

A little research into the problem of doing this algorithmically rapidly took me far out of my mathematical depth:

Chao Yang, Zengyou He and Weichuan Yu, ‘Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis’, BMC Bioinformatics 2009; 10:4. Published online 6 January 2009. doi: 10.1186/1471-2105-10-4. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631518/

Instead, I got some hints on a rather simpler approach from Daniel Mayer by asking a question on the SuperCollider mailing list.

Here’s the code I eventually came up with:

[sourcecode]
(
~name = "glycine";
~path = Document.current.dir.asString++"/"++ ~name ++".csv";
f = CSVFileReader.readInterpret(~path);

f = ((f.flop[1] * -1) + 1).normalize;

// these three lines remove plateaus in the data,
// otherwise we get missing 'peaks' in the analysis
// some int/float voodoo needed here!
f = (f*100).asInteger;
f = f.differentiate.removeEvery([0]).integrate;
f = f/100;

// the magic line which finds the peaks
~peaksIndices = f.differentiate.sign.findAll([1,-1]);

// putting the data back together again
g = Array.fill(f.size, 0);
~peaksIndices.do { |i| g[i] = f[i] };
~amps = g;

[f,~amps].plot(~name, Rect(840,0,600,450));

~freqs = (36..128).resamp1(f.size).midicps;

{ Splay.ar(Klank.ar(`[~freqs, ~amps, nil], PinkNoise.ar(0.01))) }.play;
)
[/sourcecode]
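The key step is that differentiate.sign.findAll([1,-1]) line. Here’s what it does on a toy array (made-up numbers, not the spectroscopy data):

[sourcecode]
// a six-point series with local peaks at indices 2 and 4
a = [0, 1, 3, 2, 4, 1];
a.differentiate;                       // -> [0, 1, 2, -1, 2, -3] : successive differences
a.differentiate.sign;                  // -> [0, 1, 1, -1, 1, -1] : 1 = rising, -1 = falling
a.differentiate.sign.findAll([1, -1]); // -> [2, 4] : indices where a rise is followed by a fall
[/sourcecode]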

At the very end of the plot you can see one of the problems: this method finds any and all local peaks, including ones which to the eye look unimportant:

I think what’s needed here is some low-pass filtering to get rid of small glitches. However, the musical results so far are quite good: once again, here’s a short gesture made by crossfading from one compound to another:

[soundcloud url="http://api.soundcloud.com/tracks/52949967" iframe="true" /]
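Coming back to those spurious little peaks: a moving average is about the simplest low-pass filter there is, so one crude fix might be to smooth the data before the peak search. A minimal sketch, with the window size plucked out of the air:

[sourcecode]
// hypothetical smoothing pass: average each point with its neighbours
// so that tiny wiggles don't register as peaks
(
~smooth = { |data, win = 5|
	data.slide(win, 1).clump(win).collect(_.mean) // note: result is win-1 points shorter
};
f = ~smooth.(f);
~peaksIndices = f.differentiate.sign.findAll([1,-1]); // then carry on as before
)
[/sourcecode]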

...

Sonifying IR spectroscopy data - automating pitch

July 18, 2012
News

Going off in a bit of a different direction here, using the data as automation to drive the pitch of a synth:

[sourcecode]
(
f = CSVFileReader.readInterpret(Document.current.dir.asString++"/water.csv");

f = ((f.flop[1] * -1) + 1).normalize(48, 84); // midinotes

Pmono(\default, \midinote, Pseq(f, inf), \dur, 0.005).play
)
[/sourcecode]
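(For reference: at a \dur of 0.005 that’s 200 readings a second, so one pass through the 784-point water file lasts just under four seconds.)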

In this recording, I loop through the three compounds one by one. Kind of cute – early days for this approach.

[audio src="https://tedthetrumpet.files.wordpress.com/2008/09/pmono01.mp3"][/audio]


Sonifying IR spectroscopy data - 'chords'

July 17, 2012
News

Finding the .resamp1 method in SuperCollider gave me an idea for reducing this rather large set of data into something perhaps more musically useful. Could I make something more like a tonal chord, with pitches repeated in every octave?

I first drastically resampled my data into just twelve points:

f = ((f.flop[1] * -1) + 1).resamp1(12);

These twelve values would then become the relative weights of those twelve pitch classes, repeated across a range of eight and a half octaves:

f = (f++f++f++f++f++f++f++f++f[..7]); // 104 notes
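(Eight full copies of the twelve values plus the first eight again: 96 + 8 = 104.)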

Then what I did was to multiply this chordal structure by the original data, so that my final sound is the ‘glycine chord’ amplitude modulated (sort of) by the absorption data.

Here’s the final code:

[sourcecode]
(
~name = "glycine";
~path = Document.current.dir.asString++"/"++ ~name ++".csv";
f = CSVFileReader.readInterpret(~path);
g = f;

f = ((f.flop[1] * -1) + 1).resamp1(12);
f = (f++f++f++f++f++f++f++f++f[..7]); // 104 notes
g = ((g.flop[1] * -1) + 1).resamp1(104); // 104 samples of orig graph

~amps = f.cubed * g; // combining two approaches
~amps = ~amps.normalize;
~amps.plot(~name, Rect(840,0,600,450));

~freqs = (25..128).midicps;

{ Splay.ar(Klank.ar(`[~freqs, ~amps, nil], PinkNoise.ar(0.01))) }.play;
)
[/sourcecode]

I used this approach to make the sound below, which crossfades from glycine to tyrosine to water, then back to glycine again.

[soundcloud url="http://api.soundcloud.com/tracks/52370172" iframe="true" /]
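Incidentally, the crossfading itself is simple enough to sketch, assuming two sets of frequencies and amplitudes (~freqsA/~ampsA and ~freqsB/~ampsB) have been built as above for two different compounds – I’m not claiming this is exactly how the clip was put together:

[sourcecode]
// hypothetical sketch: crossfade between two resonator banks over 20 seconds
(
{
	var a = Splay.ar(Klank.ar(`[~freqsA, ~ampsA, nil], PinkNoise.ar(0.01)));
	var b = Splay.ar(Klank.ar(`[~freqsB, ~ampsB, nil], PinkNoise.ar(0.01)));
	XFade2.ar(a, b, Line.kr(-1, 1, 20)); // -1 = all a, +1 = all b
}.play;
)
[/sourcecode]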

...

Sonifying IR spectroscopy data

July 16, 2012
News

I’m in the very early stages of a collaborative project with Dr Steven Ford, Senior Research Fellow and QC Manager at the Cancer Research UK Formulation Unit of Strathclyde University. Steve came to me with an idea about sonifying IR spectroscopy data, with a view to perhaps drawing some creative parallels between vibrations at the atomic scale and musical sound.

Steve sent me some IR data relating to three compounds, water, glycine and tyrosine, and I’ve been trying some things out in SuperCollider. Here’s a plot of the data which Steve sent me:

Thinking in terms of sound, my immediate thought was to try to scale those resonances into the audio region. Here’s one of my first attempts:

[sourcecode]
(
~name = "water";
~path = Document.current.dir.asString++"/"++ ~name ++".csv";
f = CSVFileReader.readInterpret(~path);

~amps = f.flop[1]; // array of amplitudes
~amps.plot(~name, Rect(840,0,600,450));

~freqs = Array.series(f.size, 40, 100); // size, start, step

{ Klank.ar(`[~freqs, ~amps, nil], PinkNoise.ar(0.01)) }.play;
)
[/sourcecode]

[audio src="https://tedthetrumpet.files.wordpress.com/2008/09/water01.mp3"][/audio]


There are 784 points of data here, and I’ve just mapped those arbitrarily to a bank of 784 resonators, spaced 100 Hz apart, starting at 40 Hz. It sounds pretty nasty. Then it occurred to me that Steve’s data is for transmittance, not absorbance: the points of interest are the troughs, not the peaks; the graph is upside down for what I wanted to do. So:

[sourcecode]
(
~name = "tyrosine";
~path = Document.current.dir.asString++"/"++ ~name ++".csv";
f = CSVFileReader.readInterpret(~path);

~amps = ((f.flop[1] * -1) + 1).cubed; // invert, massage
~amps.plot(~name, Rect(840,0,600,450));

~freqs = (64..128).resamp1(f.size).midicps;

{ Klank.ar(`[~freqs, ~amps, nil], PinkNoise.ar(0.01)) }.play;
)
[/sourcecode]

Here I was also starting to think about how to bring out the peaks in the data, hence the .cubed. This does make the data ‘pointier’, but at the expense of the smaller peaks. A slightly different strategy for the frequencies here too: 784 microtonal pitches between MIDI notes 64 and 128. It still sounds really pretty nasty:

[audio src="https://tedthetrumpet.files.wordpress.com/2008/09/tyrosine01.mp3"][/audio]

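As a quick illustration of what .cubed is doing to those inverted values (made-up numbers):

[sourcecode]
[0.2, 0.5, 1.0].cubed; // -> [0.008, 0.125, 1.0] : small values shrink much faster than big ones
[/sourcecode]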

...

Gathering of the Gamelans - Day 2, afternoon

May 1, 2012
News

After lunch today we had a joint session, with three people presenting their experience of teaching gamelan in schools and universities. I am presently engaged on a project to attempt to establish a gamelan at Stevenson College Edinburgh (soon to be Edinburgh College), so this area is of particular interest to me.

Ruth Andrews runs a gamelan programme for 12-17 year olds at the International School of Amsterdam, which sounds like it would be a model to aim for. The gamelan is permanently installed in a large room, and all the students have gamelan activity in every year of their studies. Links are made between the cultural and historical perspectives offered by the gamelan and other areas of the curriculum. The gamelan seems to have become a central, shared experience for everyone at the school: a musical ensemble where everyone takes part on an equal level, with no soloists or stars.

Andy Gleadhill is a music officer in Bristol, where gamelan is firmly established as a whole-class music activity within the primary curriculum. There are 66 (!) specially designed sets of gamelan instruments in their schools. Andy presented some material from a particular project which combined Digital Audio Workstation technology – loops and recording in GarageBand – with gamelan, the students working to put together a hybrid version of a Black Eyed Peas song.

Maria Mendonça outlined a degung programme which she runs at Kenyon College in the States. I don’t fully understand the American system, but this seems to be a one-semester class which can be taken either by music students or by students on other degrees. She works largely by ear rather than using notation, and spoke very positively about the level playing field which can be established between trained and untrained musicians. She described a teaching model where the students have a two-hour class, followed by an hour to work as a group on their own. She seems to use peer teaching extensively, on the very simple model of having students swap around and teach each other the parts: a practice I have also employed.

This was an informative and inspirational session. Particularly impressive was the roster of visiting artists which Maria had managed to get, which included, wait for it, Euis Komariah, Nano S and Balawan! (OK, perhaps these names may not mean much to non-gamelan specialists… I was about as impressed as I would have been if she had said she’d got Ella Fitzgerald, Duke Ellington, and Charlie Christian :)

Later in the afternoon, Charlotte Pugh and John Jacobs gathered a large group of us together for a session billed as a gamelan ‘improvisation’ workshop, although it seems to me it was more like a session on group devising. Charlotte played us the outline of a medium-length phrase in slendro, which we kind of copied back as a group, or in small teams, in an approximate fashion. John then gave us a similar phrase on the pelog instruments, which we also worked on in small teams, and then both pieces of material were combined. A very chaotic sound resulted, but with some definite sense of shape. A short discussion ensued as to what we might then do if this was the first week of an eight-week project. Would the work crystallise into a composition, or would one seek to open up the process to a much more free and unpredictable form of improvisation?

An interesting point of practice which I will file away for future use: they had a couple of mics up, and a dedicated engineer who was able to play back to us instantly the short chunk we had just put together. This instant feedback was a great way for the group to be aware of the whole picture, not just what each subgroup was doing.

...