Blog

Recent livecoding in SuperCollider

January 3, 2017
All, Tech

Over the winter break I’ve been spending some time working on my livecoding/algorave setup in SuperCollider. Here’s a quick practice run showing how things are going at the moment.

https://youtu.be/nk58NBtMFvE

The most recent idea here is the \warp synth, a granulator slowly reading through a choice of soundfiles. In this particular run, I think the .choose threw up a fragment of a Stokowski Bach transcription https://archive.org/details/J.S.BACH-OrchestralTranscriptions-NEWTRANSFER and perhaps a bit of the theme tune from The IT Crowd as well: a nice background wash of sound behind the rhythmic stuff. For the latter, the samples in the first half of the video come from here http://machines.hyperreal.org/manufacturers/ and, in the second half of the run after I execute ~changesamples, from here http://theremin.music.uiowa.edu/MIS.html and here http://www.philharmonia.co.uk/explore/sound_samples.
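For the curious, the core of a synth like this is compact in SuperCollider. The sketch below is not the actual \warp synth from my setup (that’s in the repo linked below), just a minimal illustration of the idea using the built-in Warp1 granulator, with guessed-at parameter values and a mono source file assumed:

(
// minimal sketch of a slow-scanning granulator, not the real \warp synth
SynthDef(\warpsketch, { |out=0, buf, amp=0.2, dur=120|
	// crawl the grain read-pointer from the start to the end of the buffer over dur seconds
	var pointer = Line.kr(0, 1, dur, doneAction: 2);
	// Warp1 args: numChannels, buffer, pointer, freqScale, windowSize, envbufnum, overlaps, windowRandRatio
	var sig = Warp1.ar(1, buf, pointer, 1, 0.2, -1, 8, 0.1);
	Out.ar(out, Pan2.ar(sig) * amp)
}).add;
)
~buf = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav"); // any mono soundfile
Synth(\warpsketch, [\buf, ~buf]);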

The synths I’m using and my initialisation file are up on GitHub at https://github.com/tedthetrumpet/supercollider.

...

Rave the Space

October 6, 2016
News, Tech

Last night I gave a performance called ‘Rave the Space’ at Stereo in Glasgow, part of a series of events called INTER run by Iain Findlay-Walsh ‘creating a focused, public listening context for deep experiments in / with sound’.

My proposal was to ‘perform the soundscape of the venue through the medium of livecoding’. What I did was to visit the venue the day before, at a quiet time, and make some recordings. It’s a fairly typical basement club/rock venue, so I was able to wander onstage, through the dressing rooms, behind the bar, into the toilets and so on, all the while recording both the ambience and, in some cases, tapping or hitting objects of particular interest – there was a group of CO2 cylinders that were particularly nice.

On the morning of the event, I roughly levelled these recordings, discarded uninteresting ones, cut out handling noise, mobile phone interference, and initial and terminal clicks from the recorder. This left me with nine recordings, each about a minute long.

I decided to challenge myself by doing as little rehearsal for the performance as possible. I had one synth precoded, a slicing sampler. All this does is to take an audio file and play back one of n slices: in the case of these roughly 60 second long files, I used n=64, so each slice lasts just under a second. On previous occasions when I’ve done livecoding/algorave with found sounds, I’ve gone through the source audio files carefully in Audacity, looking for particular short sounds that I can isolate and shape into something resembling a drum hit, then performed with those sounds in place of drum sounds.

The slicing approach used here is deliberately less controlled: it’s a matter of luck what sound falls where, as the points at which the file is sliced are quite arbitrary, perhaps falling on plain ambience, or half-way through a percussive noise.

The performance was only to be ten minutes, which is not a lot on my timescale of livecoding in SuperCollider! I decided to start with a blank screen: in retrospect, I could have got to better musical gestures faster if I’d had maybe ten or a dozen lines precoded. Nevertheless, in this sit-down-and-concentrate atmosphere, the blank-page start was quite intriguing for the audience, I think.

The performance mostly went well, although there was one of those moments where a line that looked correctly typed, and evaluated without error, did not seem to be doing anything! I still can’t figure out what I was doing wrong.

I’m pleased with this idea and intend to repeat it, particularly the site-specific approach to gathering sounds.

Here’s the code, not much to see here:

(
// setup
s.waitForBoot{
	SynthDef(\sl, { |out, gate=1, buf, slices=16, slice=0, freq=261.6255653006, amp=0.1|
		var myenv, env, start, len, sig, rate, basefreq = 60.midicps;
		rate = freq / basefreq;
		len = BufFrames.kr(buf);
		start = (len / slices * slice);
		myenv = Env.asr(attackTime: 0.01, sustainLevel: 1, releaseTime: 0.1);
		sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, startPos: start);
		env = EnvGen.kr(myenv, gate, doneAction: 2);
		Out.ar(out, sig * env * amp)
	}).add;
	t = TempoClock(140/60).permanent_(true);
	u = TempoClock(140/60 * 2/3).permanent_(true);
	Pbindef.defaultQuant_(4);
	Pdefn.defaultQuant_(4);
};
)

(
~paths = [
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/bar.aiff",         // 0
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/c02ambience.aiff", // 1
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/cafe.aiff",        // 2
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/co2.aiff",         // 3
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/corner.aiff",      // 4
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/lane.aiff",        // 5
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/seatingbank.aiff", // 6
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/space.aiff",       // 7
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/stage.aiff",       // 8
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/stairs1.aiff",     // 9
	"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/stairs2.aiff"      // 10
]
)

~thebuf = Buffer.read(s, ~paths[7]);
~thebuf.play

//
Pbindef(\x, \instrument, \sl, \buf, ~thebuf, \slices, 64)
Pbindef(\x).play(t)
Pbindef(\x, \slice, 0)
Pbindef(\x, \slice, 64.rand)
Pbindef(\x, \slice, Pwhite(0,63,inf))
Pbindef(\x, \legato, 1/4)
Pbindef(\x, \dur, 1/4)
Pbindef(\x, \note, Pwhite(0,12,inf))

//
Pbindef(\y, \instrument, \sl, \buf, ~thebuf, \slices, 64)
Pbindef(\y).play(u)
Pbindef(\y, \slice, 0)
Pbindef(\y, \slice, 64.rand)
Pbindef(\y, \legato, 4)
Pbindef(\y, \dur, 1/2)
Pbindef(\y, \note, Pwhite(-12,12,inf))

t.sched(t.timeToNextBeat(4), { u.sync(120/60, 10) });

...

The Next Station – ‘if only I had’

August 22, 2016

Tomorrow sees the launch of The Next Station, a project by Cities and Memory to reimagine the sounds of the London Underground. My contribution to this project is an audio work called if only I had, constructed entirely from a 3'42 recording of a train arriving at and departing from Pimlico station.

The title is taken from Spike Milligan’s ‘Adolf Hitler: My Part in his Downfall’:

‘Edgington and I promenaded the decks. Harry stopped: “If only I had a tube.” “Why?” “It’s quicker by tube.”’

… an inconsequential pun that has, for some reason, always stuck in my mind!

I made this piece as a personal study into the possibility of using livecoding techniques in SuperCollider to develop a fixed piece. In recent months I have been very active in exploring coding in this way, particularly in the context of algorave: if only I had leverages these techniques. Here’s some of the code I used, with explanation:

(
s.waitForBoot{
	Pdef.all.clear;
	Pbindef.defaultQuant = 4;
	t = TempoClock.new.tempo_(120/60).permanent_(true);
	~path = "/Users/jsimon/Music/SuperCollider Recordings/pimlicoloops/";

This is a remnant of what turned out to be a bit of a false start to the project. My initial idea was to look through the file for shortish sections, in the region of 2–3 seconds long, that, when looped, had some sort of rhythmic interest. This was done offline, using Audacity. I thought it might be interesting to develop the piece by using these fragments almost in the manner of drum loops, and wrote some code to juxtapose them in various ways at different tempi. This didn’t really produce anything very effective, however: the material is rather dense and noisy, and when looped together the rhythmic interest was lost in a broadband mush of sound.
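As a rough sketch of the kind of thing I mean (a hypothetical reconstruction rather than the actual discarded code; it assumes ~path from the setup above):

(
// hypothetical reconstruction of the abandoned 'drum loop' idea, not the original code
SynthDef(\loop, { |out=0, buf, rate=1, amp=0.3|
	var sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, loop: 1);
	Out.ar(out, sig * amp)
}).add;
)
~loops = (~path ++ "*.aiff").pathMatch.collect({ |p| Buffer.read(s, p) });
x = Synth(\loop, [\buf, ~loops[0]]);             // one fragment looping
y = Synth(\loop, [\buf, ~loops[1], \rate, 0.9]); // a second, slightly slower: dense broadband mush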

Instead, I revisited a synth from an earlier project that slices a buffer into 16 pieces for playback:

~bufs = (~path ++ "*.aiff").pathMatch.collect({ |i| Buffer.read(s, i) });

SynthDef(\slbuf, { |out, buf, slices=16, slice=16, freq=440, sustain=0.8|
	var myenv, env, start, len, rate, sig, sus, basefreq = 440;
	rate = freq / basefreq;
	len = BufFrames.kr(buf);
	start = (len / slices * slice);
	sus = BufDur.kr(buf)/16 * sustain * 1.1;
	myenv = Env.linen(attackTime: 0.01, sustainTime: sus, releaseTime: 0.1);
	sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, startPos: start, loop: 1);
	env = EnvGen.kr(myenv, 1, doneAction: 2);
	Out.ar(out, sig * env)
}).add;

As well as experimenting with reverb, I also had a delay effect in here at one point. Again, the nature of the already fairly resonant material meant that this was not that useful. In the end, I only used the reverb at the very end of the piece as a closing gesture.

~rbus = Bus.audio(s, 2);
SynthDef(\verb, { |out = 0, room = 1, mix = 1|
	var sig = FreeVerb.ar(In.ar(~rbus, 2), room: room, mix: mix);
	Out.ar(out, sig)
}).add;
s.sync;
Synth(\verb);

At some point in developing the project, it occurred to me to try playing the sliced material together with the original file. This seemed effective, and gave me a clear trajectory for the work: I decided that the finished piece would be the same pop-song length as the original recording. In experimenting with this approach – playing sliced loops in SC at the same time as playing back the whole file in Audacity – I found myself gently fading the original in and out. This is modelled in the synth below: I used an explicit random seed together with interpolated low frequency noise to produce a replicable gesture:

~file = "/Users/jsimon/Documents/ Simon's music/pimlico the next station/Pimlico 140516.wav";
~pimbuf = Buffer.read(s, ~file);
s.sync;
SynthDef(\pim, { |out=0, start=0, amp=1|
	var sig, startframe, env;
	startframe = start * 44100;
	RandSeed.ir(1, 0);
	env = EnvGen.kr(Env.linen(sustainTime: ~pimbuf.duration - 9, releaseTime: 9));
	sig = PlayBuf.ar(2, ~pimbuf, startPos: startframe, doneAction: 2) * LFNoise1.kr(1/5).range(0.05, 1.0);
	Out.ar(out, sig * amp * env);
}).add;

There was a nice moment in the original where the accelerating electric motors of the departing train created a series of overlapping upward glissandi, sounding very like Shepard tones, or rather, the sliding Risset variation. Looking to enhance this gesture, I tried a couple of my own hacks before giving up and turning to a nice class from Alberto de Campo’s adclib:

~shep = {
	var slope = Line.kr(0.1, 0.2, 60);
	var shift = Line.kr(-1, 2, 60);
	var b = ~bufs[8];
	var intvs, amps;
	var env = EnvGen.kr(Env.linen(sustainTime: 53, releaseTime: 7), 1, doneAction: 2);
	#intvs, amps = Shepard.kr(5, slope, 12, shift);
	(PlayBuf.ar(b.numChannels, b, intvs.midiratio, loop: 1, startPos: 3*44100) * amps).sum * 0.2
};
s.sync;

All of the above is essentially setup material. The gist of the composition was in iterative experimentation with Pbindefs, as can be seen below: trying out different slicing patterns and durations, working with the various segments I’d prepared beforehand in Audacity.

Pbindef(\a, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/2, \buf, ~bufs[1], \note, 0);
Pbindef(\b, \instrument, \slbuf, \slice, Pser((8..15).pyramid(1), 32), \dur, 1/4, \buf, ~bufs[1], \note, 0);
Pbindef(\c, \instrument, \slbuf, \slice, Pser((2..5).pyramid(1), 32), \dur, 1/4, \buf, ~bufs[0], \note, 0);
Pbindef(\d, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, 0);
Pbindef(\e, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, 12);
Pbindef(\f, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, [-12,12,24,36]);
Pbindef(\g, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4.5, \buf, ~bufs[3], \note, [12,24,36]);
Pbindef(\h, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/5, \buf, ~bufs[3], \note, [-12,12,24,36]);
Pbindef(\i, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(12), 1), \dur, 1/6, \buf, ~bufs[3], \note, [-24,-12,12,24,36]);
Pbindef(\j, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/7, \buf, ~bufs[3], \note, [-24,-12,12,24,36], \amp, 0.3);
Pdef(\k, (instrument: \slbuf, buf: ~bufs[3], slice: 2, note: [-24,-12,12,24,36], amp: 0.5, out: ~rbus));

s.sync; };

The final composition was produced by running and recording the code below, which uses the handy Psym as a way to sequence the gestures developed above. The code by this point is entirely deterministic, and would produce the same piece on every run. No further editing was done, apart from normalising in Audacity.

// start from beginning
fork{
	Synth(\pim);
	2.wait;
	Psym(Pseq("aabaccddeeffghiijk", 1).trace).play(t);
	(60+14+20).wait;
	"shep".postln;
	~shep.play;
	10.wait;
	"off again".postln;
	Psym(Pseq("aabaccddeeffghiijk", 1).trace).play(t);
};
)

s.prepareForRecord
s.record
s.stopRecording

Overall, I’m happy with the piece, and glad to have been able to contribute to this very interesting project.

...

Layering visuals with SuperCollider

August 12, 2016
All, Tech

When I was at emfcamp last week, I saw a couple of instances of people layering up visuals with their code. Claudius Maximus had that going with his clive system, SonicPi (and Gibber??) can do it out of the box, and Shelly Knotts had some sort of setup for (I think?) doing it completely within SuperCollider, with the cool idea of a webcam pointing down at her hands on the keyboard.

After a bit of thought, I’ve come up with this, just a still for now:

[image: sclayervisuals.png]

How this works: I used a $10 utility called ScreenCaptureSyphon that can, amongst other things, grab an application window and send it into Syphon. Then Resolume Arena runs as a Syphon client, which lets me do almost anything, including, as in the shot above, pulling in the webcam and colorizing. I’ve not tried it yet, but Arena exposes its interface to OSC, so it should in theory be possible to script visual changes from the SuperCollider IDE.
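The SuperCollider side of that would presumably be as simple as something like this – an untested sketch, where the port and the address are my assumptions rather than checked values (Arena publishes its actual OSC address map, and the input port is set in its preferences):

// untested sketch: sending OSC from sclang to Resolume Arena
n = NetAddr("127.0.0.1", 7000); // 7000 assumed as Arena's OSC input port
// hypothetical address; check Arena's OSC map for the real one
n.sendMsg("/composition/layers/1/clips/1/connect", 1); // e.g. trigger a clip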

A reasonably concinnitous hack, if I say so myself. (Mind you, it’s the first thing I’ve ever done with my MacBook Air that turns the fan on full blast the whole time!)

...

Getting my fingers burned

July 28, 2016
All, News, Tech

Sooner or later, I’m going to have to get back to the ol’ soldering iron:

[image: JS_Synth-ttt01]

I have a plan: to reconstruct the PE Minisonic you can see above, which I made while I was at school. I still have some of the boards…

...

A visit to CoPeCo

June 27, 2016
News

In my day job in charge of the masters programmes in music at the RCS, I can’t help but be interested in innovative new models of postgraduate study. So I was very happy to be able to visit Hamburg this week to observe the CoPeCo programme in action. This is a joint European masters in contemporary performance and composition, where the students study in four different institutions in Estonia, Sweden, France and Germany. The first cohort of the programme are just finishing up at the Hochschule für Musik und Theater, where I was able to attend a concert involving two of the current students, Émilie Girard-Charest (cello) and Sylvain Devaux (oboe), joined by HfMT student Fanis Gioles (percussion).

Speaking with them afterwards, they were at pains to point out that this particular concert was perhaps not typical, featuring as it did mostly precomposed music rather than devised, improvised, or collaborative work. For all that, it was a deeply engrossing evening. The first work was Mauricio Kagel’s Con Voce, ‘for three mute players’. The clue is in the title, with the work opening like a putative companion piece to Cage’s 4'33, the three players intense, motionless and… utterly silent. For a very long time! Or until, as Kagel’s direction has it, ‘the listeners’ level of attention is in danger of crumbling’. A piece that demands, and was given, great commitment and concentration from the players.

The next piece was the premiere of a new composition by American composer Heather Stebbins, who it seems had crossed paths with the CoPeCo cohort in Tallinn. A great piece, called Crow song, a duet for oboe and cello, working from tentative, unvoiced sounds through to something approaching a melodic shape, and back again. Émilie then gave a performance of Enno Poppe’s work for solo cello, Herz, all microtones and glisses, beautifully played with an ethereal and un-cello-like tone.

The last piece was Dmaathen by Iannis Xenakis, for oboe and percussion. I do enjoy this kind of piece, though it is admittedly an odd combination: I found my attention drawn more to the percussion writing than to the perhaps rather dominated oboe.

I’m looking forward later this week to meeting up with Konstantina Orlandatou, who coordinates the CoPeCo programme in Hamburg. Of course, this is a tragically bad time to be even thinking about European collaborations, but… it would be great if we could set up some sort of partnership along these lines between the RCS and comparable institutions on the continent.

...

First public livecode

April 26, 2016
News, Tech

Last night I stumbled into my first public outing of some livecoding I’ve been working on in SuperCollider. The context was an improvisation night called In Tandem run by Bruce Wallace at the Academy of Music and Sound in Glasgow. I hadn’t intended to play, as I really don’t feel I’m ready yet, but I had my laptop and cables with me, they had a projector, so…!

I was jamming along with three other people, on bass, guitar and analog synth. It all went by in a blur, but everyone there seemed to think what I was doing was ok – mostly making grooves out of a random collection of drum samples, but running some algorithmically chosen chords as well.

The code is below: this is my screen exactly as I left it at the end of the night, mistakes and all. Although Toplap say ‘show us your screens’, they don’t say ‘show us your code’, but… it seems the right thing to do.

// the end!
// they still going
// if you're curious, this is SuperCollider
// musci programming language
// writing code live is called, er, livecoding
// i'm just starting out
"/Users/jsimon/Music/SuperCollider Recordings/hitzamples/".openOS;

(
s.waitForBoot{
	Pdef.all.clear; // clear things out
	~hitzpath = "/Users/jsimon/Music/SuperCollider Recordings/hitzamples/"; // a folder of samples
	~hbufs = (~hitzpath ++ "*.aiff").pathMatch.collect({ |i| Buffer.read(s, i) }); // samples into an array of buffers
	t = TempoClock(140/60).permanent_(true); // tempo 140 bpm
	u = TempoClock(140/60 * 2/3).permanent_(true); // tempo 140 bpm * 2/3
	SynthDef(\bf, { |out=0, buf=0, amp=0.1, freq=261.6255653006|
		var sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * freq/60.midicps, doneAction: 2);
		Out.ar(out, sig * amp)
	}).add; // this whole chunk defines a synth patch that plays samples
};
// Pdef.all.clear;
// "/Users/jsimon/Music/SuperCollider Recordings/".openOS;
// t.sync(140/60, 16);
)

(instrument: \bf, \buf: ~hbufs.choose).play; // play an event using the synth called \bf
// pick a randoms sample from the array
(instrument: \bf, \buf: ~z).play;
~z = ~hbufs.choose;

t.sync(140/60, 32); // gradual tempo changes possible
u.sync(140/60 * 2/3, 16);
v.sync(140/60 * 5/3, 16);

Pbindef(\x, \instrument, \bf, \buf, ~hbufs.choose).play(t).quant_(4);
Pbindef(\y, \instrument, \bf, \buf, ~hbufs.choose).play(u).quant_(4);
Pbindef(\z, \instrument, \bf, \buf, ~hbufs.choose).play(v).quant_(4);
Pbindef(\z, \instrument, \bf, \buf, ~hbufs.choose).play(v).quant_(4);

~g1 = {~hbufs.choose}!16; // choose sixteen samples at random = one bar full
~g2 = {~hbufs.choose}!16;
Pbindef(\x, \buf, Pseq(~g1, inf)); // play those sixteen samples chosen
Pbindef(\x, \buf, Pseq(~g2, inf)); // different sixteen, so, a variation.
Pbindef(\x, \dur, 0.5);

~d1 = {2.rand/10}!16;
~d2 = {2.0.rand/10}!16;
Pbindef(\x, \amp, Pseq(~d1, inf));
Pbindef(\x, \amp, 0.2);
Pbindef(\x, \note, Prand((-36..0), inf));
Pbindef(\x, \note, Pseq({(-24..0).choose}!16, inf)); // pitch each sample down by random amount
Pbindef(\x, \note, nil);
Pbindef(\x).resume;
Pbindef(\x).pause;
Pbindef(\z).pause;
Pbindef(\y).resume;

// hmm. blx diminished, that's just C major!
// was using \degree instead of \note, better sounds a bit more like messiaen now :)
~c = {var x = Scale.diminished2.degrees.scramble.keep(4).sort; x.insert(1, (x.removeAt(1)-12))};
// hexMajor thing also works beautifully now!
~c = {var x = Scale.hexMajor6.degrees.scramble.keep(4).sort; x.insert(1, (x.removeAt(1)-12))};

// next question might be changing \note, \dur and \root in a coordinated way
(
Pbindef(\k,
	\note, Pstutter(Prand([5,7,9,11,13]*2, inf), Pfunc(~c)),
	\dur, 0.5,
	\root, 3, // best option for feeling of key change
	\amp, Prand((2..5)/70, inf)
).play(t);
)
Pbindef(\k).pause;
Pbindef(\k).pause;

...

Recursive synthesis

March 24, 2016
News, Tech

[image: recursivesynth.jpg]

I’ve been working for a while with an improvising setup that uses what is sometimes jokingly called ‘recursive synthesis’ – that is, plugging an effect unit back in to itself and experimenting with the no-input feedback sounds.

Today I’ve had some success with the next step in developing this system. I’ve written a SuperCollider patch that allows me to gate and pitchshift the feedback sounds, so that I can begin to find a way to play them musically using a keyboard. Here’s the very first run at playing this system: careful, some rather loud and uncontrolled noises here!

[audio: https://tedthetrumpet.files.wordpress.com/2008/09/recursor01demoedit.mp3]

In the picture, you can see the work-in-progress setup. There’s a cheapo DigiTech RP55 guitar pedal feeding back through a small mixing desk. I’m using a swell pedal to control some of the parameters of the various fx from the DigiTech, particularly sweeping the pitch of the ‘whammy’ and ‘pitch shift’ functions, set up in various presets. The mixing desk is not entirely necessary, but the tone controls are useful to have in the feedback loop.

Below is the code for the SuperCollider patch. As always, my thanks to the developers of this software, and all the help received from the community on the mailing list.

(
fork{
	~velbus = Bus.control.set(1); // not using yet
	s.sync;
	SynthDef(\pitchin, { |midinote = 60, gate = 1, amp = 0.1|
		var in, sig, env, ratio, trans, shift, sel;
		trans = midinote - 60;
		in = SoundIn.ar([0,1]);
		ratio = trans.midiratio;
		shift = PitchShift.ar(in, pitchRatio: ratio, timeDispersion: 0.1);
		sel = trans.abs > 0;
		sig = Select.ar(sel, [in, shift]);
		env = EnvGen.kr(Env.adsr, gate, doneAction: 2);
		Out.ar(0, sig * env * amp * 37); // compensate for quiet
	}).add;
};
MIDIClient.init;
MIDIIn.connectAll;
~on.free; ~off.free; ~cc1.free;
~notes = Array.newClear(128); // array, one slot per MIDI note
~on = MIDIFunc.noteOn({ |veloc, num, chan, src|
	~notes[num] = Synth(\pitchin, [\midinote, num, \amp, veloc * 0.2/127]);
});
~off = MIDIFunc.noteOff({ |veloc, num, chan, src|
	~notes[num].release;
});
~cc1 = MIDIFunc.cc({ |val|
	val.postln;
	~velbus.set(val/127*4);
}, 1, 0); // cc1 on channel 0 = midi channel 1, not using yet
)

...

Patchin' at the Hague

March 18, 2016
News, Tech

I’ve been at the Koninklijk Conservatorium in Den Haag for the last couple of days, working with a group of academics from all over Europe on the METRIC project – ‘Modernizing European Higher Music Education through Improvisation’. (If anyone can tell me in which language that acronym works, I’d love to know!)

While I’m here, I’ve been amusing myself with some work on a simple SuperCollider patch to pitchshift live audio, that I intend to use as part of my own improvisational practice. Particularly tickled to be able to say in future that I ‘worked on this patch at the Institute of Sonology in The Hague’, which is strictly speaking… more or less true!

It also brings back fond memories of another piece of software associated with this institution, ACToolbox, which I used quite extensively in the past, although not so much now.

Let’s hear it for koncon!

Here’s some test code, part of a larger project:

// Institute of Sonology, Den Haag, 18 Mar 2016 :)
(
SynthDef(\pitchin, { |trans = 0, gate = 1|
	var sig, env, ratio;
	sig = SoundIn.ar([0,1]);
	ratio = trans.midiratio;
	sig = PitchShift.ar(sig, pitchRatio: ratio, timeDispersion: 0.1);
	env = EnvGen.kr(Env.adsr, gate, doneAction: 2);
	Out.ar(0, sig * env);
}).add;
)
x = Synth(\pitchin);
x.set(\trans, 12); // can sound a bit comby, try timeDispersion about half grainsize
x.set(\trans, -12);
x.set(\trans, 7);
x.set(\trans, -7);
x.free;

...