Korg DS-10

So I've had my Korg DS-10 for almost a month now, and I've assembled my general thoughts below.

The Synths
There are only two of them, which is somewhat restrictive. Supposedly, though, you can daisy-chain these mini Korgs together to make a tiny retro orchestra, so the problem can be circumvented if that's really what's holding you back.
I love the patcher that comes with it, even if it is small (the random value generator pretty much sold me) and the synths themselves come with a whole host of ADSR envelope goodness.
Possibly the best thing about the synths is their several modes of input. Besides the keyboard and sequencer (by the way, the sequencer is quite robust, letting you change the gate, volume, and panning of each note), the "Kaoss" touch pad rocks my world. If you're going to use the DS as a musical instrument, the Kaoss pad is the way to go. You can configure the X and Y axes to parameters like pitch, volume, or panning; then any (x,y) coordinate you touch will activate the synth with those parameters.
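To make the mapping concrete, here's a hypothetical sketch of an X = pitch, Y = volume configuration (the ranges are my own invention, not the DS-10's actual values):

```python
def kaoss_touch(x: float, y: float):
    """Map a normalized touch point (x, y in 0..1) to synth parameters,
    mimicking an X = pitch, Y = volume Kaoss pad setup."""
    midi_note = 36 + round(x * 48)   # X sweeps four octaves up from C2
    volume = y                       # Y is linear volume
    return midi_note, volume

kaoss_touch(0.5, 0.8)   # a center-ish touch: middle C, fairly loud
```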
The drum synths get special treatment, with four voices dedicated to the overall drum track.

The Patterns
The patterns are only 16 beats long, so if you're in 4/4 and want to be able to play 16th notes, that means each pattern is only a measure long. And, since each song holds only 16 patterns, that means only 16 unique measures of music before you have to repeat something.

The demos on the DS-10 site are particularly bad in this respect, as they just show someone choosing patterns in sequential order (which they just as easily could have programmed into the Song Sequencer).
I give XSeed a lot of props for allowing you to change a pattern's length arbitrarily, meaning complex time signatures (7/8 or 3/4 instead of just 4/4) are possible.

The Song Sequencer
Is also unbearably short, weighing in at 100 consecutive patterns. I composed a basic song as an example. It uses all 100 patterns and at 130bpm is less than 3 minutes in length. Clearly the Korg DS-10 is not meant for composing elaborate symphonies, but is better suited for improvisation on top of a background that repeats.
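For scale, here's the back-of-envelope math (a sketch assuming 4/4 and that every pattern runs its full 16 steps; shortened patterns bring the total in under three minutes):

```python
# Rough upper bound on DS-10 song length, assuming 4/4 and full-length
# patterns (16 steps read as 16th notes, i.e. one measure / 4 beats each).
patterns = 100
beats_per_pattern = 4
bpm = 130

total_beats = patterns * beats_per_pattern   # 400 beats
minutes = total_beats / bpm                  # roughly three minutes
print(f"{minutes:.2f} minutes")
```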

Byrd by esbie

The Effects
The DS-10 comes with some nice effects, including chorus, flanger, and delay. Unfortunately the interface is sort of confusing, and I often find myself wanting to add more than one effect to a synth.

The Interface
is minimal, but not beautiful. What's sad is that I really believe they could've reached a much broader audience if they'd just taken the time to make the interface more polished and friendly, since the 'game' itself has a pretty shallow learning curve.

The Verdict
I'll be using my DS-10 primarily for its synth capabilities... playing with the patcher and using the Kaoss touch pad probably more than anything else. While the patterns and song sequencer are helpful, I don't see them being able to handle more complex compositions.
That being said, this thing is pretty much a techno machine-beast. Rawk.


Contrapunkt Fall '08

Just got the recording of the Contrapunkt concert, which I'm pretty pleased with. The piece I performed is called Signal, named after the concept of the signal-to-noise ratio. The piece is essentially white noise that is filtered using LSDJ and then run through a feedback delay in PD. While the premise is basic, the complexity comes in the real-time performance: changing the noise filter, increasing the delay, and so on. The piece ends with a recording of vocals sliding around on the same pitches as the white noise being filtered.

For those of you who haven't worked with LSDJ, the filters are based on a two-digit hexadecimal number. Since I wasn't exactly sure how the filters were being applied to the white noise, I made myself a grid of expected outcomes for filtering white noise on C7.
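The grid itself is mechanical to lay out; here's a quick sketch that generates every two-digit hex value, one row per high nibble (what each nibble actually does to the noise channel is exactly what the listening grid was meant to answer):

```python
# Lay out all 256 two-digit hex filter values as a 16x16 grid:
# one row per high nibble, one column per low nibble.
rows = []
for hi in range(16):
    row = [f"{hi:X}{lo:X}" for lo in range(16)]
    rows.append(" ".join(row))
print("\n".join(rows))
```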

Which I was pretty proud of myself for jotting down in my Moleskine. An article in the Cornell Daily Sun proclaimed Signal

"arguably the most radical piece in the concert"

And here it is:


In a Holding Pattern

Holy crap, where did November go?

Seems like things have come to a standstill after a flurry of activity. I have yet to record the gameboy piece from my midterm, and the recording of the Contrapunkt piece has yet to make its way to me (along with concert pictures). I do have a link to the newspaper article about the concert, and I'm already working on more composition ideas for my final project.

In other news, Chelsea wants me to work on a purely visual game with her next semester, which I might do if we can think of something interesting enough.

Stay tuned for more later. Maybe even pictures from the Wind Ensemble's trip to Philadelphia last weekend.


The Largest PD Patch I've Ever Made

Thought I'd take some time to show you the PureData patch that I created for this piece (apologies for the long post). The piece was performed this past Thursday and the performance went great, despite me not having tried it out on the studio's new MacBook Pro until the day of the concert.

In the top left corner is the top of the patch hierarchy, which shows the master sound level of the piece. (Click on an image to enlarge it.) To the right is the "pd engine" where I keep all of the patch's internal logic. And below is what I call the dacbox. Any audio that is thrown to "dac_left", "dac_right", or "dac_out" is caught, added together, and multiplied by master_vol before being sent to the digital-to-analog converter (i.e. the speakers).

Under chroma you have the inner workings of the following idea: the patch listens to the gameboy for discrete pitches (this is what "fiddle~" does in the picture) and then plays those pitches, at random, until it is told to stop. The subpatch "chroma_memory" pictured here takes care of the first problem--listening to the gameboy. Once fiddle~ finds the midi value of the note played, we mod that value by 12 (since there are 12 tones in the chromatic scale) and OR its bitwise representation into the running record of previous notes heard. (In hindsight, storing those values in a pd table would have been clearer, but this approach appeals to the bit-twiddling geek in me.) Chroma_memory can also listen to a chord progression instead, using the "receive chroma_chord" object.
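In Python terms, the bookkeeping chroma_memory does looks roughly like this (a sketch; `hear` is a hypothetical stand-in for the values fiddle~ feeds the subpatch):

```python
# chroma_memory's bit-twiddling: remember which of the 12 pitch classes
# have been heard, as a 12-bit mask (bit n set = pitch class n was heard).
heard = 0  # nothing heard yet

def hear(midi_note: int) -> int:
    """Fold a MIDI note into its pitch class and OR it into the mask."""
    global heard
    pitch_class = midi_note % 12        # 12 tones in the chromatic scale
    heard |= 1 << pitch_class
    return heard

hear(60)   # middle C -> pitch class 0, mask = 0b000000000001
hear(64)   # E        -> pitch class 4, mask = 0b000000010001
hear(76)   # E again  -> bit already set, mask unchanged
```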

Chroma_selector on the right determines which note will be played at what time. Loadbang starts the metro once the patch is loaded, and the metro's tempo is constantly changed by the random 1000 object (where 1000 is in milliseconds). At these random time intervals, random 12 is asked to generate a note between 0 and 11, which corresponds to one of the pitches in the chromatic scale.
Chroma_validation makes sure that the note chroma_selector has chosen to play (left inlet) is a part of the subset of notes that chroma_memory has said are OK to play (right inlet). If the selection is valid, the appropriate sound file is loaded, panned (via chroma_panner not pictured here), and thrown to dac_left and dac_right.
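Chroma_selector and chroma_validation together boil down to something like this (again a sketch, with `heard_mask` standing in for chroma_memory's output):

```python
import random

def pick_valid_note(heard_mask: int):
    """One metro tick: draw a random pitch class 0-11 (chroma_selector),
    then accept it only if its bit is set in the mask (chroma_validation)."""
    candidate = random.randrange(12)
    if heard_mask & (1 << candidate):
        return candidate   # valid: load, pan, and play the matching sound file
    return None            # invalid: stay silent until the next tick
```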

The fft subpatch takes white noise (aka "noise~") and convolves it with the signal coming from fft_timbre (in this case, vocal harmony). The twist is that conv_lvl determines how much the white noise actually sounds like fft_timbre. A similar crossfade relationship is happening between chords and strings. I'm not showing the fft-analysis subpatch here because it's identical to the timbre-stamp help example in pd.

These are my midi controls, using my Axiom25. Most of the ctlin objects are assigned to faders except for the stuff at the bottom which is assigned to buttons. Button 20 determines whether chroma listens to the gameboy, or to the chords. Button 22 starts all chord progressions, while 23 stops them all. Button 24 starts the chords without counterpoint.wav, while 25 starts only counterpoint.wav. When I say chord progressions, I mean there is a soundfile for gameboy harmony, a soundfile for string harmony, a soundfile for vocal harmony, and of course the chords sent to chroma_memory. Counterpoint.wav sounds like this:

This last subpatch is just triggering the three wav files at the bottom using various techniques. The graphical bang labeled "introduction" plays the files one time through, whereas there is also a random loop (complete with killswitch), and "end loop" which is a different loop for the end of the piece.

Even seeing every part of the patch, the final product is hard to conceptualize without hearing it. All the patch really does is set up ways for me to perform; there is no actual score here. I'm hoping to record a studio performance of the piece so I can share it here, but in the meantime let me know if this patch explanation made any sense at all :)


Little Sound DJ

My childhood gameboy, an LSDJ cartridge, and a pair of new headphones later, I've started composing my piece for an October 30th Midday Music concert. The inspiration for the piece comes from chiptune music, which I stumbled upon a couple months ago.

(Gameboy compliments of my mom, headphones compliments of Phil)

Little Sound DJ (LSDJ) is a cartridge for the gameboy that allows a musician to create and sequence those low-bit sounds my generation has come to know and love. The actual cartridge has been out of print for a while now; you can either flash the ROM to a blank cartridge like I did (via nonfinite) or search on eBay.

As far as hardware is concerned, people seem to prefer the original Gameboy sound even though LSDJ is also compatible with the Gameboy Color and Gameboy Pocket. Lots of people also modify their gameboy by adding a 1/8" jack to reduce ground noise, or a backlight for the screen.

In LSDJ, you can choose Square Wave, Drum Kit, Speech Synth, or Noise as your instrument of choice, then use filters, envelopes, and transpositions to alter the sound any way you like. Once that's done, there's a four-voice sequencer that you can use to your heart's desire. You can do some neat things like vibrato, slides, and panning, which I'm excited to play with. Other things, like the drum kit and speech synth, are not as interesting to me (probably because better alternatives have appeared since ~2000, when the software was made).

I should have sound samples for you shortly, along with a big description of the PD patch I'm writing. Basically, with gameboy samples and live input from LSDJ, I'm hoping to make a composition which demonstrates that low-bit square waves can be used for more than just cheesy video game music. Not that I don't love cheesy video game music :)


The B Word

I wanted to take a second to point out some peculiarities of my day today, before I go on vacation.

There was a big accident in the middle of college ave. today, causing traffic delays for cars and pedestrians alike. It looked like some people were severely injured, so I hope they're ok.

The Stealthcat Productions blog was marked as spam by Blogger. There wasn't anything particularly spammy about the blog, so I'm really curious what their algorithm for spam detection is.

I'm listening to This American Life on the current economic crisis, trying to get a better understanding of this massive BAILOUT. What's peculiar about this is I've been getting emails like this from the engineering department:

We in Engineering Career Services have sensed a general anxiety among students about the economy and the availability of jobs.

We are not fortune-tellers, but from what we see at this time, it does not appear that the problems plaguing the financial and housing market sectors are having a major impact on the continuing need for engineers in traditional industries.

We in Career Services and our employer partners are perplexed to note a significant downturn in student applications (resume submissions) this year over last -- in some cases, resumes submitted to specific employers have been down by 80% or more -- both for large companies and smaller firms. We are hearing from employer after employer that they anticipate hundreds of engineering jobs to be filled this year, and they hope to fill many of those openings with cornell engineers.

It appears at this time that traditional engineering jobs will still be plentiful at the entry level, even if the total need slips by a few percentage points. In addition to energy (you can expect this sector to boom, regardless of who becomes our next president!), and defense, the need for engineers in most traditional industries is not likely to diminish much -- unlike the recession in the early 2000s, today's problems do not stem from the technical sector -- however, students seeking careers in the financial or related sectors may see some greater retraction as those industries reorganize. It is uncertain at this time how much of that downturn might spread to consulting and other business application industries, but recruiting by those industries has been fairly comparable to previous years.

As for on-campus recruiting, for many employers, not enough resumes are being submitted to even fill an interview schedule, and employers are cancelling schedules, not because they don't have jobs, but because they don't have enough applicants to warrant a trip to Ithaca.

As of this writing, the employers visiting campus DO have plenty of jobs they would like to fill with Cornell graduates. They learned lessons from the previous downturn to not overhire, and to not raise expectations by recruiting on campus when there are few jobs to fill.

They even bolded the important points in the letter in case you'd rather not read the whole thing. Offhand, I find this letter quite amusing: the fact that our career services staff "are not fortune-tellers," the idea students have that this is just like the dot-com boom, and the career center trying to allay our fears.

I think the most interesting part is that we have all these fears and no facts. And that maybe if we did have the facts, there wouldn't be as much fear. I wonder if this low recruitment rate is happening in other colleges.



Score11 is a note event preprocessor that is used in conjunction with Csound. I'm so glad I won't have to use this archaic language after this semester. Much of the language is arbitrary... it might as well be programmed in assembly since it makes about as much sense. Thankfully we've just started a new section on PD.


Stealth Cat Productions

First the good news: the game is finished!

Now for the bad news: my lizard friend isn't a part of it. He's not even close to the art style needed for this game... you see what I mean.

So what did I contribute to the game? Well besides design and moral support... not much. Sure I'm only taking it for 1 credit but I'd still like to be able to get some solid experience out of it. So I offered to be Content Lead for our next project. Content Lead is in charge of making sure UI assets are itemized and delivered in a timely fashion. Here's to hoping this next round works out better.

Since I can't host the file on my school webspace forever, there will also be a link to the game on our brand new design blog: Stealth Cat Productions. You'll also (eventually) find a more in-depth look at our design process from different points of view within the team... not just mine.

For now, go ahead and download the game here, and let me know what you think in the comments. Honestly I can't get past level 3 without the cheat codes >_>


24x24 pixels is a very small canvas

When Jess asked me if I wanted to work on a game this semester, of course I said yes. But, in the spirit of taking less than 20 credits, I'm only taking the class for 1 credit. I'm stuck as the artist because the group needs artists.

Programming is being done in GameMaker, music in FruityLoops (uhhh, hmm, better than GarageBand I guess), and art in... well... we're all using different programs. It was my job to make the avatar, so there he is, in all his pixel art glory.

I won't say too much about the game itself, mostly because I'll be posting the .exe here in a week so you can play it.


ElectroAcoustic Techniques

Since I'm not sure whether or not my research is continuing this semester (Spencer, where ARE you?), I took it upon myself to find more credits, and I found them in the form of a graduate seminar at the digital music center. Our first assignment was a 30-second tape piece using Audacity and Cubase. I wish I had the pieces of the 5 other people taking the class so I could let you listen to them; we're a really diverse group and their pieces were amazing. Unfortunately all I have is my submission, but here it is.

The assignment was to do something with language, so everything you hear is a manipulation of my own voice (except for the TTS voice I found from AT&T). The 0's and 1's in the middle of the piece are actually reciting parts of the text in binary.

Comments, questions, concerns? We're learning Csound next.


Wii Boxing

I have a Wii, ladies and gentlemen. Hence, productivity is on the decline. I just drove back to New Mexico from San Jose, and then flew to Ithaca, NY. School started three days ago. Normally I would wait to post until I'd come up with some snazzy arrangement of Twinkle Twinkle Little Star, but alas, I have to return my poor laptop to Best Buy so they can finish the job they never actually did. I have a deep and unwavering hatred for them.

Speaking of things I hate, I will never buy another Sony product. Excellent design combined with proprietary bs. Specifically, I'm talking about my camera, where the video capture is saved as some proprietary mpg that is incredibly difficult to convert. My last camera, a Canon, saved them as avi. Back to Canon for my next camera.

Not that this post had anything to do with Computer Science or tubas, but I do have this hilarious video of my mom playing wii boxing. enjoy :)


Computer works; so does Live

My Gateway laptop just celebrated its third year of existence, and I handed it off to Best Buy just before the warranty expired. Can't say I'm impressed with the geek squad. My graphics card overheats? They cleaned out the fan. My battery lasts only half an hour? They claim it runs up to spec. My ROM drive doesn't work? Alright, at least they replaced that.

As a reward for being so patient while my computer was indulging in some TLC, I have a present just for you:

I decided to sit down and spend some quality time with Ableton Live. After all, if I can't make music as compositionally basic as techno (4/4 time, 140bpm, repeated 4 bar phrases), then I don't have much hope of making more complex electroacoustic pieces.

Drum loops are still not my forte, so although I tweaked the synth drums, the loop was already a preset inside Live. Enjoy!


Imeem makes me sad inside

I use imeem to store my music online, and ugh, it sucks. Not only is it eerily reminiscent of Myspace with its customizable pages, but the service is sketchy at best. Their player works maybe half the time for me.

Needless to say, I'm looking for an alternative. One fell into my lap the other day that I'm willing to try out: SoundCloud. SoundCloud is tailored to music artists who want to share and collaborate on the web. It's in beta right now, but I see some good ideas, including timed comments and waveform GUIs. Go ahead and check it out. Here's an example of their player. (Which probably won't show up in Google Reader... sorry about that.)

I also have a couple of beta invites if you'd like to join. Just let me know!



I'm tired of Max/MSP and PD. Graphical programming just isn't as powerful as text-based programming, and at the moment it isn't that intuitive either. So I've been looking for other software to use, and one that I've found is called Spear.

Spear uses fft analysis on any wav file of your choice to graphically display all the frequencies that make up a complex sound. Sort of like ms paint for sound, you can then manipulate the waves any way you desire.

On the y axis is the frequency of the sound, and the x axis is time. The darker the line, the louder that frequency is. Of course, the analysis itself isn't perfect, but it's more than enough to be able to mess around with the sound. You can select harmonics, scrub, pitch shift, and time stretch (among other things).
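Under the hood, a display like this is just a windowed FFT per time slice. Here's a minimal one-frame version (a sketch using numpy and an artificial 440 Hz tone in place of a real sample):

```python
import numpy as np

# One analysis frame of a Spear-style display: a windowed FFT giving
# amplitude per frequency (the y axis) for one slice of time (the x axis).
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone

window = 1024
frame = signal[:window] * np.hanning(window)   # taper the frame's edges
spectrum = np.abs(np.fft.rfft(frame))          # magnitude per frequency bin
freqs = np.fft.rfftfreq(window, 1 / sample_rate)

peak = freqs[spectrum.argmax()]                # the "darkest line" in Spear
print(f"loudest frequency: {peak:.0f} Hz")
```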

I mention Spear because I was originally going to use it in conjunction with Cubase to write this song that I'm working on. It turns out that my samples (namely the human voice) are really hard to transform with fft analysis. Even though they may not make it into the final cut, here is a voice sample and its manipulation in Spear.


niner33 analysis


Post Semester Wrap-up, Pre Summer Ramp-up

Played for Cornell's Graduation, flew to New Mexico to pick up my car, and drove to Mountain View, CA. Started work this week at Palm, working on devices I'm not allowed to blog about because of three separate confidentiality agreements. That's a shame.

Instead I've finally uploaded some of the CU Wind Ensemble concert from spring semester. Overture to Candide was conducted by a guest conductor (and as a result was really slow, imo). The piano concerto was composed by Norbert Palej, and this was its premiere performance. Fog Tropes is actually an electroacoustic piece with a brass sextet (2 trumpets, 2 French horns, a trombone, and a tuba). The result is pretty cool... it's hard to tell what is in the recording and what was actually performed live onstage.

Just ordered my midi controller, so I expect it should be here by the next post. I'm so excited :)

Overture to Candide - CU Wind Ensemble

Movimento Concertante for Piano and Wind Ensemble - CU Wind Ensemble

Fog Tropes - CU Wind Ensemble


Nothing New to Report

Just finished finals last week, so I wasn't in a position to churn out anything very creative. I'll just let you know what's in the works.

I tried to upload a video of my second project for music class, but ripping a DVD to AVI takes more skill than sb had time to learn this week. It's too bad: the project consisted of me playing the tuba, Ricardo playing the electric guitar, Stephen playing the midi keyboard, Cameron manning the computer, and Henrik 'playing' the wiimote. The wiimote was hooked up to some special graphical and particle effects using PD's Gem. It was a spectacle.

Speaking of Cameron, his final project for the class is on youtube, music thing, and gizmodo. Props to him for that. Where's my project? I didn't get to perform it due to technical difficulties. Transferring my patch from Max4.5 on PC to Max5 on Mac turns out to be pretty near impossible. (remind me to rant about how much I dislike Max/MSP later)

Our video game Stage IV is complete and the showcase went well. As soon as they put it online I'll post a link so you guys can check it out. I just had a tuba recital last week, and we (hopefully) got that on tape, so I'll be looking into that as well.

Looks like I didn't get a midi keyboard controller for my birthday, so I'm gonna have to purchase one myself for the summer. If anyone has one they love, let me know about it (I'd rather not spend more than $200)


Numerical Analysis Renders

End of the year projects are coming and going. This one in particular is for my numerical analysis class. Here we use ODE solvers to guesstimate where particles will end up in a vector field. That's boring, but the results are very pretty pictures.
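In spirit (the real thing was Matlab), the core of the project looks like this: step a particle through a vector field with an ODE solver and see where it lands. A Python sketch on a rotational field, whose exact trajectories are circles, shows why the choice of solver matters:

```python
import math

# Trace a particle through a vector field with two ODE steppers.
# Illustrative field: a pure rotation, dx/dt = -y, dy/dt = x,
# whose exact trajectories are circles around the origin.

def field(x, y):
    return -y, x

def euler_step(x, y, h):
    u, v = field(x, y)
    return x + h * u, y + h * v

def rk4_step(x, y, h):
    k1 = field(x, y)
    k2 = field(x + h/2 * k1[0], y + h/2 * k1[1])
    k3 = field(x + h/2 * k2[0], y + h/2 * k2[1])
    k4 = field(x + h * k3[0], y + h * k3[1])
    return (x + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# Euler spirals outward (radius grows); RK4 stays very close to the circle.
x_e, y_e = 1.0, 0.0
x_r, y_r = 1.0, 0.0
for _ in range(1000):
    x_e, y_e = euler_step(x_e, y_e, 0.01)
    x_r, y_r = rk4_step(x_r, y_r, 0.01)

print(math.hypot(x_e, y_e))  # drifts noticeably above 1.0
print(math.hypot(x_r, y_r))  # stays essentially at 1.0
```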

Oh, and here's our particle video, made in Matlab ^__^

ODE Particles from esbie on Vimeo.


Melodyne: I'm not Impressed

When I first saw Melodyne, I had the same reaction that I had towards Guitar Hero: an emotional one...almost a little offended even. DDR doesn't make you a dancer, and Guitar Hero doesn't make you a rockstar (also: being a rockstar doesn't make you a musician).

If you haven't seen Melodyne, you should check it out now. Melodyne decomposes any audio file into distinct voices using Fast Fourier Transform (fft) analysis. You can then use their editor to distort these voices in any way you choose. The most popular transformation? Auto-tune.

Let me explain why I don't like Melodyne.

The Hype: people are hailing this as a new technological breakthrough that will speed us into the new age of amazingness. Let me inform you that fft has been around for decades. People are entranced that Melodyne can pick out individual instruments from the mix, but that's not very mind-blowing, considering each instrument has its own unique timbre and wave structure. Melodyne can't distinguish between two of the same instrument, as we might expect (sorry folks, we haven't taught the computer any counterpoint yet).

The Abuse: people are going to use Melodyne extensively and explicitly for autotune. Listen, if you can't sing on key, there are other ways to fix that... like singing lessons. FFT will produce artifacts, no matter how good the software is. Visit my tumblelog for examples of blatant autotuning.
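For the record, the snapping step at the heart of auto-tune is tiny; it's the resynthesis afterwards that produces the artifacts. A sketch assuming A4 = 440 Hz equal temperament:

```python
import math

# The heart of auto-tune, minus all the hard parts: snap a detected
# frequency to the nearest equal-tempered semitone (A4 = 440 Hz).
def snap_to_semitone(freq_hz: float) -> float:
    semitones = 12 * math.log2(freq_hz / 440.0)   # distance from A4 in semitones
    return 440.0 * 2 ** (round(semitones) / 12)   # round to the nearest note

print(snap_to_semitone(450.0))  # a slightly sharp A4 snaps back to 440.0
```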

What do I think is awesome about Melodyne? The interface gives musicians leverage. Cubase is to Audacity as Melodyne is to Spear. Spear is dead-simple fft decomposition and resynthesis software. Everything you can do in Melodyne you can already do in Spear (Spear is also free, like Audacity). It's the interface that can really make this software powerful.

What else do I like? Since I'm into the world of electronic music, I plan to exploit Melodyne in a different way... by exaggerating its artifacts or transforming audio samples into something completely unrecognizable, not by pretending that I can sing.



I've been hella busy this semester, and finally you get to see one of the reasons why! Above is a composite of just a few of the facial expressions for a character named Diana (played by Cornell's RPTA, Carolyn). These, and many other character pictures will be used in the game that I'm working on for Advanced Game Design called StageIV.

Stage IV is a story about a father who is diagnosed with cervical cancer, and how the family deals with the knowledge of his approaching death. The player chooses which character to play and then interacts through dialogue graphs with the other characters while the story unfolds.

I say graphs because trees grow exponentially in the amount of content needed, and a content-heavy game isn't really feasible in one semester between Ben and me. Although your choices within the dialogue don't directly affect the story's outcome, your words are highly influential in how the other characters feel towards you and in general. Thus the other characters (NPCs, since it's a single-player game) can react in a multitude of ways depending on their mood.

Doesn't sound like the most exciting game you've ever played? It's not supposed to. We're thinking of calling it 'Interactive Fiction' instead, so that the intention is clear. Ben and I are more interested in creating an artistic experience for the player than an action packed game.

Ok so what on earth am I doing for the game? All the photographs you see were taken with my Sony Cybershot. I then went into Photoshop and cropped out every single background so that only the character remains. Then, after some resizing, I also cranked up the contrast and turned down the saturation. The original is on the right:

The reason I crop the photos should be obvious; I want to place these characters in several different locations in the game, not just one. Resizing should be self-explanatory too. I need the transitions between facial expressions to be as smooth as possible, so it feels like you're looking at a person, not a bunch of pictures of a person. Then of course they had to be scaled down to fit onscreen.

But why the contrast and desat? The desaturation is partly a design choice: Ben and I wanted the game to have a certain feel. But additionally, some of the base pictures have different amounts of lighting, and desaturating the picture helps hide that inconsistency (which means less work for SB). And why the contrast? The effects of the contrast are hard to see, but I found it keeps the desaturated image from becoming too blah.

I'm also doing music for the game. Future post in the future :)


When Girls Procrastinate

Games for Girls, a competition hosted by UIUC, just had its deadline the week before last. I'm a part of Cornell's team, and our entry this year is a game named "Daruka".

Let me be the first to tell you that this game is subpar. Let me also tell you that out of our 5 teammates, I was the one who dropped the ball. We had two artists, two programmers, and a musician. Sounds like a dream team, so what happened?

I was busy... really busy. 26 credits of busy. But, like a good programmer, I pulled a few all-nighters to get the game into shape. Of course, by that time, the artists had put together all the necessary artwork for 6 full levels of gameplay. And the two programmers, mistakenly, turned into level designers.

The result? There are 5 levels with great artwork and terrible gameplay mechanics. Not that our gameplay was absolutely riveting to begin with. But it's way more important to have solid gameplay than game content. That's practically rule 1 and we lost sight of it.

I learned something else too. I like to work hard (and play hard). But when I'm spread this thin, the work that I do just isn't as astounding as it could be. I'm sure you'll be glad to know that I'm only enrolled in 11 credits so far for next semester. And don't worry, there's much more awesome stuff on the way. In the meantime, sit back and take a look at the awesome work from our artists. Good stuff.


Electroacoustic Unpredictability

Performing electronic music is hard. Not because electronic music itself is hard, but because it's much more entertaining to watch a guitarist than it is to watch someone using a computer.

The other day a trio came to my music class to perform and talk about how they make electronic music. I thought I'd show you some of the "instruments" they use, just so you can see that I really am knee deep in some crazy stuff.

One of them uses a cello that has a pickup at the bridge (it's not too unusual to amplify a cello, since it's a relatively quiet instrument for its size). He's also clamped a thick strip of metal to the bridge and will sometimes bow the metal just like a string for some very weird resonances.

The bow he uses has rosin not only on the hair, but also on the stick (ew?). Sometimes he'll bow the cello with the stick and not the hair... sometimes he'll scrape the frog of the bow against the cello's frame.

He also, using internal feedback from an open circuit board, plays the mutilated speakers sitting in front of him. In the video, he shares the idea that he wants his music to be very unpredictable, so that not even he knows what he will play. This ideology is in stark contrast to traditional classical musicians (of whom I may be considered one), who practice for endless hours in hopes of making very calculated and reproducible music.

One of the other performers uses all types of percussion instruments hooked up to electronics, and the other plays a small "instrument" containing a streamlined version of linux stuffed into a midi controller box. Electronic music ftw.



I've been really worried about defining who I am as an artist, as an internet presence, as a person. I'm putting that on hold right now because I've been trying to write this post for months. I've been wishing I could add some incredible insight on the direction the music industry is headed, but really I can't. What I can tell you is that as an independent musician I'm excited that it's going the way it is. I'm also excited because this transition into the age of digital music is a complete recreation of how the music business works. Being young and eager, I want to be one of the first to go where music is headed, wherever that is.

For those of you that don't know quite what I'm rambling about, here are some links

New Music Strategies has just started posting "100 Questions" about how music is redefined through the internet. The posts are interesting, informative, and pitched at a basic level, so you don't need to know much about technology or music to understand the concepts he gets across.

Valleywag: What MySpace Music backers don't get: Recorded music is no longer a product, but advertising

Amie Street is a music community site with a unique pricing model: tracks start out free, and the price of a track goes up as its popularity increases.

Rockstar Games' latest version of Grand Theft Auto (number four) has an in-game system that allows gamers to buy tracks they're listening to via Amazon. Named "ZiT" technology, the mechanism is built into the game's mobile phone system - as your character drives around listening to the radio, they can bookmark a song by dialing the number ZIT-555-0100 and they'll get sent a text message back with the artist and track details.
"Where has all this blog's content gone?!" you may ask. I assure you that I'm working on some massive stuff right now, but I haven't had a chance to blog most of it. In the meantime, do you have thoughts on where the music industry is headed?


Operatic Orangutans?

Yesterday I went to the chamber operetta Abyssinia by Stuart Paul Duncan, a doctoral candidate in music composition at Cornell University and an acquaintance of mine. My dear friend Xander Snyder conducted one of the scenes. Although I won't reveal the whole plot to you, here is a section from the synopsis:

"Initially frightened of the cyborg orangutan, the princess quickly becomes friends with him (Scene 6), and they hatch a scheme to break out. Using a wireless computer system built into his body, the orangutan contacts an art collector who wants valuable originals that hang in the palace, previously thought to be reproductions, in exchange for which he will employ a network of double agents to help the princess escape."
But wait, the best is yet to come. The following is from Scene 6:
Orange: Lady, I told ya, I'm robotic (tic-tic-tic) Looksee: I'm flesh and metal--ya got it?
Princess: You're just a monkey. Huh. This makes no sense.
Orange: Are you denying my intelligence?
Princess: No. -Yes... No... -Yes!
Orange: Well, then, I must confess, I doubt that you would pass the Turing Test
That's right, ladies and gentlemen, I have been to an opera that cites the Turing Test. I can now die happy.

The opera does make me think, though: exactly how possible is a cyborg orangutan? Or, on a more serious note, are all operas this hilarious for their native speakers? I've never before seen an opera written in English, and even though the plots of foreign operas are often outrageous, they somehow retain credibility. Maybe English just isn't a very romantic language? Of course, if you'd like to know more of what I thought about the opera, musically or otherwise, just ping me.


Big Buck Bunny and the render farms that give him life

Big Buck Bunny - Official Trailer from Andy Goralczyk on Vimeo.

I couldn't get away without blogging this very cute movie being made by Peach Open Movie. This is a really neat concept, since all the ideas coming out of Peach are licensed under Creative Commons. And it's an open project... in fact for donating 30 euros you can get your name in the credits as a sponsor.

What's cool is their blog hosts a ton of stuff on the process behind animated movies. You can go there to learn about their render farms (above) as well as stages of animation (below).

PLUS, if you listen to the music at the end of their trailer, there's a tuba playing. That makes this project a definite win.


Ithacan Weather in Pixel Format

I've taken special pains to document the freak weather we've been having in Ithaca. The results are as follows (a la Flickr):


SB converts a Live set to a recording, has major issues

Here is the piece I composed for my Computers in Music Performance presentation. It's a little more upbeat than what I've been churning out recently. With the exception of the vocal tracks, all the tracks are MIDI.

(Wet Skin link)

Composed in Ableton Live and Rewired with Reason, this was originally a strictly performance-only piece, and so recording it turned out to be a nightmare. I started by running Live out from Jack into Audacity, but JackPilot didn't see Audacity's inputs. It didn't see Cubase's either. It saw Ardour's, but the copy of Ardour running on the machine had a font bug and all the text in the program came up as square boxes. UGH.

Day 2 I came back with a PD (Pure Data) recording patch made by my teacher. Jack didn't see PD's inputs either. I was about to try a different studio when I ran into Spencer and he offered to help. After messing around unsuccessfully in Jack, he went into Live and recorded from Live's master out onto a track inside of Live (how completely intuitive). Live then has a "render to disk" option to output the WAV file associated with that track.

That almost worked, except that I was Rewired into Reason and the render-to-disk option apparently can't see that connection. I ended up yanking the raw sound file out of the Live project's file space and just using that... which, unlike the render to disk, contained exactly the audio track that I had recorded (huh?). In other words, the entire Live set in WAV format was finally in my grasp. Sorta.

I went ahead and recorded the vocals again in the studio, but real-time buffering at playback made them clip out every now and again (apologies). Still, I think I'm pretty happy with the result. I better be, since I spent as much time trying to record it as I did actually composing.


Slightly Smarter Music

The music I've been writing has been pretty mellow recently, and with good reason. I'm composing for my advanced game design course, and the musical style my team needs is a relaxed one. More on the project details to follow, but for now check out what we're trying to do with the music:

In the game, your character has an emotional state dictated by interactions with the nonplayer characters around you. In order to reflect feelings as naturally as possible, we're trying to avoid an emo bar, emo counter, or displaying any numerical value whatsoever for how you're feeling.
Instead, we are (among other things) changing the music to suit the mood. I'm not going for a drastic mood change from one part of the music to the other... but rather a small change that helps you better identify what's happening in the game.
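As a rough sketch of how that could work under the hood (hypothetical names; this isn't our actual game code), the hidden mood value can be mapped to crossfade weights for three stems of the same theme, so the transition stays subtle instead of jumping between tracks:

```python
def stem_weights(mood):
    """Map a hidden mood value in [-1, 1] to volume weights for three
    recordings of the same theme: (sad, average, happy).

    mood = -1 plays only the sad stem, 0 only the average stem, and
    +1 only the happy stem; values in between crossfade smoothly, so
    the player never sees (or hears) a hard switch.
    """
    mood = max(-1.0, min(1.0, mood))   # clamp out-of-range values
    sad = max(0.0, -mood)              # fades in as mood goes negative
    happy = max(0.0, mood)             # fades in as mood goes positive
    average = 1.0 - abs(mood)          # dominant near neutral
    return sad, average, happy
```

Because all three stems share a theme and tempo, nudging the weights a little at a time gives exactly the kind of small, barely-conscious shift described above.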

That said, here are three variations on the same theme, each with varying levels of happiness/sadness. Let me know what you think: Are the changes between variations very obvious? almost unnoticeable? Does the happy one actually sound more happy than the other two?

(stageIV_Average link)
(stageIV_Sad link)
(stageIV_Happy link)


Applied Ambisonics

That's the incredibly obscure title I put on my resume for the research I'm a part of this semester. While my employers may not know what that means, the breakdown is as follows:

Ambisonics: a way of encoding and decoding audio information using vector math. While the current system of "surround sound" uses simple pan pots to give the illusion of position inside a sound field, ambisonics makes use of the way sound waves interact (reflection, refraction, interference) within an acoustic space. As a result, Ambisonic B-format playback needs to be calibrated for each individual speaker setup. Calibration includes identifying where the speakers are placed in the room: their angle and distance from one another.
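For the curious, the first-order B-format encoding itself is just a handful of trigonometric weights. Here's a sketch in Python of the textbook equations (illustrative only, not our research code):

```python
import math

def encode_b_format(sample, azimuth, elevation=0.0):
    """Encode one mono sample into first-order Ambisonic B-format.

    Angles are in radians; azimuth is measured counter-clockwise
    from straight ahead. Returns the (W, X, Y, Z) channel values.
    """
    w = sample / math.sqrt(2.0)                           # omnidirectional pressure
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back figure-eight
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right figure-eight
    z = sample * math.sin(elevation)                      # up-down figure-eight
    return w, x, y, z
```

A decoder then recombines W, X, Y, and Z with weights derived from each speaker's measured position, which is exactly why the calibration step above matters.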

Applied: we created an interactive installation where users, by turning their heads, could focus on different audio point sources in a virtual sound field. The input to our program is a magnetometer on top of the person's head that tracks which direction they're facing. Stephen demonstrates:

Preparing for BOOM included a lot of long nights, one of which I'd like to share with you now:

8:30pm - I show up at the lab. Spencer is already working. Stephen shows up 15 minutes later.

9:23pm - Spencer shows us Unpimp ze Auto

9:31pm - Spencer shows us the Robot Chicken Gummi Bear. We are clearly getting a lot of work done.

11:15pm - Graeme and Kat join us

12:08am - Graeme points out that he's been videotaped while teaching CS211, and the video made it to the front page of cornell.edu. We laugh for 10 minutes about how functions return "birthday presents".

12:40am - Spencer finds a semi-functional retro game device in the lab. We take 7 minutes off work to play the math game "less than or greater than". In the process, the old batteries leak onto Spencer's hand, causing a battery-burn casualty.

2:30am - most of the team calls it quits for the night, except me and Spencer. We're convinced we can solve the problem before we go to bed.

3:30am - we contemplate, out loud, the idea of going to bed.

7:00am - bed


Procedural Video Games

Things have been quiet on my end as I ramp up for next week. At the moment I'm reaching you from the Pacific time zone: I've been in San Jose for the past couple of days attending Google's Workshop for Women (more posts to follow on that). Among other things, I've been asked to give a presentation at Cornell's annual technology conference, Bits On Our Mind, or BOOM. I'll be giving a talk on Procedural Generation and how it relates specifically to video games and the new video game Spore, slated to be released in the US September 2008.

Since the talk will be given to ~100 high school students, I've left out all the intricate algorithms in favor of video game trailers. Here are some of the videos I'll be using:

Procedural Generation is a way of creating objects in video games using algorithms. This is especially useful if you want to create lots and lots of objects. Objects such as buildings or trees might take forever for humans to make one by one, but an algorithm can create thousands of them with minimal effort. The beta game .kkrieger is a quintessential example of procedural generation in action. Since the levels, textures, and enemies are all created procedurally, the game in total is a mere 96 KB. That means it could fit on a floppy disk. Here's a download link to try the game out if you'd like to see it in action.
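A toy Python sketch of the core idea (not taken from any real game): seed a random generator, and an entire city's worth of buildings collapses down to a single number you can regenerate on demand:

```python
import random

def generate_buildings(seed, count):
    """Deterministically generate `count` simple buildings from one seed.

    Each building here is just a tuple (width, depth, height, num_windows);
    a real game would feed these parameters into a mesh generator. Because
    the generator is seeded, the same seed always reproduces the same city,
    so only the seed (a few bytes) ever needs to be stored on disk.
    """
    rng = random.Random(seed)      # private RNG so results are reproducible
    buildings = []
    for _ in range(count):
        width = rng.randint(8, 30)
        depth = rng.randint(8, 30)
        height = rng.randint(10, 120)
        windows = (height // 4) * (width // 3)  # rough facade grid
        buildings.append((width, depth, height, windows))
    return buildings
```

That's the trick behind .kkrieger's tiny footprint: ship the algorithms and the seeds, and let the player's machine build the content at load time.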


Sunrise On Saturn

This semester I'm the proud owner of 23 academic credits, 3 of which belong to a class called "Computers in Music Performance". The Cornell Electroacoustic Music Center website comes complete with a blog, so I'll be making some music-related posts there as well (perhaps with some overlap here). Fresh off the composing press, here is the result of my first assignment in the class.

I made the entire piece in Reason with a single recording of a marimba roll and a piano MIDI plugin. If you listen carefully there's actually a third voice, made with the marimba roll transposed 2 octaves, an LFO at a high frequency, and a reverb patch.

(Sunrise On Saturn Link)


The Bösendorfer Piano

This monster piano was just purchased by Professor Bailey for use by the Music Department. It's a Bösendorfer Imperial Grand Piano made in Austria, complete with 97 keys for a total of 8 octaves (a normal Steinway Grand Piano has only 88 keys). The extra bass keys are in fact hidden under a wooden flap (which seems a little silly), but they produce remarkably the clearest bass notes I've ever heard on a piano.

A quick history lesson on pianos
Early harps were quickly transformed into the harpsichord, its mechanical cousin. To play it, the musician presses a key, which is part of a mechanism with a quill attached to the end. The quill then plucks a string inside the harpsichord. By increasing the tension of the strings and changing from quills to tiny hammers, the fortepiano (literally "the loudsoft", because you could now play either quietly or loudly) was born. Eventually the name was shortened to piano, and it became the instrument we know today.

Here's a closer look at the four thick bass strings you'll probably never see on another piano. You can also see here and in the photo above that this set of strings crosses diagonally over a set of strings just underneath them. This is because the bass notes need much longer and thicker strings, and placing them diagonally across the piano body saves space. The picture below shows the metal casing for the piano's soundboard. The strings of the piano create the notes, but it's the soundboard that acts as a resonator to project that sound. That's why there are those funky holes built into the soundboard's casing: to help the sound travel (that's also why you lift the lid of the piano).
I even have a short audio clip of Spencer playing this Bösendorfer. The low notes are just incredible.

Piano Clip

NightLight Sequence

I've been quite taken by Nikon's "solarize" filter. So much so, in fact, that my first ever sequence as a photographer takes the solarize filter and runs with it. These combination photographs/digital art include "Not what I expected" and "Crystal Blue", whose source files were taken in Costa Rica. The newest additions to the sequence were taken on plane rides to and from Seattle, Washington. Visit my Flickr account to see the rest of them.

How I Got Here, Musically Speaking

I did some digging around this week and found a bunch of my older compositions, each with its own story. Here they are in chronological order.
PI, the oldest piece, was created with the intent of adding 3 or 4 later movements. Although orchestrated here for two pianos, it's entirely possible to play it on 2 or 3 marimbas instead.

PI link

The next, JM, is actually an orchestration of the opening theme from Final Fantasy: Crystal Chronicles. The piece was made as an accompaniment for me as part of my talent for the New Mexico Junior Miss pageant. Although I regrettably don't have a recording of my singing along, you can look up the lyrics and sing them yourself! JM contains a 4-measure intro, the first verse, an instrumental, and the second-to-last chorus. Here's a snippet of the original piece "Kaze No Ne" (Sound of the Wind) for comparison.

JM link

DOS, from my senior year of high school, was made at the request of a friend and turned out to be one of my best techno tracks so far.

DOS link

The most recent of this collection, ReasonStudy, was made last semester. It was my first attempt at using Reason during my lessons (and it shows).

ReasonStudy link

Hopefully my work is showing some sort of growth, although what kind I can't quite figure out.


Video Games are Being Taxed?

I was able to forgive New Mexico when they changed my area code. But this is a little ridiculous: the Sierra Club of New Mexico is proposing a 1% tax on video games in order to "fund programs aimed at giving school kids an outdoors education" (learn more on The Huffington Post). What's really scary is what kind of precedent this might set. Videogamevoters.org has lots of other (slightly biased) information, including a ridiculously exaggerated video, and a nifty protest wall.

Bits On Our Mind (BOOM) is Cornell University's "annual research conference that showcases student efforts and creativity in digital technology and applications". This year I'll be presenting the technology used for my independent study last semester, my team's entry for the Games 4 Girls competition hosted by UIUC, and the raytracer my team used for last semester's graphics course. I'll also be giving a talk to high school students from all across upstate New York on interactive media and its place in the world today.
Not to mention I'm a part of Cornell's Game Curriculum Design Team. We're amassing design curriculum for an after-school program for middle school and high school students. The program, although focused on game design, has the express purpose of encouraging young students to become involved in technology. I feel like choking the video game industry with taxes will only hurt all the progress we (as a collective digital community) have made so far.


Costa Rica: the myth, the madness, the pictures

So what exactly was Cornell University's Wind Ensemble doing in Costa Rica? Here's a quick synopsis:

  • Conducted musical workshops in three different schools: San Isidro, Poas, and Matapalo
  • Performed 8 different concerts. We played in a post office, a parade, and the Canadian Embassy, among other places
  • Donated over 60 musical instruments to music schools across the country
  • Recreation (where most of the pictures came from) included hiking in the rainforest, salsa dancing, a jazz club, the beach, the Santa Cruz downtown marketplace, a bullfight, a coffee farm, the Poas volcano, and waterfall gardens
As a result, I have a whole host of new pictures, shot with my new camera. Among these pictures is the photo I created my new banner from. I also have quite a number of videos, which I hope to put together as soon as I get the editing software set up.

These, and many other pictures, can be found on my Flickr account. Enjoy!


Back from Break with a Vocoder Vengeance

I just got back from tour in Costa Rica, meaning I have over 500 pictures to sort through as well as over a dozen videos. All were taken with my new Sony Cybershot DSC-W200, a high end point-and-shoot camera. Unfortunately, I'll be flying out to Seattle this week for an interview with Microsoft and so I won't be able to put the pictures up until I get back.

Let me instead show you a short side project from last semester's independent study, made using Reason's vocoder. Here Reason combines the formant of my voice and the sound of synth vocals + strings while I recite Robert Frost's Fire and Ice. I'm particularly happy with the way the chord progression complements the structure of the poem. (apologies that the volume is so low)

frost on imeem


Computer Graphics: Part Two

The sun shoots out a light ray that hits a blue ball. The blue ball reflects the blue ray and absorbs all the other colors. That blue light ray enters your eye and that is how you see. Computer Graphics follows this same method, only backwards, and the method is called ray tracing.
First, a ray is generated for each pixel. This ray shoots out into the virtual scene being rendered. Then you test whether or not the ray hits an object. If it does, the pixel color is the color of the object. If not, the pixel color is black.
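For a sphere, the classic ray-tracing primitive, that hit test comes down to solving a quadratic. A simplified Python sketch (not our actual course code; it also ignores the case where the ray starts inside the sphere):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit,
    or None if the ray misses. `direction` must be normalized.

    Solves |o + t*d - c|^2 = r^2, which is a quadratic in t.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    # Vector from the sphere center to the ray origin
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c          # a == 1 for a normalized direction
    if disc < 0.0:
        return None                 # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None   # only hits in front of the ray count
```

If the returned t is None, the pixel gets the background color; otherwise the hit point o + t*d is what gets shaded next.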

When the pixel's ray intersects a ball, you can use a ray from the light source (vL), the surface normal of the ball (n), and vector math to determine how that pixel ought to be shaded. There are several different shading methods, including Lambertian and Phong (Lambertian is shown to the left).
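The Lambertian part is remarkably small: clamp the dot product of n and vL at zero and scale the surface color by it. A sketch (again, not our actual shader code):

```python
def lambert_shade(normal, to_light, surface_color, light_intensity=1.0):
    """Lambertian (diffuse) shading: brightness is proportional to the
    cosine of the angle between the surface normal n and the direction
    to the light vL. Both vectors are assumed normalized.
    """
    nx, ny, nz = normal
    lx, ly, lz = to_light
    # Clamp at zero so surfaces facing away from the light stay dark
    cos_theta = max(0.0, nx * lx + ny * ly + nz * lz)
    r, g, b = surface_color
    return (r * cos_theta * light_intensity,
            g * cos_theta * light_intensity,
            b * cos_theta * light_intensity)
```

Phong shading adds a specular highlight term on top of this, but the diffuse core is the same dot product.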

What the objects still lack are shadows. After the pixel's ray hits an object, you check whether the ray vL is blocked by another object. If it is, this point on the ball is being cast into shadow, and therefore must be black.
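A sketch of that shadow test for a scene of spheres (simplified; the `eps` offset is the usual trick to keep the shadow ray from immediately re-hitting the surface it starts on):

```python
import math

def in_shadow(point, light_pos, spheres, eps=1e-4):
    """Cast a shadow ray from a surface point toward the light. If any
    sphere, given as ((cx, cy, cz), radius), blocks the ray before it
    reaches the light, the point is in shadow.
    """
    px, py, pz = point
    lx, ly, lz = light_pos
    # Normalized direction from the surface point to the light
    dx, dy, dz = lx - px, ly - py, lz - pz
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / dist, dy / dist, dz / dist
    # Nudge the origin off the surface to avoid self-intersection
    ox, oy, oz = px + dx * eps, py + dy * eps, pz + dz * eps
    for (cx, cy, cz), r in spheres:
        vx, vy, vz = ox - cx, oy - cy, oz - cz
        b = 2.0 * (dx * vx + dy * vy + dz * vz)
        c = vx * vx + vy * vy + vz * vz - r * r
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue                          # this sphere is missed
        t = (-b - math.sqrt(disc)) / 2.0
        if 0.0 < t < dist:                    # occluder between point and light
            return True
    return False
```

Note the t < dist check: an object on the far side of the light doesn't cast a shadow on this point.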

After a while spheres can become pretty dull. What else can you render with ray tracing? Anything that can be represented mathematically (remember, you have to mathematically calculate where a line and your object intersect). Unfortunately even simple shapes like a donut become difficult to model mathematically. Instead, complicated shapes are modeled using triangles.

Modeling glass objects provides another interesting challenge. When you look at glass, you're seeing not only the colors through the glass, but also the colors reflected by the glass. It makes sense, then, that in ray tracing, when a ray hits glass, it splits into two rays: one that reflects off the surface and one that refracts into the surface. If there are several glass objects in the scene, you can imagine the recursive nightmare that takes place. This problem is usually circumvented by specifying a maximum recursion depth.
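To see both the blow-up and the fix, here's a toy sketch that just counts rays instead of shading them (MAX_DEPTH = 5 is an arbitrary choice here):

```python
MAX_DEPTH = 5

def trace_glass(depth=0):
    """Illustrative skeleton: every time a ray hits glass it splits into
    a reflected ray and a refracted ray, so the rays form a binary tree.
    Without a depth cap, two glass objects facing each other would
    recurse forever; with the cap, at most 2**(MAX_DEPTH + 1) - 1 rays
    are traced per primary ray.

    Returns the number of rays traced, standing in for the real shading.
    """
    if depth >= MAX_DEPTH:
        return 1                        # give up: use a background/ambient color
    reflected = trace_glass(depth + 1)  # ray bouncing off the surface
    refracted = trace_glass(depth + 1)  # ray bending into the surface
    return 1 + reflected + refracted
```

With MAX_DEPTH = 5 that's 63 rays for a single pixel in the worst case, which is why heavy use of glass makes renders so much slower.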

Last semester my partner and I had to make a ray tracer for our course in computer science (thanks to Professor James for his slides). Below are some renders: