
Showing posts with the label audio

lab 111 - wavloop

The WAV file format contains audio sample data and, optionally, metadata describing the offsets of sample loops and cue points. Sampler software uses the loop offsets to generate a continuous sound, and the cue points mark the point in the sample data where the sound fades away after the note has been released. A WAV file "smpl" chunk identifies the offsets of the start and end of the loop in the sound data. Using wavplay.b as a starting point, I tried to loop a sampled sound. My sample data comes from the virtual organ software GrandOrgue and the sample sets created for it; in this case I'm using the Burea Funeral Chapel sample set. My first test was simply to treat the sample as-is and loop the sound using the given offsets. This did not give good results: there was a notable noise as the data from the end of the sample joined the beginning. Nearing the end of writing this post, I realized that the mistake I made was treating the offsets as counts of...
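For reference, the "smpl" chunk walk can be sketched in Python (the lab itself is Limbo, and the helper name here is mine). Per the published smpl chunk layout, the start and end fields are sample-frame offsets into the data, not byte offsets:

```python
import struct

def find_smpl_loop(wav_bytes):
    """Scan the RIFF chunks of a WAV file for a 'smpl' chunk and return
    (loop_start, loop_end) as sample-frame offsets, or None if absent."""
    assert wav_bytes[:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
    pos = 12
    while pos + 8 <= len(wav_bytes):
        cid, size = struct.unpack_from("<4sI", wav_bytes, pos)
        body = wav_bytes[pos + 8 : pos + 8 + size]
        if cid == b"smpl":
            # 36 bytes of fixed fields precede the loop table; the loop
            # count sits at offset 28.  Each loop record is 24 bytes:
            # cue id, type, start, end, fraction, play count.
            nloops = struct.unpack_from("<I", body, 28)[0]
            if nloops:
                _, _, start, end, _, _ = struct.unpack_from("<6I", body, 36)
                return start, end
        pos += 8 + size + (size & 1)   # chunks are word-aligned
    return None
```

To seek into the data chunk, a frame offset still has to be multiplied by channels times bytes-per-sample.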

lab 108 - wavplay

wavplay plays a WAV file. It is merely a combination of the wav2iaf and auplay commands already in Inferno. I have no audio in IAF format, but I am putting together hundreds of GBs of WAV files as I rip my CD collection.

% bind '#A' /dev
% wavplay track.wav

FILES wavplay.b

lab 107 - midiplay

NAME lab 107 - midiplay NOTES Midiplay plays back MIDI files. It uses the synthesizer I described in lab 62 and the MIDI module from lab 73. The command takes only one argument, the path to the MIDI file. I've included one in the lab as a demonstration. Bind the audio device before using it.

% bind -a '#A' /dev
% midiplay invent15.mid

The synthesizer has 16-note polyphony. It uses three oscillators: one at the pitch, one at half the pitch, and one at double the pitch. There is also a filter, two delays, and a vibrato. The sample rate is 8000Hz and there is one mono channel (MIDI channel events are ignored). It performs well enough to work on my laptop without the JIT turned on. All the synthesizer parameters can only be tweaked from inside the code at the moment. FILES lab - 107
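The three-oscillator layout can be sketched in Python (the synth itself is Limbo; the amplitude weights below are illustrative choices, not the lab's actual parameters):

```python
import math

RATE = 8000  # the lab's sample rate

def voice(freq, nsamples, amps=(1.0, 0.5, 0.25)):
    """Mix three sine oscillators at the pitch, half the pitch, and
    double the pitch, mirroring the oscillator layout described in the
    post.  Returns nsamples floats normalised into [-1, 1]."""
    out = []
    norm = sum(amps)
    for n in range(nsamples):
        t = n / RATE
        s = (amps[0] * math.sin(2 * math.pi * freq * t)
             + amps[1] * math.sin(2 * math.pi * (freq / 2) * t)
             + amps[2] * math.sin(2 * math.pi * (freq * 2) * t))
        out.append(s / norm)
    return out
```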

lab 82, again

NAME lab 82, again - txt2iaf NOTES After thinking about the matter a little, I finally wrote the txt2iaf app. It allows any text file with one or two columns of data to be converted to an IAF file that can be played using auplay(1). Now, my sound analysis goes something like this:
1. Record something using a microphone. For now, I only record using some app in Windows (Sound Recorder) or Linux (arecord(1)), because Inferno has some issues when recording from /dev/audio: an annoying tick every so often that wrecks my sound analysis intentions. Maybe I can help fix this problem, which is probably related to buffering.
2. Convert the resulting WAV file to an IAF file, using wav2iaf(1).
3. Get the data from the IAF file into text format, using iaf2txt(?).
4. Read the data from the text file using any data analysis package.
5. Do whatever you want with the data.
6. If you wish or need to, output the data to a text file.
7. Using txt2iaf(?), create an iaf file...
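The text-to-samples step can be sketched in Python (the column handling and 16-bit scaling below are my assumptions, not txt2iaf's actual code, and no IAF header is written):

```python
import struct

def txt_to_pcm(text):
    """Turn one- or two-column text (floats in [-1, 1]) into interleaved
    16-bit little-endian PCM frames.  Returns (ncols, pcm_bytes), where
    ncols is 1 for mono input or 2 for stereo."""
    frames = []
    ncols = None
    for line in text.splitlines():
        cols = line.split()
        if not cols:
            continue
        if ncols is None:
            ncols = len(cols)
        # clamp to [-1, 1], then scale to the signed 16-bit range
        frames.extend(int(max(-1.0, min(1.0, float(c))) * 32767) for c in cols)
    return ncols, struct.pack("<%dh" % len(frames), *frames)
```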

A couple of IAF utilities

NAME lab 82 - iaf2txt and iaf2wav NOTES Currently, a couple of friends and I are playing a little with human voices. For this purpose I wrote two applications to convert IAF files to plain text and to the WAV audio format. Both apps support standard IAF files as described in audio(6), except for the encoding: only PCM is supported for now. Why in the world would one need a text file converted from an IAF? Well, text files are easier to handle with data analysis software like the R programming language. I know MATLAB supports working with WAV files directly, but there are two main reasons I needed an IAF-to-text converter:
1. I do not use MATLAB.
2. When I wrote the iaf2txt app there was no IAF-to-WAV converter.
Maybe R can handle WAV files directly, but I do not know. I am not really sure whether I need a text-to-IAF converter, but I am thinking about it. So far I do not need one. FILES lab 82 code

lab 77 - the unexpected markov

NAME lab 77 - the unexpected markov NOTES This is another unexpected use (again) of the markov program from `The Practice of Programming', section 3.9 [1]. I wrote an implementation of markov in Limbo and had fun feeding it all kinds of texts (books, articles, interviews, novels ...). But recently I've also been playing with caerwyn's synth [2], which is included in acme-sac, and thought: why not feed markov music? Find out the answer for yourselves; I'll just provide some small hints in the accompanying guide file. Enjoy! [1] http://cm.bell-labs.com/cm/cs/tpop [2] synth from acme-sac under appl/synth FILES inferno-lab bachm.mp3 (the original bach file bach.mp3 )
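For readers without TPOP at hand, here is a minimal order-2 Markov generator in Python, a sketch of the idea rather than the Limbo implementation (the same table-of-prefixes scheme works whether the tokens are words or note events):

```python
import random
from collections import defaultdict

def markov(words, nwords, order=2, seed=0):
    """Generate up to nwords tokens: build a table mapping each
    order-length prefix to the tokens that follow it in the input, then
    walk the table from the input's opening prefix."""
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    rng = random.Random(seed)
    state = tuple(words[:order])
    out = list(state)
    for _ in range(nwords - order):
        followers = table.get(state)
        if not followers:          # prefix never continued in the input
            break
        nxt = rng.choice(followers)
        out.append(nxt)
        state = state[1:] + (nxt,)
    return out
```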

lab 75 - scope & experiments

NAME lab75 - scope & experiments DESCRIPTION Since I did lab 67 I've been trying to improve and fix the T3 port and experiment with it. So this post has a small report on the T3 port status, and some experiments under Inferno; note that they're not dependent on the handheld. T3 PORT Some of the things I've fixed are: t3 touchscreen: performed corrections to make it work as it should. This was important since it has a direct impact on using Acme, and the rest of the system. blank the lcd: added code to turn off the LCD while playing music, so the battery lasts longer. To do so I added a blank command to the devapm.c written by Alexander Syschev, and wrote a blight script that manages the LCD backlight, to control this from Acme. Since these changes apply to the T3 port, they can be found under lab 67 of the inferno-lab. While I haven't been able to fix the following segfaults, I've been able to obtain dumps and open them with gdb. So I've been able to find ...

lab 73 - MIDI

NAME lab 73 - MIDI NOTES I've written a module to read MIDI format files. I needed this because I wanted more input for my software synthesizer. I was getting bored listening to the same old track, and I haven't yet come up with any computer-generated music. This seemed like a quick and easy way to get a large amount of music to listen to. The code reads the whole MIDI file into memory, using an ADT for the Header that contains an array of Tracks, where each Track has an array of Events. I also wrote a midi2skini command that interleaves the multiple MIDI tracks into a single stream of SKINI messages for the synthesizer (see earlier labs). It sorts and orders the events, converting tick deltas to real time. I've been trying this out on some Bach MIDI files. It's been working quite nicely with the organ-like sounds produced by the Inferno synth.

% echo 1 > /dev/jit
% midi2skini bwv988-aria.mid | sequencer ...

You need JIT enabled when using the sequence...
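Two of the pieces involved can be sketched in Python (the module itself is Limbo): reading the MThd header that precedes the tracks, and converting a tick delta to seconds. The 120 bpm default below is an assumption; real files change tempo via meta events.

```python
import struct

def midi_header(data):
    """Read the MThd chunk of a Standard MIDI File: format, number of
    tracks, and the ticks-per-quarter-note division.  All fields are
    big-endian, per the SMF spec."""
    cid, length = struct.unpack_from(">4sI", data, 0)
    assert cid == b"MThd" and length == 6
    fmt, ntracks, division = struct.unpack_from(">3H", data, 8)
    return fmt, ntracks, division

def tick_to_seconds(ticks, division, tempo_us=500000):
    """The delta-to-realtime step midi2skini performs: tempo_us is
    microseconds per quarter note (500000 us = 120 bpm)."""
    return ticks * tempo_us / (division * 1_000_000)
```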

lab 62 - software synth csp style

NAME lab 62 - software synth CSP style NOTES This is the next iteration of my software synth code for Inferno. Of particular note is the embrace of CSP style as an implementation technique. This was true of the code in lab 60, but this time much more of it actually works, and the basic framework, the interfaces, are in place for me to extend it further. I think it makes a nice showcase of CSP-style programming. I thought my lexis database did too (lab 34), but this code is probably easier and more fun to play with. I want to post this now before I move on to the next phase, which may add a lot more complexity but won't illustrate the CSP style of programming in this application any better. You are encouraged to edit this code to create your own synthesizer, and use that as a way into studying CSP style. This synth comes with a basic GUI. Here is a screen shot. The GUI is bare-bones, designed just so that it is very easy to add new knobs for the control of filters. (One of the thin...

lab 60 - sequencer using channels

NAME lab 60 - sequencer using channels NOTES In the comments to lab 53, Rog suggested using channels to pass buffers between processes in place of the one-sample-at-a-time technique I was using in my earlier DSP attempts. This lab is an attempt at Rog's suggestion. It's one Limbo file that acts as a simple sequencer and generates its own voices. Each instrument should have an interface:

f: fn(c: chan of (array of real, chan of array of real), ctl: chan of (int, real));

This function is spawned; control messages are sent on the ctl channel, and the request for samples and the response channel are sent down c. It really requires the JIT to be turned on to sound acceptable. Here's a sample setup:

% bind -a '#A' /dev
% echo 1 > /dev/jit
% sequencer4 /dev/audio

Uncomment sequencer.b4:60 to add a little echo to the music. I think I'm getting closer to Rog's ideas, but I'm not sure I'm still exploiting it to the fu...
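The same buffer-passing shape can be mimicked in Python, with queues standing in for Limbo channels (a sketch only: the control code, frequency default, and sample rate below are made up, not the lab's):

```python
import math
import queue
import threading

def sine_voice(c, ctl):
    """A voice process in the shape of the lab's instrument interface:
    requests arrive on c as (buffer, reply_channel) pairs; the voice
    fills the buffer with samples and sends it back on the reply
    channel.  ctl carries (code, value) control messages."""
    freq, phase, rate = 440.0, 0.0, 8000
    while True:
        item = c.get()
        if item is None:                 # shutdown sentinel
            return
        buf, reply = item
        while not ctl.empty():           # drain pending control messages
            code, val = ctl.get()
            if code == 1:                # illustrative "set frequency" code
                freq = val
        for i in range(len(buf)):
            buf[i] = math.sin(phase)
            phase += 2 * math.pi * freq / rate
        reply.put(buf)
```

A caller spawns the voice, then sends a buffer and a reply queue down c for each block of samples it needs, which is the batched alternative to ticking one sample at a time.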

lab 53 - granulator

NAME lab 53 - granulator NOTES I want to revisit the DSP code I worked on in earlier labs (3, 4, 5, 6, 7, 10, 12, and 13). The STK C++ library it's based upon has been updated since then. The latest versions include an implementation of a granulator, which is something I've wanted to do for a while. Granulation samples the music at tiny intervals, then plays the grains back as many overlapping voices. With granulation the music can be dramatically slowed down but still retain its pitch. A good example of this is 9 Beet Stretch, a granulation of Beethoven's 9th symphony played over 24 hours. I often listen to it at work. It blocks out surrounding chit-chat, and it isn't as distracting to me as most music. I can't listen to pop or classical music because I get tired of the repetition, or else the music might interrupt my train of thought. With granulated music I can set up a playlist of three hours of barely interesting noise and get into "the zone...
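The overlap-add idea behind granular time-stretching can be sketched in Python (grain size, hops, and the triangular window below are illustrative choices of mine, not the STK granulator's parameters):

```python
def granulate(samples, grain=256, stretch=4, overlap=2):
    """Time-stretch by granulation: crawl through the input in small
    hops, but lay the grains down in the output at a fixed spacing,
    cross-fading them with a triangular window.  The stretch factor is
    roughly hop_out / hop_in; pitch is preserved because each grain is
    played back at the original rate."""
    hop_out = grain // overlap           # output grains overlap at this spacing
    hop_in = max(1, hop_out // stretch)  # advance through the input more slowly
    starts = range(0, len(samples) - grain + 1, hop_in)
    out = [0.0] * (hop_out * (len(starts) - 1) + grain)
    window = [1.0 - abs(2 * i / (grain - 1) - 1.0) for i in range(grain)]
    for k, pos_in in enumerate(starts):
        pos_out = k * hop_out
        for i in range(grain):
            out[pos_out + i] += samples[pos_in + i] * window[i]
    return out
```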

lab 13 - flute

NAME lab 13 - implement the flute instrument from STK. DESCRIPTION I implemented more of the STK library, but this time as a straightforward translation to a Limbo module. Many of the protected classes and filters are in dsp.b as ADTs. They all share a similar interface that includes the functions mk, for building the object, and tick, for processing the next sample. The instruments are generally larger to implement but follow the same interface. They can be plugged into a signal module and then read and controlled from within signalfs. I've included a few simple modules that can be used to start a new instrument. I also tried to implement the more complicated Flute. It's close, but it still doesn't sound right; it all needs a lot more debugging. To test the flute:

% signalfs -a /mnt/dsp
% echo add flute.dis flute > /mnt/dsp/ctl
% sequencer /mnt/dsp/flute /dev/audio

Sequencer...
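The mk/tick convention looks roughly like this when transliterated to Python, with a one-pole filter as a stand-in for one of the dsp.b ADTs (the coefficient arithmetic follows the standard one-pole recurrence; the default pole value is illustrative):

```python
class OnePole:
    """The mk/tick pattern from the lab's dsp.b ADTs: mk builds the
    object and initialises its state, tick processes one sample and
    returns the next output."""

    @classmethod
    def mk(cls, pole=0.9):
        f = cls()
        f.b0 = 1.0 - pole    # feed-forward gain
        f.a1 = -pole         # feedback coefficient
        f.last = 0.0         # previous output sample
        return f

    def tick(self, sample):
        self.last = self.b0 * sample - self.a1 * self.last
        return self.last
```

Every source, filter, and effect sharing this two-function shape is what lets instruments be chained and swapped behind signalfs without special cases.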

lab 13 - sound library

No code to post tonight because it's unfinished. I'm converting all of STK to Limbo, but not directly into signalfs. I'm creating a module that will contain all the sound sources, filters, and effects in the STK, with one ADT for each sound. This can then be used by signalfs to serve a file, which can be a combination of any of the ADTs, or by any other Limbo app. Rog has suggested an alternative application using the shell alphabet. I will try this once the library is written. Rog pointed out how inefficient signalfs is in its current form. I agree; the performance is terrible, which makes it completely unusable for realtime sound support. This re-implementation will improve performance. But any hardcore DSP programmer is only likely to snicker at our attempt to implement DSP in Limbo. At the end of the day I'm doing this to create a framework that makes it easy to experiment with DSP, not to create a sound system that will outperform all others. That is the tradeoff ...

lab 12 - oscilloscope

NAME lab 12 - implement an oscilloscope for signals from signalfs. DESCRIPTION I implemented an oscilloscope called scope to view the signals produced by signalfs, or other PCM data such as an IAF file stripped of its header.

% scope /dev/null

It writes its input to its output, but it doesn't sound very good if directed to /dev/audio: it writes small blocks of samples with small delays between writes, making the sound very choppy. Scope tries to draw 25 frames a second, getting a tick from a timer; it reads 1/25th of a second of samples from the input, then draws them on a Tk panel. This might be useful when recording input from a microphone:

% scope out.raw

It takes as parameters the sample rate and the number of channels, stereo or mono. CONCLUSION Not being able to listen and see the waveform at the same time makes it less useful than I hoped. H...

lab 7 - sequencer

NAME lab 7 - sequencer; create a simple sequencer that can play back a subset of SKINI messages. SETUP Inferno 4th Edition release 20040830. DESCRIPTION 2004/0928 20:37 SKINI is the sequencing language from the STK. It is a readable form of MIDI and was designed to be "extensible and hackable", all of which makes it ideally suited to this application and to Inferno. Here is a brief example:

// Measure number 1 =0
NoteOn 0.416667 2 72 64
NoteOff 0.208333 2 72 64
NoteOn 0 2 71 64
NoteOff 0.208333 2 71 64
NoteOn 0 2 72 64
NoteOff 0.416667 2 72 64
NoteOn 0 2 67 64
NoteOff 0.416667 2 67 64

There is one command per line. The line begins with the command name, followed by parameters separated by spaces or tabs. The second parameter is always the time delta. For the NoteOn command, the third argument is the channel (or...
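Messages of this shape take only a few lines to parse. A Python sketch (the sequencer itself is Limbo; this covers only NoteOn/NoteOff and accumulates the per-line time deltas into absolute times):

```python
def parse_skini(text):
    """Parse SKINI text: one command per line, fields separated by
    whitespace, the second field always the time delta, '//' starting a
    comment line.  Returns (time, command, channel, note, velocity)
    tuples for NoteOn/NoteOff; other commands are skipped."""
    events = []
    t = 0.0
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("//"):
            continue
        fields = line.split()
        cmd, delta = fields[0], float(fields[1])
        t += delta                      # deltas accumulate into absolute time
        if cmd in ("NoteOn", "NoteOff"):
            channel, note, velocity = (int(float(f)) for f in fields[2:5])
            events.append((t, cmd, channel, note, velocity))
    return events
```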