User:WillWare/Raspberry Pi hacking
I've picked up a few Raspberry Pi boards on eBay and at Maker Faire NYC. There is a lot of RPi info online including a wiki. The board is not entirely open source but still it's a Linux machine you can solder wires to. So there are lots of cool things to think about doing with it.
- Give it a Bluetooth adapter and control it from a cellphone.
- With USB connectivity, it can be a PC-based oscilloscope or a software-defined radio.
- It could be an electronic music instrument, which is my near-term plan.
Sound generation
For MIDI and wavetable music generation on Linux, here is some background info.
- http://tldp.org/HOWTO/MIDI-HOWTO-10.html
- http://www.oreilly.de/catalog/multilinux/excerpt/ch14-08.htm - Programming wavetable devices
- http://www.solfege.org/sound-setup/
- http://www.alsa-project.org/~frank/alsa-sequencer/node1.html
- https://ccrma.stanford.edu/~craig/articles/linuxmidi/ (and in particular https://ccrma.stanford.edu/~craig/articles/linuxmidi/output/playnotes.c)
Sound generation will be done by one of these two things:
- http://www.amazon.com/Virtual-5-1-surround-External-Sound-Card/dp/B000N35A0Y - I have a few of these, but according to the reviews they aren't very good.
- http://www.amazon.com/Syba-SD-CM-UAUD-Adapter-C-Media-Chipset/dp/B001MSS6CS/ - These look better and aren't much more expensive.
Either should be able to generate sound using a SoundFont (here's the specification), and they'll probably look identical to the RPi. SoundFonts are easily downloadable from several sites. This posting shows how to tell the RPi to use a USB sound card instead of its on-board sound generator, and there's another similar posting.
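From what I remember of that posting (worth double-checking against it), the trick is in /etc/modprobe.d/alsa-base.conf: Debian ships a line that de-prioritizes USB audio, and changing the index makes the USB card the default ALSA device. A sketch of the change:

```
# /etc/modprobe.d/alsa-base.conf
# stock Debian line pushes USB audio to the bottom of the card list:
#   options snd-usb-audio index=-2
# make the USB sound card the default ALSA card instead:
options snd-usb-audio index=0
```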
The touch-sensitive keyboard
In college I built a very cool touch-sensitive keyboard that scanned the keys, watching for the capacitance of the human body. I've since learned that I can detect a light or heavy touch as a smaller or larger capacitance, which is easily translated into a measurable pulse width. I'm planning either a three-octave 37-key keyboard or a four-octave 49-key keyboard. Each key is connected to one pin of a CD4051 analog multiplexer, and only one of the CD4051s is enabled at a time, so each key is scanned individually. A short low-going pulse is applied to the trigger input of a 555 timer IC in monostable mode; its threshold and discharge pins are tied to a pull-up resistor and routed through the CD4051 to one of the keys. When you touch the key, it behaves like a capacitor to ground, increasing the output pulse width: the more firmly you touch the key, the wider the pulse. The trigger pulse needs to be shorter than the shortest expected RC time constant.

Building the circuit on a solderless breadboard with a 100K pull-up, I see a pulse width of 4 microseconds when not touching the key, and about 12 microseconds when touching it firmly. That works out to a parasitic capacitance on the breadboard of about 36 picofarads, and a total of about 110 picofarads when touching firmly. I can get a lower parasitic capacitance by replacing the breadboard with something sparser. The touch capacitance will change a bit when the copper key is protected with a layer of sealant; I don't want the copper turning green after a few months or years of playing. I'm going to need to physically mock up a key, sealant and all, to study realistic conditions.
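The 555 monostable pulse width is T = 1.1·R·C, so a measured width can be turned back into a capacitance. A quick sanity check on the breadboard numbers:

```python
# 555 monostable mode: T = 1.1 * R * C, so C = T / (1.1 * R)
R = 100e3  # the 100K pull-up on the breadboard

def capacitance_pf(pulse_width_s):
    """Recover capacitance (in picofarads) from a measured 555 pulse width."""
    return pulse_width_s / (1.1 * R) * 1e12

print(round(capacitance_pf(4e-6), 1))   # untouched: ~36.4 pF parasitic
print(round(capacitance_pf(12e-6), 1))  # firm touch: ~109.1 pF total
```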
The time period is measured by a CPLD (which pulses the 555 and controls the CD4051s), probably this one since it's powerful, affordable, and is already on a breakout board. It has 256 macrocells so it can hang onto all the key codes while scanning, and the RPi can fetch them later. (There has been some FPGA work done with the Raspberry Pi.) Pulse widths can be encoded into just two or three bits per key, and the RPi can notice how quickly the pulse width increases, and convert that to a MIDI key velocity.
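A sketch of the velocity idea, before it moves into the CPLD and RPi for real. The 2-bit encoding, the pulse-width endpoints, and the scan interval here are all my own placeholder assumptions, not measured design values:

```python
# Sketch: quantize a key's pulse width to 2 bits, then derive MIDI velocity
# from how quickly the quantized pressure code rises across successive scans.
# All numeric constants below are assumptions for illustration.

SCAN_INTERVAL_MS = 2  # assumed time between full keyboard scans

def quantize(pulse_us, untouched_us=4.0, full_us=12.0, bits=2):
    """Map a measured pulse width onto a small pressure code (0..2**bits-1)."""
    frac = (pulse_us - untouched_us) / (full_us - untouched_us)
    frac = max(0.0, min(1.0, frac))
    return round(frac * (2**bits - 1))

def velocity(codes):
    """MIDI velocity (1..127) from the rise rate of successive pressure codes."""
    scans_to_peak = next(i for i, c in enumerate(codes) if c == max(codes))
    rise_ms = max(1, scans_to_peak) * SCAN_INTERVAL_MS
    return max(1, min(127, int(127 * SCAN_INTERVAL_MS / rise_ms)))

print(quantize(12.0))          # firm touch -> code 3
print(velocity([0, 3]))        # fast attack -> high velocity
print(velocity([0, 1, 2, 3]))  # slow attack -> lower velocity
```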
The keyboard will consist of short lengths of 14-AWG solid bare copper electrical wire running along the surface of the PVC in the pattern shown at right. The pattern will be rotated counter-clockwise so that its longest dimension coincides with the long axis of the PVC pipe. To make the case look as good as possible, I'll drill the holes for the wire before I paint the PVC, then put in the wires, then apply the sealant.
The case, battery, auxiliary keyboard, etc
For some reason it appeals to me to build this thing into a length of PVC tubing. I'm thinking of a six-inch pipe three or four feet long. It would need a hinge along one edge. I'd like to apply a faux wood grain.
- http://www.dolls-n-daggers.com/Dolls/Forum/index.php?topic=297.0
- www-dot-ehow-dot-com slash how_4463556_make-plastic-look-like-wood.html - this is considered a spam site by Wikipedia
- www-dot-ehow-dot-com slash how_5166764_paint-woodgrains-plastic.html
I don't want to put a permanent power cord on this thing, so it needs a battery, probably a 14.4-volt battery from a cordless electric drill, going through a buck converter such as an LM2575. I still need to think about how to charge the battery in some convenient way.
The Raspberry Pi needs 5 volts at 0.7 amps. Hopefully the remaining 0.3 amps will cover everything else, but the thing should be sufficiently modular that moving to a beefier regulator isn't a problem.
What I'm calling the "auxiliary" keyboard is a separate thing from the touch-sensitive keyboard, which is used to play notes. This second keyboard handles transposition, selection of soundfonts, any effects like echo or reverb or wah-wah or whatever, and any other additional control functionality that the thing ends up needing. I envision this being a hexadecimal keypad (or smaller), with a few digits of seven-segment display, or maybe something more colorful.
Miscellaneous
I want the software for the thing to be a mix of Python and C, with as much high-level behavior as possible written in Python. How to do this without killing latency? One approach is to write a fast little web server on the device in C. It has endpoints to scan the keyboard and to send note events to the sound generator, which get pinged by a fairly precise timer-based repeating request. Then there are other endpoints for other stuff which Python will ping when it needs "hardware access". The web server handles all requests fast, so Python's requests don't bog anything down. The nice thing about setting up a web server like that is I can plug in an Ethernet cable and develop all my Python code on a laptop.
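Whichever way the hardware access goes, the core of the keyboard-scan path is a diff of successive scans into NOTE-ON/NOTE-OFF events. A Python sketch (the key-to-note mapping and function names are mine, for illustration):

```python
# Sketch: turn two successive key scans into MIDI-style note events.
# BASE_NOTE and the calling convention are assumptions, not a design.

NOTE_ON, NOTE_OFF = 0x90, 0x80
BASE_NOTE = 48  # assumed: key 0 maps to MIDI note 48 (C3)

def diff_scans(prev, curr, velocity=64):
    """Compare two scans (iterables of pressed key numbers) and emit
    (status, note, velocity) tuples for keys that changed state."""
    events = []
    for key in sorted(set(curr) - set(prev)):   # newly pressed
        events.append((NOTE_ON, BASE_NOTE + key, velocity))
    for key in sorted(set(prev) - set(curr)):   # newly released
        events.append((NOTE_OFF, BASE_NOTE + key, 0))
    return events

print(diff_scans((0,), (0, 4, 7)))  # two new keys -> two NOTE-ONs
print(diff_scans((0, 4, 7), ()))    # all released -> three NOTE-OFFs
```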
Maybe the right thing is to write a kernel driver for all the hardware stuff which handles all requests fast. There could still be a web server wrapped around it for developing on a laptop, and actually the web server could then be written in Python since it's not responsible for performance. Besides, I could always use more practice writing another Linux kernel driver.
Google Custom Search Engine for Raspberry Pi: bit.ly slash T98WfM (blacklisted by Wikipedia)
Having trouble with my little USB sound card gadget, consulting http://elinux.org/R-Pi_Troubleshooting#Sound_does_not_work_at_all.2C_or_in_some_applications.
cd /opt/vc/src/hello_pi/
./rebuild.sh
cd hello_audio
# sound goes out the HDMI, this works but it's the wrong sound hardware
./hello_audio.bin 1
# sound goes thru the USB sound card or the on-board sound card,
# depending on settings in /etc/modprobe.d/alsa-base.conf
./hello_audio.bin
amixer set 'Speaker' -- 151   # make the volume audible, didn't help
I switched to a different USB sound card. If I install TiMidity++ on the RPi, I can play MIDI files just fine, and the sound quality isn't horrible. That means TiMidity++ is parsing the MIDI file into timed MIDI events (NOTE-ON, NOTE-OFF, etc) and feeding them to the sound card. So I traced through these two call chains:
typedef struct {
  int32 time;
  uint8 type, channel, a, b;
} MidiEvent;  /* in playmidi.h */
- int play_midi_file(char *fn) in playmidi.c
- static int play_midi_load_file(char *fn, MidiEvent **event, int32 *nsamples) in playmidi.c
- MidiEvent *read_midi_file(struct timidity_file *tf, int32 *count, int32 *sp, char *fn) in readmidi.c
- read_smf_file in readmidi.c
- read_smf_track in readmidi.c
- read_sysex_event in readmidi.c
- parse_sysex_event in readmidi.c
- int play_midi_file(char *fn) in playmidi.c
- static int play_midi(MidiEvent *eventlist, int32 samples)
- int play_event(MidiEvent *ev) in playmidi.c
- static void note_on(MidiEvent *e) in playmidi.c
- static void start_note(MidiEvent *e, int i, int vid, int cnt) in playmidi.c
- void ctl_note_event(int noteID) in playmidi.c
After banging my head on this quite a bit, I stumbled across http://linux-audio.com/TiMidity-howto.html which recommended that I should do this:
timidity -iA -B2,8 -Os -EFreverb=0
This sets up TiMidity as an ALSA MIDI server with (in my case) ports 129:0, 129:1, 129:2, and 129:3. Apparently I only need 129:0, and then I can do either of these:
pmidi -p129:0 foo.midi   # play a midi file thru the sound card
aconnect 64:0 129:0      # assuming 64:0 is the address for a keyboard
So you'd think I was nearly done, right? All I need to do is make my keyboard look like a MIDI keyboard and give it an address like 64:0. So you would think, but you'd be mistaken.
TiMidity does the wavetable synthesis on the CPU. On the RPi, the little CPU meter shows that the CPU is pretty thoroughly choked with just four piano voices. Not a pretty sight. We need an arrangement where the sound generation is DONE IN HARDWARE although this has seemingly fallen out of favor since the Sound Blaster AWE32.
On Saturday I stopped into Microcenter and looked around, ostensibly shopping for a USB keyboard and mouse for the RPi (which I did indeed find and buy), but I also noticed this offering from Sparkfun. It's an Arduino shield with a music chip on it, and the chip takes serial MIDI commands at MIDI's awkward 31.25 kbaud rate and plays any of a vast number of instruments. So if I can get the RPi to pump out MIDI bytes at this strange baud rate, and if the VS1053 chip sounds decent, then I'm OK. But I leave myself at the mercy of that chip for both sound quality and availability.
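Whatever ends up carrying the bytes, the messages themselves are simple: three bytes each for the channel voice messages, per the MIDI 1.0 spec (getting a Linux UART to run at 31250 baud is a separate problem). A sketch, with helper names of my own:

```python
# Sketch: pack standard 3-byte MIDI channel messages for a serial MIDI
# device like the VS1053. Byte layout is from the MIDI 1.0 spec.

def note_on(note, velocity, channel=0):
    """NOTE-ON: status byte 0x9n, then 7-bit note and velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """NOTE-OFF: status byte 0x8n; velocity byte conventionally 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

print(note_on(60, 100).hex())   # middle C, velocity 100 -> 903c64
print(note_off(60).hex())       # -> 803c00
```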
The right solution, of course, is to design a FPGA circuit that accepts SoundFonts and does real multi-voice wavetable synthesis. Crap, that's a lot of work. I think for now I'll throw myself on the mercy of the VS1053 and save the FPGA circuit for Revision 2.