Monday, January 26, 2009

Frequency-based aural separation for CW


For the last few months, I've been lusting after an Elecraft K3 HF Transceiver. Besides its excellent receiver performance, one of the features I'm intrigued by is the diversity receive capability. If you add the KRX3 subreceiver module, you get a complete, no-compromises second receiver in the box. The diversity receive feature allows you to link both receivers together so they track the same frequency, with a different antenna connected to each receiver; presumably, you'd choose antennas with different receive characteristics (you can also run both receivers independently if you want). When a signal fades in one antenna, it might be increasing in strength in the other. For those of you who have 802.11 access points in your house, that's why there are two antennas on the back (I don't think there are two separate receivers in the AP - they probably do some trickery to automatically switch antennas as needed, but you get the point).

That's all very interesting, but what's very cool is that the Elecraft engineers opted to put a very capable DSP unit - the human brain - in this loop, by allowing the user to put the two receivers' outputs into opposite sides of a stereo mix. Although I've never experienced this myself, since I don't (yet) own a K3, I've heard that the signals seem to "float" in the stereo field as they become more or less audible in each antenna/receiver combo. If you want to hear it first hand, listen to the samples on N1EU's web site.

If you're just interested in being able to copy a weak signal, I suppose a mono mix would have been fine. But I'll bet there are some real advantages under crowded band conditions where the human brain is able to use that stereo field separation to "sort out" the desired signal from the undesired ones. Maybe the fact that the desired signal is floating between the operator's ears in a slightly different way than the interfering signal would allow copy with the dual receiver system when it would be impossible with a single receiver.


So that got me thinking (always a dangerous thing). Is there a way to use DSP technology to separate CW signals, without separate receivers, by frequency and spread them out across a stereo field, such that an operator can get better copy than s/he could listening to the signals in a mono mix? In other words, can someone "lock on" to a CW signal better if that signal is separated from interfering signals in the stereo field?

My first thought was "certainly someone else has thought of this." And part of my reason for posting this on my blog is to see if anyone else has. I didn't find anything with the obvious Google searches, but maybe someone can point me to something...

My other thought was "why bother - with modern rigs with good roofing filters and high-quality DSP, you can dial the bandwidth down to 50 Hz with no filter ringing." That's a great answer if, say, you're chasing DX and you're interested in listening to exactly one signal at a time. But in contests, you get called off-frequency. Or, if you're a DX station and you've got a pileup of 100 stations calling you, you probably don't want them all exactly zero-beat on you (or on your advertised split frequency) - you'd never be able to sort them out. You might be using a filter of 500, 800, or maybe even 1800 Hz to widen the net.

An experiment

I don't have any direct experience with digital signal processing, but in a previous life, I was an orchestral trombonist and also dabbled with computer music applications. One application I was familiar with that would do the job here, and wouldn't require me to learn all about DSP, was an app called Pure Data, or pd. pd lets you graphically build sound-processing structures, and it's free.

The basic idea is to take a monophonic input signal and feed it into a parallel set of high-Q bandpass filters. The output of each filter is then sent to a specific position in the stereo mix. So, for example, CW signals at 400 Hz might all end up in the left channel, while signals at 500 Hz would end up in the center of the mix, and signals at 600 Hz would end up in the right channel.
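Since pd patches are graphical, here's a rough Python sketch of the same idea for readers who don't use pd. The function names, sample rate, and Q value are my own illustrative assumptions (and the per-band delay, discussed below, is left out here): a bank of high-Q band-pass biquads runs in parallel over the mono input, and each band is constant-power panned to its own spot in the stereo field.

```python
import math

def bandpass_biquad(samples, fc, q, fs=8000.0):
    """Second-order band-pass filter (RBJ cookbook, 0 dB peak gain)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    # Normalize by a0, then run direct-form I
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        out = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, out
        y.append(out)
    return y

def pan(sample, position):
    """Constant-power pan: position 0.0 = hard left, 1.0 = hard right."""
    theta = position * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)

def spread(samples, centers, fs=8000.0, q=20.0):
    """Split mono audio into bands; pan each band across the stereo field."""
    left = [0.0] * len(samples)
    right = [0.0] * len(samples)
    for i, fc in enumerate(centers):
        pos = i / (len(centers) - 1)  # low bands left, high bands right
        for n, y in enumerate(bandpass_biquad(samples, fc, q, fs)):
            l, r = pan(y, pos)
            left[n] += l
            right[n] += r
    return left, right
```

With band centers of 300-750 Hz, a 400 Hz CW tone mostly survives only the 400 Hz band's filter, so it comes out panned left of center, exactly the "each pitch gets its own spot" effect described above.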

I also recall reading that the Elecraft engineers incorporated a bit of delay into their diversity receive feature. Although I had no idea why that was important, I built the capability into my pd model, and as it turns out, it just doesn't work without it. Maybe a blog reader can explain that to me. In order to get things to "spread out" in the stereo mix sufficiently for my ears, I needed to add a delay of 0 to 10 milliseconds to the various bands.

300 Hz - Delay 0 ms - Pan hard left
350 Hz - Delay 1 ms
400 Hz - Delay 2 ms
450 Hz - Delay 3 ms
500 Hz - Delay 4 ms
550 Hz - Delay 5 ms
600 Hz - Delay 6 ms
650 Hz - Delay 7 ms
700 Hz - Delay 8 ms
750 Hz - Delay 9 ms - Pan hard right
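(As an aside, small interaural time differences like these are one of the cues the ear uses to localize sounds, which may be part of why the delays help the bands spread out.) The table above is simply a linear ramp in both delay and pan across the bands, so it can be generated rather than hand-entered; this function name and its defaults are illustrative, not part of the pd patch:

```python
def band_plan(n_bands=10, f_lo=300.0, f_step=50.0, max_delay_ms=9.0):
    """Generate (center_freq_hz, delay_ms, pan) tuples matching the table:
    pan runs 0.0 (hard left) to 1.0 (hard right), delay ramps 0..max."""
    bands = []
    for i in range(n_bands):
        frac = i / (n_bands - 1)
        bands.append((f_lo + i * f_step, frac * max_delay_ms, frac))
    return bands
```

Calling `band_plan()` with the defaults reproduces the table: the first entry is (300.0, 0.0, 0.0) (hard left, no delay) and the last is (750.0, 9.0, 1.0) (hard right, 9 ms).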

So the interesting parameters here are likely:

- The number of bands
- The bandwidth of each band
- The assignment of each band to a position in the mix (do they smoothly transition from low to high = left to right, or does the spectrum "circle around" multiple times with increasing frequency?)
- The amount of delay
- The distribution of delay times (e.g. is each band's delay close to that of its neighbors, or far from it?)
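To make the delay parameters concrete: in a sampled system, a per-band delay is just an integer sample shift. A minimal sketch, assuming an 8 kHz sample rate (the function name is my own):

```python
def delayed(samples, delay_ms, fs=8000.0):
    """Delay a band by a whole number of samples, padding with silence
    at the front and trimming the end to preserve the length."""
    n = int(round(delay_ms * fs / 1000.0))
    if n == 0:
        return list(samples)
    return [0.0] * n + list(samples[:len(samples) - n])
```

At 8 kHz, each millisecond of delay is 8 samples, so the 0-9 ms range in the table above spans shifts of 0 to 72 samples between the extreme bands.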

How's It Sound?

I know by now you're curious what this all sounds like. So here's an example. The audio was recorded on my rig during the recent CQWW 160m DX Contest. It's one minute long, and I had the 500 Hz crystal CW filter turned on. I chose this one minute of audio because there are a few stations fairly well spread out.

Audio Clip - no processing

Audio Clip - stereo separation processing enabled

My Verdict

To be perfectly honest, I'm not sure I can copy the weak stations any better with the processed audio than with the unprocessed audio. I'm curious whether you have the same experience.

I do have to say that I think the processed audio is a bit easier to listen to, and I wonder if it might reduce operator fatigue over a long contest weekend.

Further Experimentation

The parameters of this experiment really ought to be played with more, either by me, or someone who has more experience than I do with DSP technology. I don't understand how the delay affects the stereo spread, nor do I know how to measure how well the bandpass filters in pd are actually working.

What do you think? Please share your ideas in the feedback link for this post.

Thanks and 73!


1 comment:

Gordon Good said...

Now that I own a K3, I see that the "AFX" (audio effects) feature it provides does exactly what I did in this experiment (and it sounds remarkably similar).