<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Author" content="Chip Fleming">
<meta name="GENERATOR" content="Mozilla/4.7 [en] (Win95; U) [Netscape]">
<title>Tutorial on Convolutional Coding with Viterbi Decoding--Description of the Data Generation, Convolutional Encoding, Channel Mapping and AWGN, and Quantizing Algorithms</title>
</head>
<body>
<a NAME="algorithms"></a><b><font face="Arial"><font size=+1>Description
of the Algorithms&nbsp; (Part 1)</font></font></b>
<p>&nbsp;The steps involved in simulating a communication channel using
convolutional encoding and Viterbi decoding are as follows:
<ul>
<li>
<a href="#genalgorithm">Generate the data</a> to be transmitted through
the channel; the result is binary data bits</li>

<li>
<a href="#conalgorithm">Convolutionally encode</a> the data; the result
is channel symbols</li>

<li>
<a href="#mapping">Map the one/zero channel symbols</a> onto an antipodal
baseband signal, producing transmitted channel symbols</li>

<li>
<a href="#addnoise">Add noise</a> to the transmitted channel symbols; the
result is received channel symbols</li>

<li>
<a href="#quantizing">Quantize</a> the received channel levels; one-bit
quantization is called hard-decision, and two- to n-bit quantization is
called soft-decision (n is usually three or four)</li>

<li>
<a href="algrthms2.html">Perform Viterbi decoding</a> on the quantized
received channel symbols; the result is again binary data bits</li>

<li>
Compare the decoded data bits to the transmitted data bits and count the
number of errors.</li>
</ul>
<i>Many of you will notice that I left out the steps of modulating the
channel symbols onto a transmitted carrier, and then demodulating the received
carrier to recover the channel symbols. You're right, but we can accurately
model the effects of AWGN even though we bypass those steps.</i>
<p><a NAME="genalgorithm"></a><b><i><font face="Arial">Generating the Data</font></i></b>
<p>Generating the data to be transmitted through the channel can be accomplished
quite simply by using a random number generator. The C library provides one
that produces a uniform distribution of numbers on the interval 0 to a
maximum value: <tt>rand ()</tt>. Using this function, we can say that any value
less than half of the maximum value is a zero; any value greater than or
equal to half of the maximum value is a one.
<p><a NAME="conalgorithm"></a><b><i><font face="Arial">Convolutionally
Encoding the Data</font></i></b>
<p>Convolutionally encoding the data is accomplished using a shift register
and associated combinatorial logic that performs modulo-two addition. (A
shift register is merely a chain of flip-flops wherein the output of the
nth flip-flop is tied to the input of the (n+1)th flip-flop. Every time
the active edge of the clock occurs, the input to the flip-flop is clocked
through to the output, and thus the data are shifted over one stage.) The
combinatorial logic is often in the form of cascaded exclusive-or gates.
As a reminder, exclusive-or gates are two-input, one-output gates often
represented by the logic symbol shown below,
<center>
<p><img SRC="figs/xor_gate.gif" ALT="exclusive-or gate symbol" height=64 width=93></center>

<p>that implement the following truth-table:
<br>&nbsp;
<br>&nbsp;
<center><table BORDER CELLPADDING=7 WIDTH="218" >
<tr>
<td VALIGN=TOP WIDTH="28%">
<center><b><tt>Input A</tt></b></center>
</td>

<td VALIGN=TOP WIDTH="27%">
<center><b><tt>Input B</tt></b></center>
</td>

<td VALIGN=TOP WIDTH="45%">
<center><b><tt>Output</tt></b>
<p><b><tt>(A xor B)</tt></b></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="28%">
<center><tt>0</tt></center>
</td>

<td VALIGN=TOP WIDTH="27%">
<center><tt>0</tt></center>
</td>

<td VALIGN=TOP WIDTH="45%">
<center><tt>0</tt></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="28%">
<center><tt>0</tt></center>
</td>

<td VALIGN=TOP WIDTH="27%">
<center><tt>1</tt></center>
</td>

<td VALIGN=TOP WIDTH="45%">
<center><tt>1</tt></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="28%">
<center><tt>1</tt></center>
</td>

<td VALIGN=TOP WIDTH="27%">
<center><tt>0</tt></center>
</td>

<td VALIGN=TOP WIDTH="45%">
<center><tt>1</tt></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="28%">
<center><tt>1</tt></center>
</td>

<td VALIGN=TOP WIDTH="27%">
<center><tt>1</tt></center>
</td>

<td VALIGN=TOP WIDTH="45%">
<center><tt>0</tt></center>
</td>
</tr>
</table></center>

<p>The exclusive-or gate performs modulo-two addition of its inputs. When
you cascade q two-input exclusive-or gates, with the output of the first
one feeding one of the inputs of the second one, the output of the second
one feeding one of the inputs of the third one, etc., the output of the
last one in the chain is the modulo-two sum of the q + 1 inputs.
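<p><i>A quick C sketch of that point (the function name is mine, not from the tutorial's code): cascading q exclusive-or gates over q + 1 inputs computes their modulo-two sum, i.e. the parity of the inputs.</i>

```c
/* Modulo-two sum of n bits, computed as a chain of two-input XORs:
   ((bits[0] ^ bits[1]) ^ bits[2]) ^ ... */
int mod2_sum(const int *bits, int n)
{
    int acc = bits[0];
    for (int i = 1; i < n; i++)
        acc ^= bits[i];            /* each XOR is a modulo-two add */
    return acc;
}
```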
<p>Another way to illustrate the modulo-two adder, and the way that is
most commonly used in textbooks, is as a circle with a + symbol inside,
thus:
<center>
<p><img SRC="figs/ringsum.gif" ALT="modulo-two adder symbol" height=48 width=48></center>

<p>Now that we have the two basic components of the convolutional encoder
(flip-flops comprising the shift register and exclusive-or gates comprising
the associated modulo-two adders) defined, let's look at a picture of a
convolutional encoder for a rate 1/2, K = 3, m = 2 code:
<br>&nbsp;
<br>&nbsp;
<br>
<center>
<p><img SRC="figs/ce_7_5_a.gif" ALT="rate 1/2 K = 3 (7, 5) convolutional encoder" height=232 width=600></center>

<p>In this encoder, data bits are provided at a rate of k bits per second.
Channel symbols are output at a rate of n = 2k symbols per second. The
input bit is stable during the encoder cycle. The encoder cycle starts
when an input clock edge occurs. When the input clock edge occurs, the
output of the left-hand flip-flop is clocked into the right-hand flip-flop,
the previous input bit is clocked into the left-hand flip-flop, and a new
input bit becomes available. Then the outputs of the upper and lower modulo-two
adders become stable. The output selector (SEL A/B block) cycles through
two states: in the first state, it selects and outputs the output of the
upper modulo-two adder; in the second state, it selects and outputs the
output of the lower modulo-two adder.
<p>The encoder shown above encodes the K = 3, (7, 5) convolutional code.
The octal numbers 7 and 5 represent the code generator polynomials, which
when read in binary (111<sub>2</sub> and 101<sub>2</sub>) correspond to
the shift register connections to the upper and lower modulo-two adders,
respectively. This code has been determined to be the "best" code for rate
1/2, K = 3. It is the code I will use for the remaining discussion and
examples, for reasons that will become readily apparent when we get into
the Viterbi decoder algorithm.
<p>Let's look at an example input data stream, and the corresponding output
data stream:
<p>Let the input sequence be 010111001010001<sub>2</sub>.
<p>Assume that the outputs of both of the flip-flops in the shift register
are initially cleared, i.e. their outputs are zeroes. The first clock cycle
makes the first input bit, a zero, available to the encoder. The flip-flop
outputs are both zeroes. The inputs to the modulo-two adders are all zeroes,
so the output of the encoder is 00<sub>2</sub>.
<p>The second clock cycle makes the second input bit available to the encoder.
The left-hand flip-flop clocks in the previous bit, which was a zero, and
the right-hand flip-flop clocks in the zero output by the left-hand flip-flop.
The inputs to the top modulo-two adder are 100<sub>2</sub>, so the output
is a one. The inputs to the bottom modulo-two adder are 10<sub>2</sub>,
so the output is also a one. So the encoder outputs 11<sub>2</sub> for
the channel symbols.
<p>The third clock cycle makes the third input bit, a zero, available to
the encoder. The left-hand flip-flop clocks in the previous bit, which
was a one, and the right-hand flip-flop clocks in the zero from two bit-times
ago. The inputs to the top modulo-two adder are 010<sub>2</sub>, so the
output is a one. The inputs to the bottom modulo-two adder are 00<sub>2</sub>,
so the output is zero. So the encoder outputs 10<sub>2</sub> for the channel
symbols.
<p>And so on. The timing diagram shown below illustrates the process:
<br>&nbsp;
<br>&nbsp;
<br>
<center>
<p><img SRC="figs/ce_td.gif" ALT="timing diagram for rate 1/2 convolutional encoder" height=322 width=600></center>

<p><br>
<br>
<br>
<p>After all of the inputs have been presented to the encoder, the output
sequence will be:
<p>00 11 10 00 01 10 01 11 11 10 00 10 11 00 11<sub>2</sub>.
<p>Notice that I have paired the encoder outputs: the first bit in each
pair is the output of the upper modulo-two adder; the second bit in each
pair is the output of the lower modulo-two adder.
<p>You can see from the structure of the rate 1/2 K = 3 convolutional encoder
and from the example given above that each input bit has an effect on three
successive pairs of output symbols. That is an extremely important point,
and it is what gives the convolutional code its error-correcting power.
The reason why will become evident when we get into the Viterbi decoder
algorithm.
<p>Now if we are only going to send the 15 data bits given above, in order
for the last bit to affect three pairs of output symbols, we need to output
two more pairs of symbols. This is accomplished in our example encoder
by clocking the convolutional encoder flip-flops two (= m) more times,
while holding the input at zero. This is called "flushing" the encoder,
and results in two more pairs of output symbols. The final binary output
of the encoder is thus 00 11 10 00 01 10 01 11 11 10 00 10 11 00 11 10
11<sub>2</sub>. If we don't perform the flushing operation, the last m
bits of the message have less error-correction capability than the earlier
bits of the message had. This is a pretty important thing to remember
if you're going to use this FEC technique in a burst-mode environment.
So is the step of clearing the shift register at the beginning of each burst.
The encoder must start in a known state and end in a known state for the
decoder to be able to reconstruct the input data sequence properly.
<p>Now, let's look at the encoder from another perspective. You can think
of the encoder as a simple state machine. The example encoder has two bits
of memory, so there are four possible states. Let's give the left-hand
flip-flop a binary weight of 2<sup>1</sup>, and the right-hand flip-flop
a binary weight of 2<sup>0</sup>. Initially, the encoder is in the all-zeroes
state. If the first input bit is a zero, the encoder stays in the all-zeroes
state at the next clock edge. But if the input bit is a one, the encoder
transitions to the 10<sub>2</sub> state at the next clock edge. Then, if
the next input bit is zero, the encoder transitions to the 01<sub>2</sub>
state; otherwise, it transitions to the 11<sub>2</sub> state. The following
table gives the next state given the current state and the input, with
the states given in binary:
<br>&nbsp;
<br>&nbsp;
<center><table BORDER CELLSPACING=2 CELLPADDING=7 WIDTH="282" >
<tr>
<td VALIGN=TOP WIDTH="33%"><font face="Arial"><font size=-1>&nbsp;</font></font></td>

<td VALIGN=TOP COLSPAN="2" WIDTH="67%">
<center><a NAME="statetable"></a><b><font face="Arial"><font size=-1>Next
State, if&nbsp;</font></font></b></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><b><font face="Arial"><font size=-1>Current State</font></font></b></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><b><font face="Arial"><font size=-1>Input = 0:</font></font></b></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><b><font face="Arial"><font size=-1>Input = 1:</font></font></b></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>00</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>00</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>10</font></font></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>01</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>00</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>10</font></font></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>10</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>01</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>11</font></font></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>11</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>01</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>11</font></font></center>
</td>
</tr>
</table></center>

<br>&nbsp;
<p>The above table is often called a state transition table. We'll refer
to it as the <tt>next state</tt> table. Now let us look at a table
that lists the channel output symbols, given the current state and the
input data, which we'll refer to as the <tt>output</tt> table:
<br>&nbsp;
<br>&nbsp;
<center><table BORDER CELLSPACING=2 CELLPADDING=7 WIDTH="282" >
<tr>
<td VALIGN=TOP WIDTH="33%"></td>

<td VALIGN=TOP COLSPAN="2" WIDTH="67%">
<center><a NAME="outputtable"></a><b><font face="Arial"><font size=-1>Output
Symbols, if</font></font></b></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><b><font face="Arial"><font size=-1>Current State</font></font></b></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><b><font face="Arial"><font size=-1>Input = 0:</font></font></b></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><b><font face="Arial"><font size=-1>Input = 1:</font></font></b></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>00</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>00</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>11</font></font></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>01</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>11</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>00</font></font></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>10</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>10</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>01</font></font></center>
</td>
</tr>

<tr>
<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>11</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>01</font></font></center>
</td>

<td VALIGN=TOP WIDTH="33%">
<center><font face="Arial"><font size=-1>10</font></font></center>
</td>
</tr>
</table></center>

<br>&nbsp;
<p>You should now see that with these two tables, you can completely describe
the behavior of the example rate 1/2, K = 3 convolutional encoder. Note
that both of these tables have 2<sup>(K - 1)</sup> rows, and 2<sup>k</sup>
columns, where K is the constraint length and k is the number of bits input
to the encoder for each cycle. These two tables will come in handy when
we start discussing the Viterbi decoder algorithm.
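<p><i>For what it's worth, you don't have to type these tables in by hand: they follow directly from the encoder's register equations. A sketch in C (the function names are mine), with the state packed as (left flip-flop &lt;&lt; 1) | right flip-flop:</i>

```c
/* Next state of the (7, 5) encoder: the input becomes the left
   flip-flop and the old left flip-flop shifts to the right. */
int next_state(int state, int input)
{
    return ((input << 1) | (state >> 1)) & 3;
}

/* Output symbol pair, packed as (upper adder << 1) | lower adder. */
int output_symbols(int state, int input)
{
    int s1 = (state >> 1) & 1;         /* left flip-flop    */
    int s0 = state & 1;                /* right flip-flop   */
    int upper = input ^ s1 ^ s0;       /* generator 7 = 111 */
    int lower = input ^ s0;            /* generator 5 = 101 */
    return (upper << 1) | lower;
}
```

<p><i>Looping these two functions over the four states and two inputs regenerates both tables above.</i>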
<p><a NAME="mapping"></a><b><i><font face="Arial">Mapping the Channel Symbols
to Signal Levels</font></i></b>
<p>Mapping the one/zero output of the convolutional encoder onto an antipodal
baseband signaling scheme is simply a matter of translating zeroes to +1s
and ones to -1s. This can be accomplished by performing the operation y
= 1 - 2x on each convolutional encoder output symbol.
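<p><i>In C, that mapping is a one-liner:</i>

```c
/* Map a one/zero channel symbol to an antipodal baseband level:
   y = 1 - 2x, so 0 -> +1 and 1 -> -1. */
int to_antipodal(int symbol)
{
    return 1 - 2 * symbol;
}
```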
<p><a NAME="addnoise"></a><b><i><font face="Arial">Adding Noise to the
Transmitted Symbols</font></i></b>
<p>Adding noise to the transmitted channel symbols produced by the convolutional
encoder involves generating Gaussian random numbers, scaling the numbers
according to the desired energy per symbol to noise density ratio, E<sub>s</sub>/N<sub>0</sub>,
and adding the scaled Gaussian random numbers to the channel symbol values.
<p>For the uncoded channel, E<sub>s</sub>/N<sub>0</sub> = E<sub>b</sub>/N<sub>0</sub>,
since there is one channel symbol per bit.&nbsp; However, for the coded
channel, E<sub>s</sub>/N<sub>0</sub> = E<sub>b</sub>/N<sub>0</sub> + 10log<sub>10</sub>(k/n).&nbsp;
For example, for rate 1/2 coding, E<sub>s</sub>/N<sub>0</sub> = E<sub>b</sub>/N<sub>0</sub>
+ 10log<sub>10</sub>(1/2) = E<sub>b</sub>/N<sub>0</sub> - 3.01 dB.&nbsp;
Similarly, for rate 2/3 coding, E<sub>s</sub>/N<sub>0</sub> = E<sub>b</sub>/N<sub>0</sub>
+ 10log<sub>10</sub>(2/3) = E<sub>b</sub>/N<sub>0</sub> - 1.76 dB.
<p>The Gaussian random number generator is the only interesting part of
this task. C only provides a uniform random number generator, <tt>rand()</tt>.
In order to obtain Gaussian random numbers, we take advantage of relationships
between uniform, Rayleigh, and Gaussian distributions:
<p>Given a uniform random variable U, a Rayleigh random variable R can
be obtained by:
<p><img SRC="figs/eqn01.gif" ALT="equation for Rayleigh random deviate given uniform random deviate" height=30 width=297 align=ABSCENTER>
<p>where&nbsp;<img SRC="figs/eqn02.gif" height=24 width=24 align=ABSCENTER>is
the variance of the Rayleigh random variable, and given R and a second
uniform random variable V, two Gaussian random variables G and H can be
obtained by
<p><i>G</i> = <i>R</i> cos <i>V</i> and <i>H</i> = <i>R</i> sin <i>V</i>.
<p>In the AWGN channel, the signal is corrupted by additive noise, n(t),
which has the power spectrum <i>N<sub>0</sub></i>/2 watts/Hz. The variance&nbsp;<img SRC="figs/eqn02.gif" ALT="variance" height=24 width=24 align=ABSBOTTOM>of
this noise is equal to&nbsp;<img SRC="figs/eqn03.gif" ALT="noise density div by two" height=22 width=38 align=TEXTTOP>.
If we set the energy per symbol <i>E<sub>s</sub></i> equal to 1, then&nbsp;<img SRC="figs/eqn04.gif" ALT="equation relating variance to SNR" height=28 width=110 align=ABSBOTTOM>.
So&nbsp;<img SRC="figs/eqn05.gif" ALT="equation for AWGN st dev given SNR" height=28 width=139 align=ABSCENTER>.
<p><a NAME="quantizing"></a><b><i><font face="Arial">Quantizing the Received
Channel Symbols</font></i></b>
<p>An ideal Viterbi decoder would work with infinite precision, or at least
with floating-point numbers. In practical systems, we quantize the received
channel symbols with one or a few bits of precision in order to reduce
the complexity of the Viterbi decoder, not to mention the circuits that
precede it. If the received channel symbols are quantized to one-bit precision
(&lt; 0 V = 1, &ge; 0 V = 0), the result is called hard-decision data.
If the received channel symbols are quantized with more than one bit of
precision, the result is called soft-decision data. A Viterbi decoder with
soft-decision data inputs quantized to three or four bits of precision
can perform about 2 dB better than one working with hard-decision inputs.
The usual quantization precision is three bits. More bits provide little
additional improvement.
<p>The selection of the quantizing levels is an important design decision
because it can have a significant effect on the performance of the link.
The following is a very brief explanation of one way to set those levels.
Let's assume our received signal levels in the absence of noise are -1 V
= 1, +1 V = 0. With noise, our received signal has mean +/- 1 and standard
deviation&nbsp;<img SRC="figs/eqn05.gif" ALT="equation for AWGN st dev given SNR" height=28 width=139 align=ABSCENTER>.
Let's use a uniform, three-bit quantizer having the input/output relationship
shown in the figure below, where D is a decision level that we will calculate
shortly:
<center>
<p><img SRC="figs/quantize.gif" ALT="8-level quantizer function plot" height=342 width=384></center>

<p>The decision level, D, can be calculated according to the formula&nbsp;<img SRC="figs/eqn06.gif" ALT="equation for quantizer decision level" height=28 width=228 align=ABSCENTER>,
where E<sub>s</sub>/N<sub>0</sub> is the energy per symbol to noise density
ratio<i>. (The above figure was redrawn from Figure 2 of Advanced Hardware
Architecture's ANRS07-0795, "Soft Decision Thresholds and Effects on Viterbi
Performance". See the </i><a href="fecbiblio.html">bibliography</a><i>&nbsp;
for a link to their web pages.)</i>
<p>Click <a href="algrthms2.html">here</a> to proceed to the description
of the Viterbi decoding algorithm itself...
<p>Or click on one of the links below to go to the beginning of that section:
<p>&nbsp;<a href="tutorial.html">Introduction</a>
<br>&nbsp;<a href="algrthms2.html">Description of the Algorithms&nbsp;
(Part 2)</a>
<br>&nbsp;<a href="examples.html">Simulation Source Code Examples</a>
<br>&nbsp;<a href="simrslts.html">Example Simulation Results</a>
<br>&nbsp;<a href="fecbiblio.html">Bibliography</a>
<br>&nbsp;<a href="tutorial.html#specapps">About Spectrum Applications...</a>
<br>&nbsp;
<br>&nbsp;
<br>
<br>
<center>
<p><img SRC="figs/stripe.gif" height=6 width=600></center>

</body>
</html>