Wednesday, May 27, 2009

Morse Code text session #9 052709

QST DE W7VHY = = A new study shows that an 18 month mission to Mars would expose astronauts to more space radiation than NASA shielding technology can handle. AR DE W7VHY SK

Amateur Television by Lee Bond, N7KC

May 27, 2009 Educational Radio Net, PSRG 53rd Session

Television is ubiquitous… vision-at-a-distance receiving equipment surrounds us completely. In today’s world television is a very mature technology; however, if you were born before 1950, you can very likely remember when there was no television. I remember vividly the moment that I first viewed a television set as a youngster, and it changed my life in the sense that I wanted to be a part of something very exciting. I was raised in a very small rural Ohio farming village and, while delivering papers late one evening, I peeked through the front door of a house rumored to have a television. This strange box was the talk of the town, the only one in the town, and the only external evidence of something going on in that house was the strange antenna structure attached to the chimney. Looking through the front door I could see a large wooden box labeled RCA and I was instantly captivated by the moving images on the small screen. From that moment in 1951 and forward I was intent on becoming an electrical engineer specializing in television technology.

The early wooden box TVs were very expensive devices and only the well-to-do enjoyed the early television experience. The Chicago World’s Fair in 1933-34 demonstrated a device for electronically reproducing a moving image, and one or two of my older family members had actually seen this device. RCA, DuMont, and Zenith… among others… had worked hard in those eighteen years between 1933 and 1951 to produce these first consumer television sets. They were, of course, black and white sets which operated with vacuum tubes, since any solid state device was years away in development. They were bulky, deep-chassis units with point-to-point wiring. A three dimensional wonderland when you stop to think about it.

Without doubt there were amateur radio operators who salivated at the thought of transmitting images via the ham bands, but generating the images for transmission was also a very expensive proposition. There were no small battery operated camcorders or inexpensive security cameras as we know today; rather, still images were generated by a flying spot scanner, and enormous studio cameras were required to produce moving video signals. Video recorders using two inch tape eventually enabled storage of video images but, for the most part, the early programs were live events filled with memorable errors.

Technology does march on and changes came rapidly. The physical size of television components started to shrink and performance was better and better. Mass production resulted in falling prices and, eventually, low voltage solid state devices replaced the vacuum tube. Well to do amateurs could actually afford quality video equipment.

Let me preface this next part by saying that I am not going to present a detailed account of how the various slow scan and fast scan signals are developed. Any ARRL Handbook has excellent sections which are easily read. I will just hit the high points and direct the listener to these other sources.

Amateur efforts at transmitting images can be broken into two parts: slow scan television and fast scan television. The NTSC fast scan video standard developed in the late 1940s and early 1950s remained intact until updated by color television requirements many years later. A consequence of NTSC fast scan video is the requirement of about 4.5 MHz of data bandwidth for a high quality display and, given that the signal is transmitted as a double sideband amplitude modulated signal with the lower sideband passed through a vestigial filter, the actual bandwidth required is about 6 MHz. Clearly, transmitting fast scan NTSC based video in the high frequency amateur bands is out of the question. The only region with available bandwidth is UHF and up, in the 70 cm wavelengths and shorter.

Enter slow scan image transmission. If you have a non-moving image such as a photographic slide it is possible to sequentially sample the slide in a line by line fashion and develop an analog signal which can modulate some carrier wave and be passed to a remote receiving device where the modulation is undone… so to speak… and the original image reconstructed. The scheme uses audio frequencies which fit within the normal voice audio range of the average amateur radio transmit/receive system. The penalty for using low data rates is time. A complete image might consist of 120 or 240 lines and take several tens of seconds to transmit. So, moving images are out. A number of schemes for processing and transmitting slow scan images have evolved over the years and today’s schemes are very robust and yield excellent results with minimal equipment. Digital cameras, computers, and scan converters can be combined to make slow scan color television an exciting pursuit and within the budget of most radio enthusiasts. Listen around 14.230 MHz to hear the characteristic warble of slow scan signals.
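The time penalty is easy to estimate. Here is a minimal Python sketch; the half-second line period is an assumed, illustrative figure, since each real slow scan mode defines its own line timing:

```python
# Back-of-the-envelope timing for a slow scan image.
SECONDS_PER_LINE = 0.5   # assumed line period; real SSTV modes vary


def transmit_time(lines, seconds_per_line=SECONDS_PER_LINE):
    """Total seconds to send an image of the given line count."""
    return lines * seconds_per_line


print(transmit_time(120))  # 60.0 seconds for a 120 line image
print(transmit_time(240))  # 120.0 seconds for a 240 line image
```

Either way, the result is tens of seconds per frame, which is why moving images are out.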

In contrast to slow scan techniques, fast scan television based upon the NTSC video standard has changed very little over the years. The idea is to process sequential images so fast that the persistence of the eye renders them continuous in appearance. The frame rate in analog fast scan television is 30 frames per second for black and white and about 29.97 frames per second for color. Both black & white and color frames consist of 525 lines of information which are divided into two fields each containing 262.5 lines. Bright scenes at 30 frames per second tend to flicker, so the two interlaced fields double the flicker rate and flicker is generally unnoticeable to the average eye. The huge bandwidth requirement for fast scan techniques is a result of large amounts of information processed in a short amount of time.
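The numbers in that paragraph fit together with simple arithmetic, sketched here in Python:

```python
# Arithmetic behind the NTSC frame structure described above.
FRAME_RATE_COLOR = 30 / 1.001      # ~29.97 frames per second for color
LINES_PER_FRAME = 525
FIELDS_PER_FRAME = 2               # two interlaced fields per frame

lines_per_field = LINES_PER_FRAME / FIELDS_PER_FRAME   # 262.5 lines
field_rate = FRAME_RATE_COLOR * FIELDS_PER_FRAME       # doubled flicker rate
line_rate = FRAME_RATE_COLOR * LINES_PER_FRAME         # horizontal line rate

print(f"Lines per field: {lines_per_field}")           # 262.5
print(f"Field (flicker) rate: {field_rate:.2f} Hz")    # ~59.94 Hz
print(f"Horizontal line rate: {line_rate:.2f} Hz")     # ~15734 Hz
```

That last figure, roughly 15,734 lines every second, is a good intuition for why fast scan needs megahertz of bandwidth while slow scan fits in an audio channel.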

Today’s market offers a plethora of fine video equipment perfectly suited to the amateur radio operator. Ebay is loaded with excellent video cameras from the security sector and a couple of manufacturers offer wide band video transmitters. Receiving fast scan video is as simple as capturing the radio signal and down converting it to a standard channel which is available on your home analog television set.

Analog television, especially color television, based on the NTSC standard was the first technology triumph of the 20th century in my view. Television changed our society in ways we never imagined. The digital computer is important but the television came first.

In summary, slow scan television is used principally for narrow band still image transmission in contrast to fast scan television which is associated with wide bandwidth high quality moving image transmission.

This concludes the setup discussion for Amateur Television. Are there any questions or comments with regard to tonight's discussion topic?

This is N7KC for the Wednesday night Educational Radio Net

Wednesday, May 20, 2009

Morse Code text session #8 052009

QST DE W7VHY = = Sunspot group 1017 is fading rapidly and probably will be gone by the end of the day. AR DE W7VHY SK

DSP 1: Introduction to Digital Signal Processing for Ham Radio, Bob, no. 52

Tonight is the first of what will be a multi-part session on Digital Signal Processing (DSP). Unlike Lee's impedance series we won't be building a set of concepts leading to DSP. DSP isn't a single concept; rather it is a catch-all term for several distinct and only loosely related methods. My approach to teaching DSP will be to break it down by how it is used in Ham Radio rather than by theoretical construct.

Also, DSP is used in many fields today besides Ham Radio but we will limit our discussions to Ham Radio. Of course, if you have knowledge you would like to share about non-Ham uses of DSP, you are welcome to share it but I won't be going into non-Ham uses myself.

Let's start with a very basic idea of what DSP is. First of all, the signal that is being processed usually starts out analog and ends up analog. Whether it is the analog audio signal received by your microphone that ends up as an analog radio wave transmission, or an analog radio wave reception that ends up as an analog audio wave coming out of the speaker, it is still analog at both ends and digital only in the processing circuitry. The exception to this is the ever growing list of digital modes that start off as digital information, in the form of characters, are converted to analog for radio transmission, and end up as characters again.

Once you are representing the signal digitally you can do your digital processing. Probably the most well known use is to create better filters than you can with analog circuits. Other uses are to create displays showing signals on a frequency line so that you can tune to the signal you want, or even just point and click in some cases. Less well known but equally important is that DSP is used to convert the audio signal from your microphone to the SSB, AM or FM that is sent out. Probably the ultimate use of DSP is the Software Defined Radio. I will go into these uses in detail in subsequent sessions.
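As a taste of what "processing" means once the signal is a stream of numbers, here is the simplest possible digital filter, a moving-average low-pass, sketched in Python. This is purely illustrative and not any particular radio's DSP code:

```python
# A minimal digital filter: average each sample with the samples
# just before it. High-frequency wiggle cancels; slow trends survive.

def moving_average(samples, taps=4):
    """Replace each sample with the mean of the last `taps` samples."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - taps + 1): i + 1]
        out.append(sum(window) / len(window))
    return out


noisy = [0, 8, 0, 8, 0, 8, 0, 8]   # rapidly alternating "noise"
print(moving_average(noisy))        # settles toward the mean value of 4
```

Real radio filters use many more taps with carefully chosen weights, but the principle is the same: arithmetic on stored samples takes the place of coils and capacitors.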

Right now I am going to go into more detail on the conversion between Analog and Digital. As I said earlier, in order to do digital signal processing, you must convert the signal from analog to digital, then do your processing, then convert it back to analog again. These steps are known as Analog to Digital Conversion (ADC) and Digital to Analog Conversion (DAC). The same abbreviations are used for the circuits that perform the steps. Usually these circuits are combined into a single Integrated Circuit, or chip, so you will commonly hear about an ADC or a DAC chip.

These conversions are done by the time-slice method. In this method, voltage measurements are taken of the analog wave at regular time intervals. You will need several measurements per wavelength in order to accurately describe the analog signal. The rule of thumb for a simple sine wave is that the frequency of taking voltage measurements should be at least twice as high as the frequency of the wave you are measuring. Keep in mind that complex waveforms can be thought of as the sum of sine waves of different frequencies and amplitudes. So to accurately represent a complex waveform your sample frequency must be at least twice as high as the highest frequency sine wave that is a component of your waveform.

Typically when the voltage is measured it is stored as a 16 bit binary value, which gives 65,536 discrete levels in all, or 32,768 levels on each side of zero when you allow for positive and negative swings. This affects the dynamic range of your system, that is, how much difference you can have between the smallest and largest amplitudes. In Software Defined Radios where a PC is an integral part of the radio, you can use floating point processing to greatly increase the number of voltage levels represented and thus increase dynamic range, at least for internal processing. Ultimately it must be converted back to analog through a DAC which will probably be only 16 bit.
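The two rules above, the sampling rule of thumb and the dynamic range of a 16-bit converter, can be put into a few lines of Python:

```python
import math

# The Nyquist rule of thumb and ideal converter dynamic range.

def min_sample_rate(highest_freq_hz):
    """Sample at least twice the highest frequency component."""
    return 2 * highest_freq_hz


def dynamic_range_db(bits):
    """Approximate dynamic range of an ideal N-bit converter in dB."""
    return 20 * math.log10(2 ** bits)


print(min_sample_rate(3000))        # 6000 samples/s for 3 kHz voice audio
print(round(dynamic_range_db(16)))  # ~96 dB for a 16 bit ADC
```

This is why PC sound cards sample at 44,100 or 48,000 samples per second: comfortably more than twice the highest audio frequency they are asked to capture.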

As we continue on and go into more detail about digital signal processing, keep in mind that for voice communication we always start with an analog audio signal, have an analog wave traveling through space and end with an analog audio signal on the other end. It is only the processing internal to the sending radio and receiving radio that works in digital.

Wednesday, May 13, 2009

Morse Code text session #7 051309

QST DE W7VHY = Today, astronomers are monitoring an enormous patch of seething magnetism churning through the sun's surface in a splash of bright, white froth. AR DE W7VHY SK

COAXIAL CABLE, Jim Hadlock, K7WA, no. 51

May 13, 2009 – Educational Radio Net
Jim Hadlock K7WA


Coaxial cable transmission line is commonly used to connect our transceivers to antennas. It is used for other purposes as well, such as Matching Sections, Baluns, Traps, and Stubs. Sooner or later, each of us will probably need to add coax to our home or mobile radio systems. Tonight’s session will cover the different types of coax and the criteria to consider when deciding what cable is best for a given situation. In the spirit of amateur radio, I am going to try to avoid using manufacturer and distributor names on the air; see the Educational Radio Blog for more information on specific products and suppliers.

Coaxial type cable was first used for transatlantic telegraph cable communication in the late 1800s. These early cables were composed of a central conductor encased in a cylindrical insulating material, and were considered coaxial because the seawater that surrounded them completed their return circuits. Later developments led, in 1929, to a patented design by two AT&T engineers for a coaxial cable system intended for transmission of television signals. During World War II the military accelerated the development and production of flexible, solid-dielectric coax. It was at this time that coax acquired its now-familiar RG/U (Radio Guide Utility) numbers. After the war, amateur radio operators began using the readily available surplus coaxial cable for their antenna feedline systems.

Coaxial cable consists of an inner conductor with an insulated covering (dielectric), which is then covered with a braided wire or foil sheathing (shield). The sheathing is covered with a flexible outer jacket. Although coax has greater loss than twin-lead or open-wire transmission lines, it has some important advantages: it can be buried underground, run inside a metal mast or taped to a tower without harmful effects. In addition, our modern radios are designed for unbalanced coaxial cable transmission line.


Looking at a table of transmission line characteristics (ARRL Handbook (2005): Nominal Characteristics of Commonly Used Transmission Lines (Table 21.1) page 21.2-21.3) we see several characteristics specified for each type of cable:

RG or Type Number - general type or family
Part Number - manufacturer's part number
Impedance - 50 and 75 ohm are most common; determined by the size and spacing of the two conductors, and the dielectric material between them
Velocity Factor - rate of RF propagation in the cable compared to free space
Capacitance - two parallel conductors have capacitance
Center Conductor AWG - center conductor size and construction
Dielectric Type - dielectric material
Shield Type - shield construction
Jacket Material - jacket construction
Jacket Outer Diameter - dimension of the outer jacket
Maximum Voltage (RMS) - maximum voltage rating
Matched Loss - loss in decibels for different frequencies
Power Handling Capability - recommended maximum power


Consider the following factors when selecting coaxial cable:
Highest frequency (HF, VHF, UHF)
Loss (determined by frequency, cable type, and length)

Coaxial Cable Types (50 Ohm Impedance):
RG-174 - used for internal connections
RG-58 - comes with commercial antennas (high loss)
RG-8X - low power HF, short VHF feedlines
RG-8 - higher power or VHF/UHF feedlines
RG-213 - high power HF, short VHF feedlines
Heliax and Hardline - long feedlines at VHF or UHF

Examples: (remember, 3 dB is 50% power loss!)

Example 1: 144/440 MHz J-Pole
            50 ft feedline    100 ft feedline
RG-58       4.5 dB loss       9.7 dB loss
RG-8X       3.35 dB loss      6.7 dB loss
RG-213      2.35 dB loss      4.7 dB loss
RG-8        1.35 dB loss      2.7 dB loss

Example 2: HF Antenna (below 50 MHz)
RG-58 2.9 dB loss
RG-8X 1.5 dB loss
RG-213 1.3 dB loss
RG-8 0.8 dB loss
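The dB arithmetic behind these examples can be sketched in Python. Matched loss is usually specified per 100 feet and, ignoring connectors, scales roughly linearly with length; the 4.5 dB per 100 ft figure below is just an assumed illustrative value, not a datasheet number:

```python
# Coax loss arithmetic: scale a per-100-ft matched loss figure to an
# arbitrary length, then convert total dB to delivered power fraction.

def loss_db(loss_db_per_100ft, length_ft):
    """Total matched loss for a given cable length (connectors ignored)."""
    return loss_db_per_100ft * length_ft / 100.0


def power_delivered_fraction(total_loss_db):
    """Fraction of transmitter power that reaches the antenna."""
    return 10 ** (-total_loss_db / 10.0)


print(loss_db(4.5, 50))                          # 2.25 dB for 50 ft
print(round(power_delivered_fraction(3.0), 2))   # ~0.5: 3 dB is half power
```

Running the second function on the table values shows how punishing VHF/UHF loss is: a 9.7 dB feedline delivers barely a tenth of your power to the antenna.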

Connectors and Adaptors:
UHF (PL-259 type), most common general use -
Type N, low loss, used at VHF and UHF -
BNC, used on Hand-held radios and test equipment -


Use the best coaxial cable and connectors for the requirements -
Used coaxial cable may not be a bargain -
Mostly, you get what you pay for -


ARRL Handbook (2005): Nominal Characteristics of Commonly Used Transmission Lines (Table 21.1) page 21.2-21.3, also see the ARRL Antenna Book and other references

The Wireman, Inc. (coaxial cable, antenna wire, connectors, etc.)

Radioware (Amphenol PL-259 connectors, coaxial cable, etc.)

The Pacific Northwest VHF Society “Noise Floor” newsletter (Winter 2008 and Summer 2008) contains a two-part article on attaching Type N connectors to coax – must reading for anyone who wants to attach their own connectors!

Wednesday, May 6, 2009

Morse Code text session #6 050609

QST DE W7VHY = Earth is entering a solar wind stream, and the encounter could spark geomagnetic storms around the poles. AR DE W7VHY SK

Regulated Power Supplies, Part 2 by Lee Bond, N7KC

May 6, 2009 Educational Radio Net, PSRG 50th Session

This is the second of a two part series on the subject of regulated power supplies. The first part dealt with the, so called, linear power supply and the feedback scheme used for regulation. This final part will introduce the idea of using pulse width modulation for voltage regulation as well as describe the various schemes for handling DC to DC conversion using switching mode circuitry. Finally, we will be in a good position to contrast the linear approach and the switching mode approach to producing regulated DC voltages. Neither of these two series parts is intended to delve into the actual circuit particulars in depth but rather describe the underlying principle of the process.

Previously we talked about unregulated power supplies and various voltage sources which were lightly loaded and then heavily loaded. A voltage source was defined as a ‘perfect’ voltage generator in series with an impedance. An AA battery, for example, can be modeled as a perfect voltage generator in series with some internal impedance. How well the AA battery performs in the real world is related to how heavily it is loaded. If the load on a battery, or other voltage source, is constant and never changing then the regulation will be excellent provided that battery chemistry holds up. On the other hand, constantly changing loads on a power supply with a large internal impedance will cause the output voltage to change wildly and the load, possibly a radio transceiver, will be very unhappy.

Let's set up an example using a human as the ‘control’ element in a power supply scheme. Imagine yourself in front of a large panel with a big switch handle in front of you which controls a perfect switch with zero ‘on’ resistance and infinite ‘off’ resistance. This switch has a built in hold off function such that, when turned on and then off, it cannot be turned on again until a 10 second interval has elapsed. To your left is a large battery and you notice that it is labeled as a 12 volt unit. To your right is a load that requires 6 volts to operate properly. Above your switch is a voltmeter which is attached to the load. One last item is required before we turn this contraption on. Let's add a giant capacitor across the load. You are sitting in a chair in front of the switch and your task is to simply turn the switch on and off while watching the meter and, hopefully, manage to hold 6 volts across the load by momentarily connecting the large battery to your left. Notice that you have more voltage available than required at the load.

Ok, the moment of turn on has arrived, time zero if you please. You close the switch and the voltage across the load and energy storage capacitor starts from zero volts and, just as it passes 6 volts, you turn off the switch. Now only the load and energy storage capacitor are connected and the capacitor supplies voltage to the load. Given that you cannot reactivate the switch until 10 seconds have elapsed from first turn on, the output voltage starts to droop below the required 6 volts. After the hold off period you again close the switch and the output voltage rises above the required 6 volts a bit faster than the first time since the energy storage capacitor did not have to start from zero volts. Once again you open the switch when the output voltage passes the nominal 6 volt reading. After 10,000 operation cycles you are plenty tired of this process but you have learned something interesting. The average time that the switch is closed is 5 seconds.

Hummmm… you start to wonder if there is some relationship between the 5 second on time, the 10 second interval, the 12 volts input, and the 6 volts output. So, for fun, you deliberately reduce the ‘on’ time to 2.5 seconds and watch the output voltage meter. Sure enough, the voltage falls to 3 volts average. Wondering if this could work the other way too, you increase the switch ‘on’ time to 7.5 seconds and magically the output voltage increases to 9 volts average. In technical terms we are ‘modulating’ a pulse ‘on’ time, or ‘width’, in relation to a fixed interval, so complete control is possible using PWM, or Pulse Width Modulation, techniques.

You have deduced a very important principle. In a, so called, bang-bang circuit where the peak is constant and the switch is totally on or off the average output is always peak value times duty cycle. Clearly the peak voltage in our scenario was the 12 volt battery. The switch ‘on’ time compared to the interval is the duty cycle so 5 seconds divided by the 10 second interval is just ½ or 50% and the average output voltage is ½ times 12 volts or 6 volts. We have not talked about the ‘ripple’ in the output voltage. The average output is 6 volts but the actual voltage may be 7 maximum and 5 minimum depending on the size of the energy storage capacitor in relation to the load. So, even though we can produce an average of 6 volts the use may be limited by the ripple voltage present.
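The average-equals-peak-times-duty-cycle rule from the example can be written in a few lines of Python:

```python
# Bang-bang PWM rule: average output = peak voltage * duty cycle.

def pwm_average(peak_volts, on_time_s, interval_s):
    """Average output of an ideal on/off switch into a filtered load."""
    duty_cycle = on_time_s / interval_s
    return peak_volts * duty_cycle


print(pwm_average(12, 5.0, 10.0))   # 6.0 V at 50% duty cycle
print(pwm_average(12, 2.5, 10.0))   # 3.0 V at 25% duty cycle
print(pwm_average(12, 7.5, 10.0))   # 9.0 V at 75% duty cycle
```

These are exactly the three readings from the panel experiment, which is the whole control principle of a switching supply in one multiplication.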

Another good example of this is the microwave oven. When the magnetron is turned on it supplies a constant energy to the oven cavity. To reduce the heating function, or average oven power, the magnetron is duty cycled just as in our example above. If we measured temperature of the food very carefully then we would see a temperature ripple just as we see a voltage ripple in the above example.

Using a long 10 second interval as in the example without a near infinite energy storage capacitor across the load yields an intolerable voltage ripple on the output. Shortening the duty cycle interval by using a very high switching frequency will maintain the average but reduce the maximum and minimum variations from the average output voltage.

Thus far we have not talked about actually locking the output voltage to some reference as is normally done in a ‘regulated’ power supply. The scheme for both linears and switchers is much the same. Sample the output voltage and compare it to some internal reference voltage. If there is an error voltage in the switcher then control the ‘on’ pulse width with negative feedback to null the error. If there is an error voltage in the linear then use negative feedback to control the output voltage of a follower circuit to null the error.

Now we can contrast the difference between the linear power supply and the switching mode power supply. The linear uses a ‘lossy’ element in series with the load to control output voltage vs the switching power supply which uses the average equals peak times duty cycle to control the output voltage.

The beauty of switching mode operation is that modern MOSFET semiconductor switches have ‘on’ resistance in the milliohm range so they dissipate very little energy when ‘on’ and their ‘off’ resistance is near infinite and, clearly, do not dissipate anything when open. This is in direct contrast to the linear pass element which by design must dissipate energy to operate properly. As a result, the relative efficiencies are 40 to 60% for the linear vs 85 to 94% in a well designed switcher.
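To see why the efficiencies differ so much, compare the dissipation in the two control elements. The sketch below uses assumed, illustrative values (13.8 volt input, 6 volt output, 5 amp load, 10 milliohm MOSFET ‘on’ resistance):

```python
# Compare heat wasted in a linear pass element vs a MOSFET switch.
V_IN, V_OUT, I_LOAD = 13.8, 6.0, 5.0   # volts, volts, amps (assumed)
R_ON = 0.010                           # MOSFET 'on' resistance in ohms (assumed)

# Linear: the full load current drops (V_IN - V_OUT) across the pass element.
linear_dissipation = (V_IN - V_OUT) * I_LOAD

# Switcher: while conducting, only I^2 * R_on is lost in the switch
# (and nothing at all while it is open).
switch_dissipation = I_LOAD ** 2 * R_ON

print(round(linear_dissipation, 1))   # 39.0 W of heat in the linear
print(round(switch_dissipation, 2))   # 0.25 W in the closed switch
```

Real switchers also lose energy in the diode, inductor, and switching transitions, which is why they land around 85 to 94% rather than near 100%, but the pass element comparison shows where the linear supply's heat comes from.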

The 10 second interval I used in the example was only to make it obvious and easy to compute the duty cycle. In reality the switching interval is normally very short since the switching frequency is well above 20 kHz so that acoustic output is inaudible to humans. It is not unusual to see switching frequencies in the 100 kHz range or higher with corresponding intervals of 10 microseconds or less. There are some huge benefits to high frequency operation such as very small filter elements in both value and physical size plus ferrite core inductors and transformers which weigh a mere fraction of their iron core cousins that are used in the typical 60 Hz linear supply. High frequency ripple effects can be tamed with very modest components.

Both the linear supply and the switching supply must have a reservoir of energy preceding the control element which exceeds the requirements of the load. For the linear device this energy supply will be at a voltage greater than the desired output voltage. For the switcher device this is not necessarily so given that some circuit configurations actually boost the output voltage above the input voltage.

There is great flexibility in design when using switching mode devices since they can ‘buck’ or reduce the input voltage, ‘boost’ or increase the output voltage above the input, both ‘buck-boost’ with the same circuit, invert the input voltage, or forward transfer much the same as the linear circuit. When the boost or buck type of circuitry is used the output is not isolated from the input however the forward transfer type of circuit uses a ferrite coupling transformer so offers complete isolation between input and output.

As mentioned in part 1 the major downside of the switching mode device is the possible high frequency noise generated by the very fast current transitions in the circuitry. This is in contrast to the linear which normally operates at 120 Hz from the AC line. Another downside to consider is the possibility of the output voltage going to maximum input voltage in the event of a switching fault. The major upside of the switching mode device is the very high efficiency obtained with light weight components.

This concludes the setup discussion for regulated power supplies part two. Are there any questions or comments with regard to tonight's discussion topic?

This is N7KC for the Wednesday night Educational Radio Net

Friday, May 1, 2009

Morse Code text session #5 042909

QST DE W7VHY = With a waistline one hundred times wider than Earth's, the sun is so big and ponderous, you might not think it could move very quickly. AR DE W7VHY SK