December 17, 2008 Educational Radio Net, PSRG 30th session, Lee Bond N7KC
The subject of tonight’s discussion material is amplitude modulation and the fundamentals thereof. This is the first of a three-part series dealing with the process of transmitting voice-band frequencies via radio. My next session will focus on single sideband processes and the third session will focus on frequency modulation.
If one looks at the bandwidth required to transmit various signals it is immediately apparent that three designations will suffice to describe the bandwidth required to do the job. The first segment is very narrow bandwidth and this includes CW and several of the popular digital modes. The second definable segment would be moderate bandwidth and this includes voice transmissions, facsimile, and slow scan television. The third segment is the very wide bandwidth signals such as fast scan television.
For this session we are interested in moderate bandwidth voice transmissions and, in particular, the amplitude modulation approach to transmitting voice using radio techniques. As a practical matter we are interested in somehow shifting voice range frequencies to a range more suitable to fit our antennas since the antenna is really where the ‘rubber hits the road’. We will assume that our antennas are cut to fit whatever amateur band we choose to use.
Let’s define voice range frequencies for radio purposes as those starting at 20 hertz and extending to 2500 hertz. The so-called high-fidelity range extends to 20,000 hertz, but most of the voice energy important for communications is contained in the region under 3000 hertz. The ratio of the high voice frequency to the low frequency is about 125:1. In principle one can transmit audio frequencies in the same manner as ‘radio’ frequencies, but the antenna dimensions would be enormous. For example, assuming standard propagation velocity, a half wavelength at 20 hertz is about 4600 miles and a half wavelength at 2500 hertz is about 37 miles. If you were to cut the antenna for midrange, then it would be seriously de-tuned at either end frequency. So what to do?
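As a quick check on those figures, the half wavelength is simply half the propagation velocity divided by the frequency:

$$\frac{\lambda}{2} = \frac{c}{2f}, \qquad \frac{186{,}000\ \text{mi/s}}{2 \times 20\ \text{Hz}} \approx 4650\ \text{mi}, \qquad \frac{186{,}000\ \text{mi/s}}{2 \times 2500\ \text{Hz}} \approx 37\ \text{mi}$$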
Mathematics to the rescue. Everyone has heard the rule that two frequencies, if mixed, will produce sum and difference frequencies, and that the resulting spectrum will include the original two frequencies as well. This ‘mixing’ behavior is predicted by the trigonometric product identities, and the mathematics is valid for audio frequencies right up through radio frequencies. Let’s play with some numbers to get a feel for how this mixing business works.
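For reference, the product identity behind this mixing behavior can be written as

$$\cos(2\pi f_1 t)\,\cos(2\pi f_2 t) = \tfrac{1}{2}\cos\big(2\pi (f_1 - f_2)\,t\big) + \tfrac{1}{2}\cos\big(2\pi (f_1 + f_2)\,t\big)$$

An ideal multiplier produces only the sum and difference terms; a practical nonlinear mixer also passes some of the original two frequencies, which is why all four show up in the spectra described below.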
First, however, we want to appreciate a couple of terms often used to describe the behavior of circuits: linear and nonlinear. A linear circuit processes signals in a straight-line fashion. For example, if you double the signal feeding a linear amplifier circuit, then the output signal will precisely double. There is no perfectly linear active electrical circuit, but it is possible to come very close. A perfectly linear amplifier will process multiple signals without any interaction between them. One simple measure of linearity is harmonic distortion. If you drive an amplifier with a single perfect sine wave, you would expect a perfectly linear amplifier circuit to present only a single output frequency. If a spectrum analyzer shows any energy at multiples of the driving frequency, then these added frequencies are the result of harmonic distortion caused by the amplifier, and harmonic distortion is an artifact of nonlinear performance.
On the other hand, there are circuits which have been deliberately designed to be nonlinear. If a nonlinear circuit is used as a ‘mixer’, then you can assume that at least two frequencies are being processed by this circuit. Mixing, in actuality, is really multiplication, the product of at least two frequencies, as described by the product identities in trigonometry.
Now, with that aside, let’s get back to playing with our numbers. Assume that we are feeding two audio frequencies into a nonlinear ‘mixer’. One frequency is 1000 hertz and the other is just twice the first, or 2000 hertz. The sum output is 3000 hertz and the difference is 1000 hertz, which is the same as one of the driving frequencies.
Now let’s mix another pair, this time 1000 hertz and 3000 hertz. This time the sum is 4000 hertz and the difference is 2000 hertz. A look at the spectrum would show four frequencies, namely 1 kHz, 2 kHz, 3 kHz, and 4 kHz.
In like manner let’s mix 1000 hertz and 100,000 hertz. The sum is 101,000 hertz and the difference is 99,000 hertz. The spectrum shows our original ‘mixing’ frequencies, 1 kHz and 100 kHz, and the product frequencies of 101 kHz and 99 kHz. If the audio were allowed to range over the full 20 to 2500 hertz voice band, the maximum spread between the highest upper sideband frequency and the lowest lower sideband frequency would be just 5000 hertz, so the bandwidth is about 5% of the 100 kHz carrier.
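Here is a minimal numerical sketch of this example (an added illustration, assuming NumPy is available). A square-law term stands in for a generic nonlinear mixer, so the computed spectrum shows the original 1 kHz and 100 kHz tones, the 99 kHz and 101 kHz products, and also second harmonics that a practical mixer would filter off:

```python
# Sketch of two-tone mixing through a simple square-law nonlinearity.
import numpy as np

fs = 1_000_000                         # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)         # 20 ms of signal (50 Hz bin spacing)
f1, f2 = 1_000, 100_000                # the two mixing frequencies, Hz

v_in = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
v_out = v_in + 0.5 * v_in**2           # nonlinear element (square-law)

spectrum = np.abs(np.fft.rfft(v_out)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Report every spectral line above a small threshold (skip DC).
for f, a in zip(freqs, spectrum):
    if f > 0 and a > 0.01:
        print(f"{f/1000:8.1f} kHz  relative amplitude {a:.3f}")
# Expect lines near 1, 2, 99, 100, 101, and 200 kHz: the originals,
# their second harmonics, and the difference/sum products.
```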
Finally, let’s mix 1000 hertz and 10,000,000 hertz, or 10 MHz. This time the sum frequency is 10,001,000 hertz and the difference is 9,999,000 hertz. The spectrum shows our mixing frequencies of 1 kHz and 10 MHz plus the product frequencies of 10.001 MHz and 9.999 MHz. The audio range frequency, 1000 hertz, could be any frequency between 20 hertz and 2500 hertz and would produce mixing products with the ‘carrier’ frequency (10 MHz) that extend from 9.9975 MHz to 9.99998 MHz and from 10.00002 MHz to 10.0025 MHz. The so-called carrier frequency has energy below it, called the lower sideband energy, and energy above it, called the upper sideband energy. The maximum difference between the highest upper sideband frequency and the lowest lower sideband frequency is just 5000 hertz, so the bandwidth is only 0.05% of the carrier frequency. This indicates that both the carrier frequency and the sideband frequencies will ‘fit’ our antennas nicely in whatever band we choose to transmit within. The original audio frequencies in the range of 20 to 2500 hertz are so far removed from the carrier energy that they are easily filtered out of the final product.
Voice modulation is the process of imprinting intelligent baseband information upon a signal suitable for radio transmission. In the case of amplitude modulation the baseband voice information causes the instantaneous carrier amplitude to change and this change can be detected at great distance to reconstruct the original baseband voice information.
Nothing is free and so it is with amplitude modulation. To 100% modulate a 1000 watt carrier using AM it is necessary to provide 500 watts of audio power. The 500 watts ends up being split between the upper sideband and the lower sideband and, spectrally, the carrier amplitude remains constant. Amplitude modulation is inefficient from the power standpoint since the full carrier power is transmitted but this power contributes nothing to the impressed intelligence. Amplitude modulation is also inefficient from the bandwidth standpoint since identical upper and lower sideband information is transmitted requiring a bandwidth twice as large as the modulating signal.
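For a single sinusoidal modulating tone with modulation index m, the standard AM power relations are

$$P_{\text{total}} = P_c\left(1 + \frac{m^2}{2}\right), \qquad P_{\text{each sideband}} = \frac{m^2}{4}\,P_c$$

so at 100% modulation (m = 1) a 1000 watt carrier is accompanied by 250 watts in each sideband, 500 watts of sideband power in total, while the carrier itself still accounts for 1000 watts that convey no information.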
Recovering the impressed information from an AM signal can be as simple as detecting the so-called envelope of the signal. This amounts to rectifying the signal and filtering out the carrier. What remains is just the analog of the original modulating signal. This is the precise method used by simple ‘crystal’ sets, which are still popular with experimenters. One particularly nasty artifact of operating AM is the heterodyning of adjacent carriers. Radio operators put up with this howling until improved techniques made AM obsolete.
In summary, amplitude modulation, or AM, is a very simple but inefficient means of impressing information on a ‘carrier’ signal. The AM process is very straightforward and easy to understand but lacks the elegance of improved methods of communication.
This concludes the set up for the discussion of AM. Are there any questions or comments?
This is N7KC for the Educational Radio Net
Wednesday, December 10, 2008
EmComm, Brian Daly, WB7OML, week 29
Amateur Radio Emergency Communications, or “Emcomm”
Brian Daly, WB7OML
Let’s start out with a definition: what is a communication emergency? According to the ARRL Level 1 course, a communication emergency exists when a critical communication failure puts the public at risk.
What are some circumstances that can overload or damage critical day-to-day communication systems?
- Storm knocks down telephone lines or radio towers
- A massive increase in the use of a communication system that causes it to become overloaded
- Failure of a key component in a system
- Earthquake
- Volcano
What are some potential Communications Emergencies in Seattle?
Can a communication emergency occur in “normal” circumstances? Yes, definitely, some examples being:
- Underground cables being dug up
- Fires in telephone equipment buildings
- Car crash knocks down a key telephone pole
- 9-1-1 systems can fail
- Hospital systems can fail
So what makes a good emcomm volunteer? Amateur emcomm volunteers come from a variety of backgrounds with a range of skills and experience. Emcomm volunteers share some common characteristics: the desire to help others without personal gain, the ability to work as a member of a team, and the willingness to take direction from others. An emergency situation brings a lot of stress and pressure, so an emcomm volunteer also needs the ability to think and act quickly.
Where do you fit in? We amateurs bring the equipment, skills, and frequencies necessary to create emergency communications networks under poor conditions. We have licenses; we have pre-authorization for national and international communication. Many of the skills we bring to emcomm are the same things we do on a day-to-day basis; other skills are specific to emcomm and need to be learned through courses like the ARRL ARECC Level 1 and through drills and exercises.
Radio equipment, frequencies and basic radio skills are not enough. Without specific emergency communication skills, you can easily become part of the problem.
It is also important to know your limits of responsibility as an emergency communicator, and what an emcomm volunteer is not; we need to know where to draw the line and what our limitations are. We are not “first responders”, and we generally do not have authority: we don’t make decisions for our served agencies, nor do we place demands on them. We can make some decisions, such as whether to participate at all, and decisions affecting our own life and safety. In general we are not in charge; we are there to fulfill the needs of the served agency.
You cannot “do it all”. If the served agency runs short of specialized help, it is not your job to fill that gap, especially if you are not trained for it. But you can fill an urgent need or perform jobs where communication is an integral part, if you are qualified.
And remember, leave your ego at the door!
There are differences between “day-to-day” communication and “emergency communication”. First and foremost, in day-to-day communications there is no real pressure to “get the message through”; no one’s life depends on it, and you do things at your leisure. Emcomm can involve both amateurs and non-amateurs, it happens in real time, there is a lot going on simultaneously, perhaps on several nets, there may be little or no warning, you may have to set up and be operational anywhere in a short period of time, and there is no schedule. Public service events may come close to emcomm, since they can be “planned disasters”: about the only known piece is the schedule!
Your job as an emcomm volunteer is simple: communicating is job #1. Notice it is “communicating” and not “amateur radio”; there is a significant difference. Our job is to get the message through regardless of how that happens. We as amateurs have many tools available for this job, and amateur radio is just one of those tools. Fax machines, Internet email, cell phones, landline phones, amateur radio, CB radio, FRS radio, served agency radio: all of these are at our disposal and should be considered. We bring communicating skills to the table, not just amateur radio. There are stories of amateurs who passed long supply lists over the radio, tying up repeaters or frequencies, while sitting next to a working fax machine. It is not our job to “show off” our radios; it is our job to communicate. Just think about the best and fastest way to send the message. Of course, when all else fails we do have amateur radio.
So what happens during a communication emergency? Some scenarios will not require immediate action, for example during a “watch” or “warning” for a severe storm. This is the period to make sure your go-kit is together and you are ready to go if called. Other scenarios will happen fast and require an immediate response, for example an earthquake. Once the need for emcomm is identified, the served agency will put out the call for amateurs to help. Most emcomm groups have defined procedures for activation, such as a “rapid response team”. Nets will be established to handle resources and logistics, such as processing and directing incoming volunteers. Once these operations begin, things can happen quickly: message traffic grows and confusion sets in. Do we have relief operators? Do we have food and water? Where will the volunteers sleep? Do we have batteries, fuel, and other logistical needs covered? Communication assignments need to be made: shelters, gathering damage reports, handling supply requests and other logistical needs of the served agency. Nets will be established, rearranged, and disassembled as the needs arise, so volunteers need to remain flexible. Finally, the demands of the emcomm effort will decrease, nets can be closed, and volunteers released.
But the emcomm event does not end when the last net is shut down. This starts the after-action report period, which will help improve the response next time around.
There are many additional skills to learn to help you become a successful emcomm volunteer – knowing who your served agency is, their organization, basic communication skills, message handling, net operating, and of course, personal safety, survival and health considerations. We will cover more of these topics on this net in the coming months. Also, the ARRL Amateur Radio Emergency Communication Course Level 1 is another opportunity to learn these skills.
Wednesday, December 3, 2008
BALUNS, Jim K7WA, No. 28
BALUNS
December 3, 2008 – Educational Radio Net
Jim Hadlock K7WA
What does a balun do?
What happens if you don't use one?
Bal-Un is a term formed from the words balanced and unbalanced. It refers to a device used to couple an Unbalanced transmission line to a Balanced load. In the real world, we use a balun to couple a coaxial transmission line to a balanced antenna, such as a dipole.
Coaxial transmission lines are commonly used to connect our transceivers to antennas. Coax comes in several sizes and types for different applications. It consists of an inner conductor with an insulated covering (dielectric), which is then covered with a braided wire sheathing (shield). The sheathing is covered with a flexible outer jacket. Coax is weatherproof and may be buried underground, run inside a metal mast or taped to a tower without harmful effects. At the transceiver, the center conductor is connected to the transmitter output (or receiver input), and the shield is connected to the chassis. This arrangement works well with an unbalanced load, such as a vertical monopole antenna fed against a ground plane or radials. However, when coax is used to feed a balanced load, such as a dipole antenna, some provision should be made for converting from the unbalanced transmission line to the balanced load. Otherwise, RF currents will flow on the outer conductor of the coax, compromising the effectiveness of the antenna.
To understand this problem, think of a coaxial transmission line as a wire centered inside a metal pipe. When we connect the coaxial transmission line to our transmitter, the RF current flows on the center wire and on the inside surface of the pipe. This is due to what's called the "skin effect": RF currents flow in a thin layer on the surface of a conductor, and the layer grows thinner as the frequency rises. If we connect the other end of the coaxial transmission line to a balanced antenna, such as a dipole, RF current from the center wire flows to one side of the antenna. The current from the inside surface of the pipe, however, sees two conductors: the other side of the antenna and the outside surface of the pipe. Current flowing on the outside of the pipe is subtracted from the current that should be flowing on the antenna, and it sets up standing waves of voltage and current on the outside surface of the pipe all the way back down to the transmitter, where it is grounded. To return to our coax-fed dipole example, RF current on the outside surface of the coaxial transmission line shield will distort the radiation pattern of the antenna and detract from its effectiveness. It may also contribute to television interference.
A properly connected balun will reduce or eliminate the RF current flow on the outside surface of the coaxial transmission line shield. While the most common use of a balun is at the feedpoint of a balanced antenna, they are also used at the output of an antenna tuner to feed a balanced transmission line (Twin Lead) and even part way down a feedline to convert from balanced transmission line to coaxial transmission line (as in the G5RV antenna).
There are several types of baluns available to radio amateurs and described in the literature. Let's begin with the Current Balun (also called the Choke Balun). Current Baluns have become popular for application in the high frequency range (1.8 MHz to 30 MHz) because they are simple, cheap, and effective. In its simplest form, a Current Balun consists of a number of turns of coaxial cable wound into a close coil at the feedpoint of the antenna. The size of the coil is determined by the operating frequency. For example, the installation directions for the Cushcraft A3S tri-band yagi specify eight turns of RG8/U coaxial cable with a six inch diameter. This coil is a high impedance RF choke at the operating frequency of the antenna and prevents RF current from flowing on the outside of the coaxial transmission line shield. Another approach to the Current Balun was introduced by Walter Maxwell, W2DU. This involves slipping a stack of high-permeability ferrite beads over the coaxial transmission line at the feedpoint of the antenna. The stack of ferrite beads creates a high impedance effectively suppressing any RF current from flowing down the outside surface of the transmission line. Current Baluns and ferrite bead kits are available from many sources.
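To get a feel for why such a coil works, here is a rough numerical sketch (an added illustration, not from the article). It estimates the inductance of an N-turn circular loop of coax with the common single-loop approximation L ≈ μ0·N²·R·[ln(8R/a) − 2] and then its reactance at a few HF frequencies. The eight-turn, six-inch-diameter figures come from the Cushcraft example above; the coax outer radius is an assumed value, and inter-turn capacitance and self-resonance are ignored, so treat the results as order-of-magnitude only.

```python
# Rough estimate of the choke reactance of a coiled-coax current balun.
import math

MU0 = 4 * math.pi * 1e-7      # permeability of free space, H/m

def loop_inductance(turns, radius_m, wire_radius_m):
    """Approximate inductance of a close-wound multi-turn circular loop."""
    return MU0 * turns**2 * radius_m * (math.log(8 * radius_m / wire_radius_m) - 2)

# 8 turns, 6 inch (0.152 m) diameter coil, ~0.4 inch OD coax (assumed).
L = loop_inductance(turns=8, radius_m=0.0762, wire_radius_m=0.005)
print(f"estimated inductance: {L * 1e6:.1f} uH")

for f_mhz in (14, 21, 28):
    x = 2 * math.pi * f_mhz * 1e6 * L    # X_L = 2*pi*f*L
    print(f"{f_mhz} MHz: choke reactance roughly {x:.0f} ohms")
```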
Another approach is the Voltage Balun as described by Jerry Sevick, W2FMI, and others. This design uses inductors to produce equal, opposite phase voltages into the two resistances, or halves of the antenna. An additional feature of the Voltage Balun is that, by using a combination of inductors as a broad-band RF transformer, it can accommodate impedance conversion in addition to balancing the RF voltages. Typical impedance conversion is 4:1, although Sevick describes transmission line transformers with many other ratios in his classic book: Understanding, Building, and Using Baluns and Ununs.
A third balun technique, most often used at VHF and UHF, is the Coaxial Balun made from an electrical half wavelength of coaxial transmission line. The half-wave section drives the two halves of the antenna with equal and opposite voltages, keeping RF current off the outer shield of the feedline, and it gives a 4:1 impedance step-up.
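For a sense of the dimensions involved, the short sketch below (an added illustration, not from the article) computes the physical length of an electrical half wavelength of coax for a given frequency; the 0.66 velocity factor is an assumption typical of solid-dielectric cable.

```python
# Physical length of an electrical half wavelength of coax,
# as used in the 4:1 half-wave coaxial balun described above.
C = 299_792_458.0   # speed of light, m/s

def half_wave_length_m(freq_hz, velocity_factor=0.66):
    """Physical length (meters) of one electrical half wavelength of cable."""
    return 0.5 * (C / freq_hz) * velocity_factor

for f in (50e6, 146e6, 446e6):
    length = half_wave_length_m(f)
    print(f"{f / 1e6:>5.0f} MHz: {length:.3f} m ({length * 39.37:.1f} in)")
```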
While I have described how a balun improves the effectiveness of a coax fed balanced antenna, it also has other uses. Consider a vertical antenna with elevated radials. The outer surface of the coaxial transmission line shield will "look" to the antenna like another radial. A Current Balun at the feedpoint of the vertical will prevent RF current from flowing on the feedline. According to author John Devoldere, ON4UN, in Low-Band DXing: "Is it harmful to put a current balun on all the coaxial antenna feed lines for all your antennas? Not at all. If the feed point is symmetric, there will be no current flowing and the beads will do no harm. As a matter of fact they may help reduce unwanted coupling from antennas into feed lines of other nearby antennas."
Baluns are an effective means of preventing unwanted RF current on the outer shield of coaxial feedlines from distorting antenna patterns, as well as reducing TVI (radiation coupling into nearby television sets, house wiring, etc.) and RF in the shack.
References:
An Analysis of the Balun, by Bruce A. Eggers WA9NEW, ARRL Technical Information Service: www.arrl.org/tis/info/pdf/9409061.pdf
Some Aspects of the Balun Problem, by Walter Maxwell W2DU:
www.w2du.com/r2ch21.pdf
Baluns: What They Do and How They Do It, by Roy W. Lewallen W7EL:
www.eznec.com/Amateur/Articles/Baluns.pdf
Understanding, Building, and Using Baluns and Ununs, by Jerry Sevick W2FMI, CQ Communications, Inc.
Low-Band DXing (4th Edition), by John Devoldere ON4UN, The ARRL, Inc.
The ARRL Antenna Book (21st Edition), The ARRL, Inc.
The ARRL Handbook, The ARRL, Inc.
Palomar Engineers (1:1 Current Balun Kit): www.palomar-engineers.com
The Radio Works (Baluns, Coax, Antenna Parts, etc.): www.radioworks.com
Tuesday, November 25, 2008
Open Mic Night, Bob and Lee, Session 27
Because it is the Wednesday before Thanksgiving we will be doing an informal session this week. There is no prepared topic. Come with your questions, tips, stories, etc.
If you would prefer, you can add your question in the comments section and we will address it.
Wednesday, November 19, 2008
Electrical Resonance
November 19, 2008 Educational Radio Net, PSRG 26th session, Lee Bond N7KC
The impedance series is now history. During the course of that 13 week series we looked at several of the most fundamental ideas in the physics of electrical phenomena and, hopefully, gained some practical knowledge of how these ideas link together to form a basis for our understanding of all things electrical. Let's exercise some of this earlier impedance series material and see how it can be applied to solve practical problems which are routinely encountered on the bench. The first study examined the potentiometer or "pot" and its behavior when used as a voltage divider. The second study examined how energy is moved from a source to a load and also considered the effect of a transmission line in this process. This third study will look at the phenomenon of resonance in both the mechanical and electrical worlds and extend the idea to antennas.
Let's consider mechanical systems first to get an intuitive feel for resonance.
We have all experienced autos which produce nasty sounds at certain speeds, or engines where certain parts tend to vibrate depending on engine rpm. Suppose that we have an engine with some sort of attached bracket and the engine is at idle. If we very slowly advance the engine throttle to increase rpm, there may be an engine rotational speed where the bracket starts to vibrate very strongly. If we continue to advance the throttle, the bracket vibration diminishes and disappears altogether. The mechanical configuration of the bracket has a "natural" frequency, which is the frequency of vibration that develops when it is excited by the engine's complicated vibrations.
Another demonstration example is a wine goblet shattering when excited by acoustic energy which matches the natural frequency of the goblet. Finally, let's consider the pendulum in a clock. If the clock is unwound and the pendulum is set in motion, we know that it will oscillate back and forth with diminishing amplitude until it stops. The pendulum has a natural frequency primarily determined by its length. If the clock is wound, however, there is a bit of clock mechanism which "taps" the pendulum very slightly at the correct moment to keep the pendulum swinging at constant amplitude, at the natural frequency, for as long as the energy to produce the tap is present.
All of these examples show that very small forcing energies at the natural frequency of a mechanical system may cause dramatic vibration amplitudes due to resonance. Resonance frequency is where the forcing frequency matches the natural frequency of a system.
The situation in the electrical world is much the same as in the mechanical. Small signals (forcing energy) can appear dramatically larger due to electrical circuit resonance. Such circuits are always built from resistive, inductive, and capacitive elements. The resistive element stands alone in being immune to the effects of forcing frequencies, since resistance converts energy directly to heat and stores nothing. In contrast, inductive and capacitive elements do not dissipate energy; rather, they store energy in the form of magnetic or electric fields during one portion of the cycle and return it to the circuit during the next.
Inductance associated with a forcing frequency produces a reactance which increases with frequency, whereas capacitance associated with a forcing frequency produces a reactance which decreases with frequency. Therefore, given an assembly of resistance, inductive reactance, and capacitive reactance, there will be some specific frequency at which the two reactances are equal in magnitude and, since they carry opposite signs, cancel. Electrical resonance means that the net reactance is zero at that particular frequency. At this resonant frequency the circuit impedance is purely resistive.
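In symbols, the two reactances and the frequency at which they cancel are

$$X_L = 2\pi f L, \qquad X_C = \frac{1}{2\pi f C}, \qquad X_L = X_C \;\Rightarrow\; f_0 = \frac{1}{2\pi\sqrt{LC}}$$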
One good example of electrical resonance is the tuning circuit in a typical radio receiver. The broadcast band, for instance, contains various amounts of energy from 550 kHz to 1500 kHz. The radio needs to respond to a specific station located in this continuum of signals. Using a tunable parallel resonant circuit allows one to slide across the band in search of the desired signal. Very slight amounts of received energy from the antenna will excite the resonant circuit and produce signal levels much higher than the excitation level. It is important to note that the incoming signal energy is not increased by resonance; rather, the signal amplitude is increased, and that larger signal is then amplified by a suitable active circuit.
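As a numerical illustration (the 240 microhenry coil is an assumed, typical value, not from the talk), the capacitance needed to resonate a parallel LC circuit across that band follows directly from the resonance formula above:

```python
# Capacitance needed to tune a parallel LC circuit across the broadcast band,
# assuming a fixed (hypothetical) 240 uH coil.
import math

L = 240e-6   # henries (assumed typical coil value)

def cap_for_resonance(freq_hz, inductance_h):
    """C = 1 / ((2*pi*f)^2 * L)"""
    return 1.0 / ((2 * math.pi * freq_hz) ** 2 * inductance_h)

for f in (550e3, 1000e3, 1500e3):
    c = cap_for_resonance(f, L)
    print(f"{f / 1e3:6.0f} kHz -> {c * 1e12:6.1f} pF")
# A single variable capacitor of a few hundred pF covers the whole band.
```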
Trapping circuits can be constructed from resistive, capacitive, and inductive elements as well. For this function the elements should be wired in series. At the resonant frequency the net reactance will be zero, leaving only the resistance as the circuit element. At frequencies off resonance the circuit impedance will always be larger than at resonance, due to the combination of the series resistance and the predominant reactance.
Another example of impedance changing with frequency is the antenna. Let's consider a simple dipole cut to the center of any band. If you were to connect an antenna analyzer to the dipole and sweep from the lower to the upper band edge, you would see the antenna's feed point impedance, the combination of resistance and reactance, dip at the cut frequency and show only a resistive component. This is the radiation resistance of the antenna at resonance. The resonant frequency is the frequency where the net reactance is zero.
Given that the antenna inductance and capacitance values are fixed, at frequencies above the resonance point the antenna is too long, inductive reactance increases, and the antenna impedance increases. Conversely, at frequencies below the resonance point the antenna is too short, capacitive reactance increases and the antenna impedance increases. Since maximum power transfer occurs when transmission line characteristic impedance matches the radiation resistance of the antenna, the trick is to adjust antenna elements such that, at the desired operating frequency, the net reactance is zero and maximum radio frequency current flows in the antenna elements. Since the dipole has a feed point impedance of about 72 ohms at resonance, driving it directly with 50 ohm coaxial line and a 1:1 balun would yield a VSWR of 72/50 or 1.44:1 minimum. Various matching schemes are available to adjust the feed point impedance to match the transmission line.
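A quick sketch of the arithmetic behind that 1.44:1 figure, using the usual reflection-coefficient definition of VSWR for a purely resistive load:

```python
# VSWR of a resistive load on a line of given characteristic impedance.
def vswr(z_load_ohms, z0_ohms=50.0):
    gamma = abs(z_load_ohms - z0_ohms) / (z_load_ohms + z0_ohms)  # reflection coefficient
    return (1 + gamma) / (1 - gamma)

print(f"72 ohm dipole on 50 ohm coax: VSWR = {vswr(72):.2f}:1")   # about 1.44:1
```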
In certain circumstances electrical resonance can be a nuisance. For example, consider the guy wires associated with a tower installation. Wires similar in length to the radiating elements can seriously detract from a desired radiation pattern. A careful look may reveal that many compression-type egg insulators are used to break up the total length of the guys so that no single guy length can produce harmonically related radiation in concert with the actual antenna.
In summary, resonance can be a help or hindrance. Electrical resonance is a fundamental concept of electrical theory and, in practical terms, makes our radio endeavors possible.
This concludes the set up for the discussion of resonance. Are there any questions or comments?
This is N7KC for the Educational Radio Net
Tuesday, November 11, 2008
Log Periodic Dipole Antennas, Bob, Session 25
Tonight we will talk about another class of antennas, log periodic antennas. There are different forms of log periodic antennas but we will talk about the most common one, the Log Periodic Dipole Array (LPDA).
This antenna looks and acts similar to the Yagi but unlike the Yagi it covers a wide range of frequencies. This is the LPDA's defining characteristic. A typical design will cover a range of frequencies where the highest frequency is double the lowest. For example you could have one antenna that covered 14 MHz to 30 MHz with very good gain, front-to-back ratio, and SWR figures over the entire range. You are not limited to 2:1 frequency coverage; in fact you are only limited by the ability to physically construct the antenna and use it.
GENERAL DESCRIPTION
The log periodic antenna looks somewhat like a Yagi but, unlike the Yagi, the lengths of the parallel elements vary, growing progressively shorter toward the front, so that the element tips lie along straight lines. If you imagine lines running along the tips at both ends of the elements, from the largest element to the smallest, and extend them beyond the end of the antenna until they meet, they form an angle with the boom as its bisector. The elements are connected in a criss-cross pattern so that, if you are looking down from the top of the antenna, the smallest element on the left side is connected to the next larger element on the right side, and vice versa. This criss-crossing continues through all of the elements. The antenna is fed at the small end with a balanced signal.
BASIC THEORY
I found a greatly simplified explanation of how this antenna works at radio-electronics.com; the link is at the bottom of this post. Let's say we are feeding our antenna with a signal about in the middle of its range. Because of the criss-cross arrangement, most adjacent element pairs cancel each other. But for the two elements near the middle of the array, which are closest to resonant length, the spacing between them is such that the wave is shifted roughly 180 degrees by the time it reaches the neighboring element. That propagation delay, combined with the criss-cross feed, causes the two elements to reinforce each other.
One other point: the smaller elements, which don't contribute much to the radiation, act like the shorter director elements of a Yagi, while the longer elements act like reflectors. This creates a radiation pattern much like a Yagi's.
As you tune up and down the usable frequency range, you find that at the higher frequencies the radiation primarily comes from the smaller elements and at the lower frequencies, the larger elements are the ones that radiate.
Of course it's never quite this simple. Depending on design you may have many of the elements contributing to the radiation.
DESIGN CONSIDERATIONS
Because the imaginary line along the tips is straight, and the extended lines on each side form an angle, there are some relationships that have to hold. Hopefully it is obvious that if you move twice as far away from the point where the lines meet (the vertex), then the length of the line going across (the element length) will also double. This leads to the rule that the ratio of the lengths of successive elements must equal the ratio of their distances from the vertex. This ratio is given the Greek letter tau, and it also sets the relative spacing between elements. In our example of doubling the distance, tau would equal 0.5. To make an effective LPDA you want a tau that is as close to 1.0 as is feasible; a tau of exactly 1 would produce parallel lines, which wouldn't work. To cover a range of double the lowest frequency you actually need a change of element length that is somewhat more than double. If tau is very close to 1, you will need many elements and a very long boom to achieve that. These are the trade-offs in building an LPDA.
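Here is a small sketch of the scaling just described (an added illustration; a real LPDA design also involves a relative spacing constant and the boom-length trade-offs covered in the Antenna Book). It counts how many elements a given tau requires before the longest element is at least twice the shortest:

```python
# Successive LPDA element lengths scale by tau, as do their distances from
# the vertex.  Count the elements needed to span a 2:1 length range.
import math

def elements_for_ratio(tau, length_ratio=2.0):
    """Smallest n with tau**(n-1) <= 1/length_ratio."""
    return math.ceil(1 + math.log(1.0 / length_ratio) / math.log(tau))

for tau in (0.80, 0.90, 0.95):
    print(f"tau = {tau:.2f}: {elements_for_ratio(tau)} elements for a 2:1 span")
# Higher tau (closer to 1) means better performance but more elements and a
# longer boom; real designs add extra elements beyond this 2:1 span because
# the active region needs margin at both band edges.
```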
BEYOND THE BASICS
There are ways to add true parasitic elements to the LPDA to improve performance. This is beyond the scope of this discussion and can be found in the Antenna Book.
As usual, I want to point you to the ARRL Antenna Book for an excellent in-depth discussion of building real-world LPDAs. Also, as a bonus, you get an LPDA design program for the PC when you buy the Antenna Book.
Log Periodic Antennas on radio-electronics.com
This antenna looks and acts similar to the Yagi but unlike the Yagi it covers a wide range of frequencies. This is the LPDA's defining characteristic. A typical design will cover a range of frequencies where the highest frequency is double the lowest. For example you could have one antenna that covered 14 MHz to 30 MHz with very good gain, front to back, and SWR figures over the entire range. You are not limited to the 2:1 frequency coverage. In fact you are only limited by the ability to physically construct the antenna and use it.
GENERAL DESCRIPTION
The log periodic antenna looks somewhat like a Yagi but, unlike the Yagi, the length of the parallel elements vary so that the tips form a straight line that gets progressively smaller. If you imagine lines running along the tips of both ends of the elements from the largest element to the smallest and extend the lines beyond the end of the antenna until they meet, they would form an angle with the boom as the bisector. The elements are connected in a criss-cross pattern so that, if you are looking down from the top of the antenna, the smallest element on the left side would be connected to the next larger element on the right side, and vice versa. This crisscrossing continues through all of the elements. The antenna is fed at the small end with a balanced signal.
BASIC THEORY
I found a greatly simplified explanation of how this antenna works at radio-electronics.com. The link is at the bottom of the blog post. Let's say we are feeding our antenna with a signal about in the middle of the range. Because of the crisscross arrangement most of the adjacent elements cancel each other. But at the two elements in the middle of the array, which are closest to resonant length, you also have the width between them such that the wave will be 180 degrees out of phase when it reaches the other element. That combined with the crisscross feed causes the two elements to reinforce each other.
One other point, the smaller elements which don't contribute to the radiation, act like the shorter director elements of a Yagi, while the longer elements act like reflectors. This creates a radiation pattern much like a Yagi.
As you tune up and down the usable frequency range, you find that at the higher frequencies the radiation primarily comes from the smaller elements and at the lower frequencies, the larger elements are the ones that radiate.
Of course it's never quite this simple. Depending on design you may have many of the elements contributing to the radiation.
DESIGN CONSIDERATIONS
Because the imaginary line along the tips is straight, and the extended lines on each side form an angle, there are some relationships that have to hold. Hopefully it is obvious to all that if you go twice as far away from where the lines meet (the vertex) then the length of the line going across (the element length) will be twice as much. This leads to the formula that the ratio of the length of successive elements has to equal the ratio of the distance from the vertex. This ratio is given the Greek letter tau. This ratio defines the relative distance between elements. In our example of doubling the distance, tau would equal 0.5. To make an effective LPDA you want to have a tau that is as close to 1.0 as is feasible. You can see that tau of 1 would result in parallel lines which wouldn't work. To cover the range you want of double the initial frequency you need a change of length that is actually more than double. If tau is very close to 1 then you will need many elements and a very long boom to achieve that. These are the trade-offs to building a LPDA.
BEYOND THE BASICS
There are ways to add true parasitic elements to the LPDA to improve performance. This is beyond the scope of this discussion and can be found in the Antenna Book.
As usual, I want to point you to the ARRL Antenna Book for an excellent in-depth discussion of building real world LPDAs. Also, as a bonus, you get an LPDA design program for the PC when you buy the Antenna Book.
Log Periodic Antennas on radio-electronics.com
Wednesday, November 5, 2008
Impedance matching 101
November 5, 2008 Educational Radio Net, PSRG 24th session, Lee Bond N7KC
The impedance series is now history. During the course of that 13 week series we looked at several of the most fundamental ideas in the physics of electrical phenomena and, hopefully, gained some practical knowledge of how these ideas link together to form a basis for our understanding of all things electrical. Let's exercise some of this earlier impedance series material and see how it can be applied to solve practical problems which are routinely encountered on the bench. The first study examined the potentiometer or "pot" and its behavior when used as a voltage divider. This second study will first examine how energy is moved from a source to a load and then consider the effect of a transmission line in this process.
First, we need to understand a very elementary concept in describing mathematical plots. Imagine that we are walking along a straight path which starts to curve uphill. We notice that the walking is getting tougher as the path curves upward. We might make the observation that this is a steep upward slope. As we continue our walk along the path, it levels out and immediately starts to slope downward and we must hold back to avoid running. We might make the observation that this is a steep downward slope. Looking back on our route we see that the high point on the walk was at the highest point on the hill and, further, that the slope was actually zero at that point. So it is with graphical plots. A maximum point (or minimum for that matter) on a graph always occurs at exactly zero slope. If you are skilled with your math then it is an easy matter to set the slope to zero and determine the conditions which will then cause the maximum or minimum on the plot.
It seems to be common knowledge that one must match the antenna impedance to the transmission line to transfer maximum energy per unit time (power) across the connection. What is not so widely known is that we can use a resistive voltage divider to demonstrate the idea directly and with ease.
Let’s set up a demonstration circuit to test the idea. We will set a powerful oscillator to a frequency of 10 MHz and adjust the output voltage to 100 volts rms. Consider this to be a "perfect" voltage source with zero internal impedance. This means that our oscillator will stubbornly maintain the 100 Vrms at its output without regard to load. Now, let’s convert this oscillator to a real world device by adding 50 ohms to the output. This is the equivalent of your radio transmitter which has a 50 ohm output.
Next, we have a large carbon resistor and we can change the resistance value from zero ohms to 200 ohms by merely turning a calibrated knob. Since we suspect that heating of the resistor might be an interesting thing to watch let’s attach a thermometer to the resistor to see how its temperature changes during the demonstration.
Finally, attach the carbon load resistor to the 50 ohm output of our demonstration oscillator to complete the circuit. Let’s also attach an RF voltmeter to the load resistor so that we can log some numbers during the demo process. (see spreadsheet data and plot at end of this article)
So, we are ready to start the test and take some data. To make our point and to keep this short we will take just three measurements. First, set the load resistor to 30 ohms; we notice that the voltage across the load resistor is 37.5 volts. We know from Joule’s Law that power is just voltage squared divided by the resistance so the power (energy per time) dissipated in the load is 46.88 watts or 46.88 joules per second. Checking the thermometer we see that it has moved upscale from room temperature to level 1.
Next, set the load resistor to 50 ohms and we notice that the voltage across the load resistor is 50 volts. Applying Joule’s Law once again we see the dissipated power to be 50 watts or 50 joules per second. The thermometer is now reading higher than level 1 from the first measurement.
Finally, set the load resistor to 80 ohms and we notice that the voltage across the load resistor is 61.5 volts. Applying Joule’s Law once again we see the dissipated power to be 47.34 watts or 47.34 joules per second. The thermometer is now reading very close to level 1 from the first measurement.
Taking a look at our data we see that the load resistor temperature was highest at 50 ohms and dropped off either side of 50. If we were to take multiple data points and plot them on graph paper, such that the vertical axis, the ordinate, represented power and the horizontal axis, the abscissa, represented values of load resistance then we would show a "hill" much like the hiking hill we traversed earlier. The maximum value would occur at zero slope (top of the hill), when the "source" resistance of the oscillator equaled the load resistance. By extension, resistance can be replaced with impedance and you will obtain exactly the same results.
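For anyone who would rather reproduce the "hill" numerically than on graph paper, here is a minimal Python sketch of the demonstration circuit, assuming the same 100 volt rms source behind 50 ohms. The three load values match the measurements above; sweeping many more values traces out the full curve with its peak at 50 ohms.

```python
# Demonstration circuit: 100 Vrms source with 50 ohm internal resistance
# driving an adjustable load. Load power peaks when load equals source resistance.

V_SOURCE = 100.0   # volts rms
R_SOURCE = 50.0    # ohms

def load_power(r_load):
    v_load = V_SOURCE * r_load / (R_SOURCE + r_load)   # simple voltage division
    return v_load, v_load ** 2 / r_load                # Joule's Law: P = V^2 / R

for r_load in (30, 50, 80):
    v, p = load_power(r_load)
    print(f"R_load = {r_load:3d} ohms: V_load = {v:5.2f} V, P_load = {p:5.2f} W")
# R_load =  30 ohms: V_load = 37.50 V, P_load = 46.88 W
# R_load =  50 ohms: V_load = 50.00 V, P_load = 50.00 W
# R_load =  80 ohms: V_load = 61.54 V, P_load = 47.34 W
```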
We have taken the graphical approach here but mathematically this is a very clean problem. One simply writes the equation for power dissipated in the load in terms of the simple voltage division associated with the source and load impedance. Compute the slope, set it to zero, and notice that both source and load impedance must be equal to achieve zero slope.
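As a small illustration of that zero-slope argument (an add-on sketch, not part of the original discussion, and it assumes the sympy library is available), you can write the load power from the voltage divider, differentiate with respect to the load resistance, and solve for the point of zero slope.

```python
import sympy as sp

# Source voltage V, source resistance R_s, load resistance R_L (all positive)
V, Rs, RL = sp.symbols('V R_s R_L', positive=True)

# Load power from simple voltage division: P = (V * R_L / (R_s + R_L))^2 / R_L
P = (V * RL / (Rs + RL))**2 / RL

slope = sp.diff(P, RL)                  # slope of the power curve versus load resistance
print(sp.solve(sp.Eq(slope, 0), RL))    # [R_s] -> zero slope when R_L equals R_s
```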
Fine, you say, but what happens when I separate the source impedance and load impedance with a transmission line? Now things start to become very interesting. If, instead of RF energy, we had used DC then the transmission line could be a simple wire to complete the circuit. I deliberately used RF in the demonstration example to make a point about transmission lines. Although transmission lines are tagged with an impedance value they are not resistors. If you were to connect an ohmmeter across an open 50 ohm transmission line the meter would indicate an open circuit. If you connected the same meter across a 50 ohm resistor then the meter would read exactly 50 ohms. The solitary mission of a resistor is to convert electrical energy to heat. The mission of a transmission line is to transfer energy from point A to point B with minimum loss of energy. For example, the 50 ohm transmitter provides the energy, the 50 ohm transmission line directs the energy with intended minimum loss, and the 50 ohm load consumes the energy. So, what is going on here when the line is in play?
To answer this question we need to understand that transmission lines, coaxial cables for example, have distributed inductance and capacitance throughout. The so-called characteristic impedance of a transmission line is defined by the square root of the ratio of the distributed inductance to the distributed capacitance of a tiny cross section of the line, and it is resistive. Our 50 ohm line simply scales the line current such that the ratio of line voltage to line current is 50 and no energy is lost in the process. However, there are two primary energy losses in a transmission line which we need to address. Skin effect currents flowing in the copper conductors and dielectric absorption losses in the insulating material between conductors produce heat loss. Both losses vary as a function of frequency.
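As a rough numerical check of that square-root relationship, the sketch below plugs in ballpark distributed constants for RG-58-style 50 ohm coax, about 0.25 microhenries and 100 picofarads per meter. Those figures are typical published values and are only assumptions here; the exact numbers vary from cable to cable.

```python
from math import sqrt

# Characteristic impedance from distributed constants: Z0 = sqrt(L per meter / C per meter)
L_PER_M = 250e-9    # henries per meter (~0.25 uH/m, typical RG-58-class coax)
C_PER_M = 100e-12   # farads per meter (~100 pF/m)

z0 = sqrt(L_PER_M / C_PER_M)
print(f"Z0 = {z0:.0f} ohms")   # about 50 ohms
```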
There are three cases which we need to consider for the transmission line between the source and load in the demonstration example.
First, consider the infinitely long 50 ohm transmission line. Energy entering and moving down the line is constantly reduced by the skin effect losses and dielectric losses and, eventually, this energy is reduced to zero. Input energy is totally dispersed nearest the input end of the infinite length line. Our demo system behaves as if the line were a 50 ohm resistor and maximum energy is transferred from the source. No useful work has been done.
Secondly, consider a very much shorter 50 ohm line which is terminated with a 50 ohm resistor. This line is so short that skin effect losses, and others, are small so that almost the entire input energy is converted to heat in the load resistor. The 50 ohm termination matches the line so there is no impedance discontinuity and there is no reflected energy. The line and load are matched and maximum power is transferred by the line to the load. Maximum useful work has been done.
Thirdly, using the same short line as above we arrange for the load to be something other than a 50 ohm resistor. Perhaps a 60 ohm resistor is now the load. When the incident energy first encounters the 50 ohm line the characteristic impedance of the line scales the current appropriately for the 50 ohms. Then, at some later time, the traveling energy encounters the 60 ohm termination. Obviously there is now an impedance mismatch and some small fraction of the incident energy is reflected back toward the generating end. Given these circumstances, with the reflected energy in play, it is clear that maximum energy transfer can never be achieved. Less than maximum useful work has been done.
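The size of that "small fraction" can be put into numbers with the standard reflection coefficient for a resistive load on a lossless line, gamma = (ZL - Z0) / (ZL + Z0). That formula is not from the original discussion but is the conventional way to quantify the mismatch; for the 60 ohm load on the 50 ohm line it works out as shown below.

```python
# Reflection from a resistive mismatch at the end of a lossless line.
Z0 = 50.0   # line characteristic impedance, ohms
ZL = 60.0   # load resistance, ohms

gamma = (ZL - Z0) / (ZL + Z0)                 # voltage reflection coefficient
reflected_fraction = gamma ** 2               # fraction of incident power reflected
swr = (1 + abs(gamma)) / (1 - abs(gamma))     # standing wave ratio

print(f"reflection coefficient = {gamma:.3f}")                 # ~0.091
print(f"reflected power fraction = {reflected_fraction:.2%}")  # ~0.83%
print(f"SWR = {swr:.2f}")                                      # 1.20
```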
In summary, one can show either graphically or mathematically that maximum energy is transferred when the source impedance equals, or matches, the load impedance. Transmission lines do not dissipate energy as do resistors. The entire system must be matched to realize maximum energy transfer. Transmission line to load mismatches cause energy reflections which always reduce system throughput and degrade performance.
This concludes the set up for the discussion of impedance matching. Are there any questions or comments?
This image is a scan of spreadsheet data and associated plot for the demonstration circuit described above.
This is N7KC for the Wednesday night Educational Radio Net.
Monday, October 27, 2008
Antennas: The Yagi, Bob, Week 23
Tonight we will cover another of the basic ham antennas, the Yagi. This is the most popular rotatable antenna as it is a good compromise between cost, durability, manageability, and performance.
Let me start by saying I fully expected to get a neat simple explanation of the theory of Yagis from the ARRL Antenna Book but it is not there. This seems to be one of those black magic designs that just work. Don't misunderstand, this can be modeled by the popular computer programs and you can see how it works but I didn't find any simple explanation of why it works. With that said, let's dive in anyway and at least describe it and what it does, along with some of the compromises in design.
It consists of a horizontal boom with two or more horizontal elements that are perpendicular to the boom. I believe most of you have seen several Yagis by now so I won't go too much into the appearance. The two necessary elements are the driven element, which is essentially a horizontal dipole like the kind we have covered previously, and the reflector. The reflector, as you might guess, is placed "behind" the driven element; that is to say, the radiation of the antenna is primarily in the direction opposite the reflector. A Yagi has only one reflector. Any elements beyond the driven element and the reflector are directors and are on the opposite side of the driven element from the reflector. So, going from back to front the elements are: reflector, driven element, director, director, etc.
Early designs of Yagis had all of the elements equally spaced at around 0.15 wavelengths between each one. Optimal designs now have the reflector, driven element, and first director more closely spaced (about 0.1 wavelength or less) and the directors spaced farther apart. So, let's look at what we mean by an optimal design. First, what are the design trade-offs of a Yagi?
THE PERFECT YAGI
Even though we wouldn't all agree on what the perfect Yagi is, we can agree on three things we would want:
- 50 Ohm Impedance at the feedpoint; pure resistive (no reactance)
- Zero gain at the back and sides
- Maximum possible gain at the front
REAL WORLD YAGIS
If we design for a maximum gain antenna what we get is an antenna that has a very narrow range of frequencies with a usable SWR. The ARRL Antenna Book has a nice set of graphs showing this which I will use to share some numbers. The example I've chosen is for a 10 Meter, 3 element Yagi. For those that have the book it is on page 11-5. The table below shows three Yagi designs: maximum gain antenna, the maximum gain per SWR antenna and the optimal antenna.
Value | Max Gain Design | Max Gain per SWR Design | Optimized Design |
---|---|---|---|
SWR at 28.4 MHz | 2 | 2 | 2 |
SWR at 28.0 MHz | 7 | 2 | 2 |
SWR at 28.8 MHz | 10 | 2 | 2.2 |
Gain at 28.4 MHz | 8.4 | 7.6 | 7.2 |
Gain at 28.0 MHz | 7.9 | 7.5 | 7.1 |
Gain at 28.8 MHz | 8.2 | 7.8 | 7.4 |
F/R at 28.4 MHz | 13 | 22 | 22 |
F/R at 28.0 MHz | 20 | 15 | 20 |
F/R at 28.8 MHz | 6 | 18 | 23 |
From this table you can see that you get a modest improvement in gain for the maximum gain design but at a cost of both SWR and the Front to Back gain ratio. The Gain per SWR gives you good SWR across the band and better gain than the optimized but at a cost of decreased Front to Back gain ratio. The optimized Yagi design sacrifices a bit of overall gain but gives you a good SWR across the band and a consistently good Front to Back ratio as well.
There are design considerations for adding more directors as well but I will leave that for another time. This is more specifically for VHF/UHF antennas and we may have a session just on that.
The ARRL Antenna Book rates two element Yagis well and indicates that the increased gain drops off as you start adding directors.
EZNEC Antenna Software by W7EL
Here is the promised link to get the EZNEC antenna modeling software. I have the free version now which limits you to 20 segments. Each wire should have several segments to allow for accurate modeling so 20 segments won't go very far on a multiple element antenna. I will probably end up buying the full version which is $89 for a web purchase and direct download or $99 to get the CD.
Wednesday, October 22, 2008
Potentiometers 101
October 22, 2008 Educational Radio Net, PSRG 22nd session, Lee Bond N7KC
The impedance series is now history. During the course of 13 weeks we looked at several of the most fundamental ideas in the physics of electrical phenomena and, hopefully, gained some practical knowledge of how these ideas link together to form a basis for our understanding of all things electrical. Let's exercise some of this earlier impedance series material and see how it can be applied to solve practical problems which are routinely encountered on the bench. My choice for the first study is the potentiometer or "pot" in the vernacular.
There is one wee problem with the word potentiometer which we must clear up before proceeding... there are two devices which share the same name but perform different duties in the electrical world. In early laboratories one could find a very elegant device, with many knobs, generally in a nicely crafted wooden box, and which was used to measure electrical potential differences with great accuracy. Today this function is performed by sophisticated digital voltmeters and one rarely sees the older instrument except in museums. The potentiometer that we will study is the familiar device commonly found on the front panels of our radios, which can be rotated to produce some desired action.
These devices are everywhere. Virtually all "level" controls such as audio volume, AGC, squelch, power supply voltage output, and many more are based on the lowly pot so a good grasp of the underlying operational details is a must for your bag of tricks. We all know what a pot looks like physically. It is a resistive device which has 3 contact points. Basically each end of the resistive "element" is attached to one of the points. The remaining contact point, generally known as the "wiper", connects to a sliding assembly which is controlled by some knob or motor, and which makes a mechanically movable contact which is adjustable from one end of the resistive element to the other.
The resistive element proper was carbon in the early days of this device but modern materials have largely replaced carbon. More common today is the very robust cermet element and the even more robust wire wound element. The carbon element tended to abrade as the wiper slid along its surface and it would become "scratchy" and very annoying. Cermet has much less tendency to abrade and also offers what is called infinite resolution. In contrast is the wire wound pot, which may or may not offer infinite resolution depending on construction. If the resistance element is just a length of resistance wire formed in a circle then the resolution would be deemed infinite since the slider can find any point on the wire. If the wire resistance element is a helically wound structure which is then formed in a circle then the wiper can only contact discrete points along the main wire and the pot cannot be set infinitely fine.
The most common pot is structured with a linear taper, meaning that doubling the angle of rotation will double the resistance from the wiper with respect to a designated end of the element. Additionally, there are log taper pots where the resistance changes logarithmically with rotation angle. The log class includes the audio taper pot, which produces a uniform change of loudness to your ear with uniform shaft rotation if used as a volume control. Variations here are log clockwise or counter clockwise.
There is a class of very high precision multiturn wire wound potentiometer devices called Helipots by Beckman. Bourns and others offer similar devices. These offer 5 turn, 10 turn, and 20 turn rotations so the total resistance can be controlled over as much as 7200 degrees of rotation. The linearity of these devices is specified as deviation from a straight line and they all offer extraordinary precision. They are generally used with turns-counting dials.
One last point concerning the use of wire wound pots is in order. In addition to the desired resistance mechanism there is an added component of inductance present. If the element is helically wound then the inductive component is much larger than that encountered with the simple wire element. Inductance/reactance effects limit the use of this sort of pot in AC circuits, hence they are more commonly found in DC circuits.
Ok, let's build a circuit. We will need a power supply of some sort so how about using a 10 volt battery. 10 volts will be convenient for our discussion even though a 10 volt battery would be an oddity for sure. Then we need a pot to work with. Let's choose a simple 1000 ohm carbon unit rated at 2 watts and which is structured as a linear device. That's it for our circuit parts... just a battery and a pot. Let's connect the battery and pot in series in the following manner. Pot contacts are normally labeled 1, 2, and 3. Contact 1 is commonly the low potential reference so it will go to the battery negative terminal. Contact 2 is always the wiper and common convention states that the wiper, contact 2, moves toward the contact 3 end with clockwise rotation of the knob. So, to complete our circuit we connect contact 3 to the battery positive terminal.
Now we need a measuring device so let's choose a VOM, as in Volt-Ohm-Milliamp meter. In fact we will need two of these meters so let's use the common Simpson 260 VOM. We want to measure the series current flowing in our circuit so disconnect the pot contact 3 from the battery positive and insert, in series, one of the VOMs with the positive lead going to the battery positive and the negative (common) lead going to contact 3 on the pot. From Ohm's Law we expect the series current to be 10 volts divided by 1000 ohms and, sure enough, the series meter shows the current to be 0.01 amperes or 10 milliamperes. (Note: let me assert that we are using a "perfect" meter here... one that does not influence the circuit being measured. In real life no such device exists and all measuring instruments change the circuit to some degree. The effect is commonly described as "loading".)
From earlier discussions we know that 1 ampere is defined as 1 coulomb of charge per second past a given point so the 0.01 ampere represents 0.01 coulombs per second flowing in our circuit. Also from earlier discussions we know that you cannot impress any voltage on a resistor without the resistor becoming warmer than its surrounding environment. The job of a resistor is to convert the energy of moving electrical charge to heat energy. Remember Joule's Law? The power calculation for our little circuit is voltage squared divided by resistance, or 0.1 watt. From earlier discussions we know that 1 watt is one joule per second so we conclude that 100 millijoules of electrical energy per second is being converted to heat in our pot resistive element and a sensitive thermometer would show some upscale movement.
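Here is a minimal Python sketch of that arithmetic, using the 10 volt battery and 1000 ohm pot figures from the circuit just described.

```python
# Series circuit from the text: 10 volt battery across the full 1000 ohm element.
V_BATTERY = 10.0    # volts
R_ELEMENT = 1000.0  # ohms

current = V_BATTERY / R_ELEMENT       # Ohm's Law: I = V / R
power = V_BATTERY ** 2 / R_ELEMENT    # Joule's Law: P = V^2 / R

print(f"series current = {current * 1000:.0f} mA")   # 10 mA
print(f"heat in element = {power * 1000:.0f} mW")    # 100 mW, i.e. 0.1 joule per second
```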
Now, adjust the pot shaft fully clockwise, and let's add the second meter as a voltmeter and place the meter negative lead on the battery negative and the meter positive lead on contact 2 of the pot. Fully clockwise moves contact 2 to contact 3 and we see 10 volts on the meter as you would expect since both are in contact with the battery positive terminal. Now rotate the shaft to mid rotation, half way between rotational extremes, and notice that the meter indicates 5 volts or 1/2 of the previous initial reading. Moving the shaft again such that the wiper moves toward contact 1 shows that voltage goes toward zero whereas moving from midpoint toward contact 3 shows the voltage going toward maximum.
Now consider the shaft at mid position where we measured 5 volts on the meter. Since the pot is linear the mid position resistance should be 1/2 of the 1000 ohms or 500 ohms. At mid point we would expect 1/2 of the total power to be dissipated above the wiper position and 1/2 dissipated below. At mid point we measure 5 volts and we know that the resistance is 500 ohms. So, 5 squared divided by 500 from Joule's Law gives 0.05 watts which is 1/2 of the total 0.1 watts dissipation.
The idea of "voltage drop" follows directly from the lesser amount of energy dissipated as the wiper approaches the reference terminal or contact 1 on the pot. By extension, one could partition the resistor element into 10 equal sections and then argue that the total must be the sum of the parts so each part would then dissipate 0.01 watts. Each partition would then "drop" 1 volt over 100 ohms which computes to 0.01 watts.
My point here is to show that, yes, one can talk glibly about voltage drops around a resistive circuit, but the underlying principle is directly related to energy conversion to heat. Resistors always throw something away, but the wiper on a pot allows you to choose how much of the input you keep. Basically a pot is an attenuator. The output signal will never be larger than the input because of the energy conversion into heat phenomenon.
The pot is a simple voltage divider and the output voltage can be easily calculated. The fraction of the tapped off resistance divided by the total resistance times the input will yield the output voltage. For example, using our 1000 ohm pot, if the tapped resistance is 133 ohms and the input voltage were 8.5 volts then the output voltage is 133 ohms divided by 1000 ohms times 8.5 volts or 1.1305 volts. Conversely, if one knows the output voltage and the tap ratio then computing the input voltage is a piece of cake. In like fashion, if you know the input voltage and desired output voltage then calculating the tap point is one more piece of cake.
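A short sketch of that divider arithmetic, using the 133 ohm tap and 8.5 volt input from the example; the second function just inverts the same formula to find the tap resistance for a wanted output.

```python
# Pot as a voltage divider: V_out = V_in * R_tap / R_total
def divider_output(v_in, r_tap, r_total):
    return v_in * r_tap / r_total

print(divider_output(8.5, 133, 1000))   # 1.1305 volts

# Inverse problem: what tap resistance gives a wanted output voltage?
def tap_for_output(v_in, v_out, r_total):
    return r_total * v_out / v_in

print(tap_for_output(8.5, 1.1305, 1000))   # 133.0 ohms
```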
One last point... sometimes you will see a pot symbol wired with terminals 1 and 2 or 2 and 3 connected together. In this case the pot is wired as a rheostat and is nothing more than a variable resistor. It is not possible to voltage divide with a single rheostat. You must have at least two units to achieve voltage division.
In summary, all resistors dissipate energy and will be measurably warmer than their environment. Voltage drops are a direct consequence of energy dissipation in a resistive element. Kirchhoff's voltage law, which states that the algebraic sum of the voltages in a closed loop is zero, is simply a restatement of conservation of energy where total energy converted to heat equals total input energy. Given that work and energy are identical it follows that work in equals work out, hence the net work is zero.
This concludes the set up for the discussion of potentiometers or pots. Are there any questions or comments?
Something to ponder: Two atoms are leaving a bar when one says to the other "I left my electrons in the bar". The other says to the first "are you sure?" The first replies "I am positive".
This is N7KC for the Wednesday night Educational Radio Net.
Monday, October 13, 2008
General Test Grab Bag, Bob, Week 21
Tonight we will cover some of the procedural rules that appear in the General Class test.
First is a set of three questions dealing with an unusual situation in the ham bands. That is the situation where amateur radio is secondary to others that also use the band. In other words, the other service or services have priority over the amateur radio service in these bands. The bands are the 30 meter band and the 60 meter band.
The rule is a common sense one and allows the greatest flexibility to amateurs. Quoting from Part 97.303, "A station in a secondary service must not cause harmful interference to, and must accept interference from, stations in a primary service." In practice that means that any time there is interference between you and a primary service on a band where you are secondary, you must stop immediately, even if you are in the middle of a contact and it is the primary service that starts interfering with you. You are free to move to another frequency within the band where you aren't interfering and continue operating.
So on to the questions.
G1A14 (C) [97.303]
Which of the following applies when the FCC rules designate the amateur service as a
secondary user and another service as a primary user on a band?
A. Amateur stations must obtain permission from a primary service station before
operating on a frequency assigned to that station
B. Amateur stations are allowed to use the frequency band only during emergencies
C. Amateur stations are allowed to use the frequency band only if they do not cause
harmful interference to primary users
D. Amateur stations may only operate during specific hours of the day, while primary
users are permitted 24 hour use of the band
~~
G1A15 (D) [97.303]
What must you do if, when operating on either the 30 or 60 meter bands, a station in
the primary service interferes with your contact?
A. Notify the FCC's regional Engineer in Charge of the interference
B. Increase your transmitter's power to overcome the interference
C. Attempt to contact the station and request that it stop the interference
D. Stop transmitting at once and/or move to a clear frequency
~~
G1A16 (A) [97.303(s)]
Which of the following operating restrictions applies to amateur radio stations as a
secondary service in the 60 meter band?
A. They must not cause harmful interference to stations operating in other radio
services
B. They must transmit no more than 30 minutes during each hour to minimize harmful
interference to other radio services
C. They must use lower sideband, suppressed-carrier, only
D. They must not exceed 2.0 kHz of bandwidth
~~
Here is a question that is Emergency Communication related. Once again, the answer is both common sense and allowing the greatest flexibility to the amateur.
G1B04 (A) [97.113(b)]
Which of the following must be true before an amateur station may provide news
information to the media during a disaster?
A. The information must directly relate to the immediate safety of human life or
protection of property and there is no other means of communication available
B. The exchange of such information must be approved by a local emergency
preparedness official and transmitted on officially designated frequencies
C. The FCC must have declared a state of emergency
D. Both amateur stations must be RACES stations
~~
Music and Encryption...don't do it! (With a couple of very interesting exceptions!) The general idea about using codes and really about all communication in amateur radio is that you are not allowed to operate in a way that intentionally obscures the meaning of what you are communicating. If the codes you are using are generally known and so are understood generally then you are okay.
G1B05 (D) [97.113(a)(4),(e)]
When may music be transmitted by an amateur station?
A. At any time, as long as it produces no spurious emissions
B. When it is unintentionally transmitted from the background at the transmitter
C. When it is transmitted on frequencies above 1215 MHz
D. When it is an incidental part of a space shuttle or ISS retransmission
~~
So unless you happen to be in the Space Shuttle or the International Space Station, you don't get to transmit music.
G1B06 (B) [97.113(a)(4) and 97.207(f)]
When is an amateur station permitted to transmit secret codes?
A. During a declared communications emergency
B. To control a space station
C. Only when the information is of a routine, personal nature
D. Only with Special Temporary Authorization from the FCC
~~
Again, unless you happen to be controlling a space station (and how cool would that be!) you don't get to do it.
Here is another question about using codes.
G1B07 (B) [97.113(a)(4)]
What are the restrictions on the use of abbreviations or procedural signals in
the amateur service?
A. Only "Q" codes are permitted
B. They may be used if they do not obscure the meaning of a message
C. They are not permitted because they obscure the meaning of a message to FCC
monitoring stations
D. Only "10-codes" are permitted
~~
Finally here is a catch-all of prohibited activities.
G1B08 (D) [97.113(a)(4), 97.113(e)]
Which of the following is prohibited by the FCC Rules for amateur radio stations?
A. Transmission of music as the primary program material during a contact
B. The use of obscene or indecent words
C. Transmission of false or deceptive messages or signals
D. All of these answers are correct
~~
First is a set of three questions dealing with an unusual situation in the ham bands. That is the situation where amateur radio is secondary to others that also use the band. In other words, the other service or services have priority over the amateur radio service in these bands. The bands are the 30 meter band and the 60 meter band.
The rule is a common sense one and allows the greatest flexibility to amateurs. Quoting from Part 97.303, "A station in a secondary service must not cause harmful interference to, and must accept interference from, stations in a primary service." In practice that means anytime there is interference between you and a primary service, where you are secondary, you must stop immediately, even if you are in the middle of operating and the primary service starts interfering with you. You are free to change to another frequency within the band where you aren't interfering and continue operating.
So on to the questions.
G1A14 (C) [97.303]
Which of the following applies when the FCC rules designate the amateur service as a
secondary user and another service as a primary user on a band?
A. Amateur stations must obtain permission from a primary service station before
operating on a frequency assigned to that station
B. Amateur stations are allowed to use the frequency band only during emergencies
C. Amateur stations are allowed to use the frequency band only if they do not cause
harmful interference to primary users
D. Amateur stations may only operate during specific hours of the day, while primary
users are permitted 24 hour use of the band
~~
G1A15 (D) [97.303]
What must you do if, when operating on either the 30 or 60 meter bands, a station in
the primary service interferes with your contact?
A. Notify the FCC's regional Engineer in Charge of the interference
B. Increase your transmitter's power to overcome the interference
C. Attempt to contact the station and request that it stop the interference
D. Stop transmitting at once and/or move to a clear frequency
~~
G1A16 (A) [97.303(s)]
Which of the following operating restrictions applies to amateur radio stations as a
secondary service in the 60 meter band?
A. They must not cause harmful interference to stations operating in other radio
services
B. They must transmit no more than 30 minutes during each hour to minimize harmful
interference to other radio services
C. They must use lower sideband, suppressed-carrier, only
D. They must not exceed 2.0 kHz of bandwidth
~~
Here is a question that is Emergency Communication related. Once again, the answer is common sense and allows the greatest flexibility to the amateur.
G1B04 (A) [97.113(b)]
Which of the following must be true before an amateur station may provide news
information to the media during a disaster?
A. The information must directly relate to the immediate safety of human life or
protection of property and there is no other means of communication available
B. The exchange of such information must be approved by a local emergency
preparedness official and transmitted on officially designated frequencies
C. The FCC must have declared a state of emergency
D. Both amateur stations must be RACES stations
~~
Music and Encryption...don't do it! (With a couple of very interesting exceptions!) The general idea about using codes, and really about all communication in amateur radio, is that you are not allowed to operate in a way that intentionally obscures the meaning of what you are communicating. If the codes you are using are generally known, and therefore generally understood, then you are okay.
G1B05 (D) [97.113(a)(4),(e)]
When may music be transmitted by an amateur station?
A. At any time, as long as it produces no spurious emissions
B. When it is unintentionally transmitted from the background at the transmitter
C. When it is transmitted on frequencies above 1215 MHz
D. When it is an incidental part of a space shuttle or ISS retransmission
~~
So unless you happen to be in the Space Shuttle or the International Space Station, you don't get to transmit music.
G1B06 (B) [97.113(a)(4) and 97.207(f)]
When is an amateur station permitted to transmit secret codes?
A. During a declared communications emergency
B. To control a space station
C. Only when the information is of a routine, personal nature
D. Only with Special Temporary Authorization from the FCC
~~
Again, unless you happen to be controlling a space station (and how cool would that be!) you don't get to do it.
Here is another question about using codes.
G1B07 (B) [97.113(a)(4)]
What are the restrictions on the use of abbreviations or procedural signals in
the amateur service?
A. Only "Q" codes are permitted
B. They may be used if they do not obscure the meaning of a message
C. They are not permitted because they obscure the meaning of a message to FCC
monitoring stations
D. Only "10-codes" are permitted
~~
Finally here is a catch-all of prohibited activities.
G1B08 (D) [97.113(a)(4), 97.113(e)]
Which of the following is prohibited by the FCC Rules for amateur radio stations?
A. Transmission of music as the primary program material during a contact
B. The use of obscene or indecent words
C. Transmission of false or deceptive messages or signals
D. All of these answers are correct
~~
Monday, October 6, 2008
HIGH FREQUENCY PROPAGATION, Jim Hadlock K7WA, week 20
HIGH FREQUENCY PROPAGATION
October 8, 2008 – Educational Radio Net, Session 20
Jim Hadlock K7WA
One of the things that got me interested in radio was hearing stations from far away places. At first I used my clock-radio in the AM broadcast band. I discovered that at night I could hear stations from Los Angeles, Salt Lake City, and even Mexico! Later I built a Knight Kit shortwave radio and began listening to broadcasts from South America, Russia and Japan. As a ham I've enjoyed the DX (long distance communication) aspect of our hobby for nearly fifty years, but the idea of a small radio signal propagating to and from far away places still intrigues me. Last year when I was in the Caribbean I made a 2-way contact with Paul, NG7Z, in Bothell on 40 meter CW – we were both running 5 watts of power. That, to me, is an example of the miracle of radio propagation – a very small signal covering a great distance.
The subject for tonight is High Frequency Propagation. We will discuss some of the factors that determine how a radio signal travels to far away places and resources for analyzing and predicting propagation conditions. If you have spent much time listening or operating in the high frequency bands between 160 meters and 10 meters you know that propagation is highly variable. How far you can communicate depends on many factors.
Let's begin with frequency. As I discovered with my clock-radio, far away signals on the AM broadcast band come in better at night. This characteristic applies to signals in the 160 meter, 80 meter, and 40 meter amateur bands as well. During daylight these bands may provide local coverage, but at night they can support world-wide communications. The higher amateur bands, 20 meters, 15 meters, and 10 meters, are usually open during the daytime and quiet at night. These daily effects are due to the sun's radiation ionizing atoms and molecules in the earth's upper atmosphere, and the different layers of ionized material either absorbing, bending, or passing through radio signals of different frequencies. During daylight ionization in what's called the D-Layer, about 50 miles high, tends to absorb radio signals. This absorption is greater at low frequencies than high frequencies. If the signals pass through the D-Layer they can be refracted, or bent back to earth, by ionization in the F-Layer, 150 to 300 miles high. If the frequency is too high to be bent the radio signal will pass through the F-Layer and continue out into space. A few weeks ago Bob, K9PQ, discussed the terms Maximum Useable Frequency (MUF) and Lowest Useable Frequency (LUF) which are defined by this absorption/refraction process. After the sun sets the D-Layer starts breaking down due to the absence of solar radiation and propagation improves on the lower frequency bands. Although the F-Layer also breaks down in the absence of solar radiation, it often supports some propagation through the night. At sunrise solar radiation begins to build the D-Layer and F-Layer again.
In addition to the daily propagation cycle, seasonal effects vary greatly. Spring and Fall are similar, but Winter and Summer are very different. Due to the tilt of the earth's axis, radiation from the sun is weaker in the winter and stronger in the summer. The long winter nights make for very good low band propagation, while during the short summer nights the higher bands may remain open 24 hours a day.
The solar radiation which ionizes atoms and molecules in the earth's atmosphere is not constant. One of the best indications of strong radio propagation is the presence of sunspots on the surface of the sun. Sunspots are areas on the sun associated with ultraviolet radiation which ionizes the upper atmosphere. Sunspots can appear and disappear quickly or remain for several solar rotations (the sun rotates on its axis every 27.5 earth-days). Sunspots have been observed since Galileo first turned a telescope on the sun in the early seventeenth century, and they have been counted and recorded for as long as they have been observed, with records going all the way back to 1610. Currently there are two official sunspot numbers in common use, the daily "Boulder Sunspot Number," computed by the NOAA Space Environment Center, and the "International Sunspot Number" recorded in Europe. Both numbers use a method devised by Rudolf Wolf in 1848, which combines a count of groups of sunspots and a count of individual sunspots:
R = k(10g + s), where "R" is the sunspot number, "g" is the number of sunspot groups, "s" is the total number of individual sunspots in all the groups, and "k" is a variable scaling factor (usually less than 1). A small worked example of this formula follows the resource list below.
Sunspot Numbers – www.spaceweather.com/glossary/sunspotnumber.html
Understanding Solar Indices – www.arrl.org/tis/info/pdf/0209038.pdf
The Sun, the Earth, the Ionosphere: What the Numbers Mean and Propagation Predictions – www.arrl.org/tis/info/k9la-prop.html
W1AW Propagation Bulletin – www.arrl.org/w1aw/prop
WWV - www.swpc.noaa.gov/ftpdir/latest/wwv.txt
NCDXF Beacons – www.ncdxf.org/Beacon/BeaconSchedule.html
Contributed by WR5J, Curt Black
Big Bear Solar Observatory - http://www.bbso.njit.edu/
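To make Wolf's formula above concrete, here is a minimal Python sketch; the group count, spot count, and scaling factor are invented for illustration and are not a real observation.

# Wolf (relative) sunspot number: R = k(10g + s)
def wolf_number(groups, spots, k=1.0):
    """Return the relative sunspot number R = k * (10*groups + spots)."""
    return k * (10 * groups + spots)

# Hypothetical observation: 3 sunspot groups containing 14 individual spots,
# reported by an observer whose scaling factor is 0.8.
print(wolf_number(groups=3, spots=14, k=0.8))   # 0.8 * (30 + 14) = 35.2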
Wednesday, October 1, 2008
EMERGENCY COMMUNICATIONS SKILLS - Bob, week 19
Tonight's edition will cover some important communication skills for Emergency Communication (EmComm) situations. This lesson borrows heavily from The ARRL Emergency Communication Handbook. Although there is a lot of overlap between this book and the three ARRL Emergency Communication course books, I still think it is worth having as a ready reference. I would even recommend bringing it along during a real emergency. While you should avoid trying to use it while you are actually operating, you will likely have some down time when you could refresh your memory and incorporate the good practices you find there. This lesson will use information and tips found in Chapter 5, Basic Communication Skills.
Listening Skills
The need for this is obvious but let's break it down a bit.
- Train yourself to understand what is being said under difficult conditions, such as when you have a poor connection with lots of static and some dropouts, when you are in a noisy environment, or simply when another conversation is going on nearby. Sometimes a quiet room with just one other conversation going on is more distracting than being in the middle of a crowd. And you can't always ask the people having the conversation to keep it down or move it elsewhere. That other conversation may well be another radio operator performing an equally critical function.
As with any training, the best way to prepare is to practice in realistic environments. For ACS members, our field exercises like Field Day or the SET are good opportunities. Another idea, which I believe I got from Brian, WB7OML, is to have two radios on, tuned to different talk stations, and try to pick out one.
- Use headphones if possible to reduce the level of the noise or conversations around you. The Seattle EOC is fitted out with Fire Engine headsets and they work great. Airplane headsets would be another good choice. I don't have my own yet but they are on my wish list.
- Be sure to leave enough of a break between the end of the received transmission and the start of your transmission to allow for breaking stations. This is courteous practice all the time but is critical during emergencies.
Be Boring
This is not the time for creativity in your use of language. Simplicity, clarity and predictability in your communications are very good things when you will potentially be describing things that are well outside the ordinary. A few other tips:
- Be brief. Keep your transmissions short and to the point. This is a bit of a judgment call since you want to make sure your transmission completely conveys the information you want it to; but try to avoid a lot of unnecessary extra descriptions or irrelevant facts.
- Use plain language, avoiding Q signals, 10-codes and other jargon. One exception to this is the use of pro-words described below.
- Spell unusual words, abbreviations or names with phonetics. The ARRL standard is to say the word, say "I spell" and then spell the word phonetically. In keeping with the "boring" theme, it is best to use the standard phonetics rather than some other set, even if the alternative is in common use on the HF bands. A lot of people who will be on the air will not have spent any time there and may not understand what is, to you, a common alternate phonetic. (A small illustration of standard phonetic spelling follows this list.)
- Avoid contractions. This is one I hadn't thought of but is a very good tip to avoid confusion. I use contractions all the time and will have to concentrate to avoid them.
- Avoid thinking on the air. If you need to collect your thoughts and still need to continue your transmission say "Stand By" then un-key, decide what you are going to say then key up again and say it.
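For readers who like to see things spelled out, here is a minimal Python sketch of standard phonetic spelling using the ITU phonetic alphabet; the call sign in the example is simply borrowed from elsewhere in these notes.

# ITU/NATO standard phonetic alphabet, the set promoted by the ARRL.
PHONETICS = {
    "A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray", "Y": "Yankee",
    "Z": "Zulu",
}

def spell_phonetically(word):
    """Spell a word or call sign with standard phonetics; digits are read as-is."""
    return " ".join(PHONETICS.get(ch, ch) for ch in word.upper())

print(spell_phonetically("K9PQ"))   # Kilo 9 Papa Quebec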
Pro-words
There is one exception to the "use plain language" rule and that is the use of pro-words also called pro-signs. These are promoted by the ARRL and, I believe, in EmComm generally. So far I have primarily been discussing voice communications but there are pro-signs for both Voice and for Morse/Digital communications. The Morse/Digital code is in parentheses. They are as follows:
- Roger (R) - message received completely and correctly
- Over (KN) - indicates that the specific station that is being communicated with should respond
- Go Ahead (K) - indicates that any stations may respond
[Bob's Note: I don't know how common it is to differentiate between Over and Go Ahead. I seem to hear them used interchangeably and I wouldn't count on the strict difference in voice communications.]
- Clear (SK) - completed transmission and releasing the frequency. Usually this indicates that you are still listening on the frequency, but it is common and, in my opinion, good practice to say "Clear and Listening" to remove any doubt.
- Out (CL) - completed transmission and leaving the air. Will not be listening.
- Stand By (AS) - The ARRL manual only calls this a temporary interruption of the contact. In my experience it has a different and much more useful meaning. It is explicitly telling the other station to refrain from transmitting and wait for you to transmit again. Another very useful variation on this is "All Stations Stand By" which would normally only be used by the Net Control Station if there was one.
Tactical and FCC Call Signs
It is easy to get confused about the use of Tactical Call Signs and the use of your (or your station's) FCC Call Sign. Hopefully this will make this clear.
One of the first things you learn about ham radio is that you must use your FCC Call Sign once every 10 minutes and at the end of your last transmission. This requirement still holds during an emergency! The only time you wouldn't use your own FCC call sign is when you are using the FCC Call Sign of a club station (e.g. W7ACS) or when you are not the control operator and you are using the call sign of the control operator of the station you are on. That second one is not likely to ever occur. The ARRL manual has a very good rule of thumb to accomplish this without going overboard. Since nearly all tactical exchanges are less than 10 minutes in length, give your FCC Call Sign at the end of each full exchange. This doubles as a signal to the other station and anyone else listening that you are finished with that exchange of transmissions.
The use of Tactical Call Signs is purely for better communication and has nothing to do with the FCC requirement to use your FCC Call Sign. Tactical call signs are used to identify a particular station by location, function, and so on. This allows more than one operator at a station, or the switching of operators, without confusion about which station is sending or is intended to receive the transmission. Here are some examples of Tactical Call Signs:
- Net Control
- Seattle EOC
- South Seattle Community College
- Green Lake Community Center
- Seattle City Light
So for an exchange it is good practice to use only the tactical call sign for the entire exchange and finish with both call signs together to signal the end of the transmission, for example, "Seattle EOC, K9PQ". This keeps everything legal but allows for clearer transmissions during the exchange.
Tuesday, September 23, 2008
IMPEDANCE SERIES PART 13, (final segment) Lee week 18
September 24, 2008 Educational Radio Net, PSRG 18th session
This session is the 13th in the impedance series. Given that impedance is the combination of reactance and resistance and, further, that reactance is an alternating current phenomenon it is clear that we must have some elemental definitions under our belts to fully appreciate the subject. This multi-part narrative series has been an attempt to elevate participants to an intuitive level of electrical understanding without using any serious mathematics as well as provide some review for those of us who have not spent a lot of time on fundamentals lately.
Thus far we have talked about electrical current, voltage, resistance, Ohm's Law, power, DC or direct current, AC or alternating current, Joule's law, Kirchhoff's two circuit laws, capacitance, and capacitive reactance including the impedance of a resistor-capacitor combination. This 13th part of the series will look at inductance and inductive reactance, and end with the impedance of a resistor-inductor combination. This is the ending session of this particular series. All discussion material will be reviewed continually and be available on the blog.
In future sessions I will discuss series resonance which is a natural progression of the subject material thus far. Then parallel resonance followed by amplifiers then oscillators. Stay tuned.
Let's review what has been covered up to this point in the series.
Part 1 developed the idea of electrical current consisting of moving charge and defined the ampere as 1 coulomb of charge moving past a fixed point in 1 second. One coulomb was defined as a collection of charge numbering 6.24 x 10^18 electrons.
Part 2 developed the notion of mechanical "work" and considered objects at different "potential" levels in a gravitational field. The concept of "voltage", also known as electrical potential difference, and the relationship of voltage to current follows closely with the idea of a mechanical weight being moved between different levels. In both cases work is being done and energy is being manipulated in various ways.
Part 3 capitalized on Bob's lightning series to review electrical current in the context of a charged cloud redistributing charge in the form of lightning where modest amounts of charge make a large impression if moved rapidly.
Part 4 developed the notion of potential difference and ended with a definition of voltage. If you move 1 coulomb of charge from point A to point B in an electric field such that 1 joule of work is done then the potential difference between points A and B is defined as 1 volt. Another way to state this is that 1 joule of energy is required to push 1 coulomb through a potential difference of 1 volt.
Part 5 developed the notion of power by using a mechanical analogy. Power is the relationship between energy and time. Specifically power is the change in energy as in work done divided by the change in time to do the work. Conversely, energy is power multiplied by time.
Part 6 developed the notion of resistance by using a simple circuit to compare how well various materials conduct electrical current. We looked at a simple series circuit with fixed voltage, one D cell battery, a fuse, an ammeter, a switch, and a pair of DUT terminals as in Device Under Test. Substituting various materials across the DUT terminals yielded different measurements on the ammeter and we ranked these materials based upon their "conductance". Finally, we learned that resistance and conductance are reciprocals and that high conductance equals low resistance and vice versa.
Part 7 developed the notion of Ohm's Law by using a simple series circuit to illustrate the relationship of voltage, current, and resistance. Ohm's Law states that electrical current through a resistive device is directly proportional to the voltage across the device so, for example, doubling the voltage across the device will double the current through the device. This relationship stated in math terms is I (which is the symbol for current) equals E (the symbol for voltage) divided by R (the symbol for resistance).
Part 8 developed the notion of direct current and alternating current by using a sand filled tube with a scribed fiducial mark. By assuming that the sand particles represented electrons we could watch the action at the mark and deduce if the current, or moving electrons, was AC or DC.
Part 9 contrasted direct current and sinusoidal alternating current by measuring the temperature of a resistor when subjected to the same maximum voltage from each waveform. The conclusion was that equal values of DC voltage and AC rms voltage, if impressed across a resistor in turn, will produce the same heating effect, or work, in that resistor hence are equivalent. Heat produced as a consequence of current through a resistance is called Joule heating. Energy losses such as this are sometimes called Johnson losses as well.
Part 10 reviewed Ohm's law and restated the concepts from part 9 in a manner called Joule's law, wherein energy is associated with time to define power; a variable substitution from Ohm's law produces the familiar P = (E^2)/R formulation. Additionally, the very important Kirchhoff's voltage and current laws were introduced.
Part 11 introduced the concept of capacitance. The relationship of charge denoted by symbol q, capacitance denoted by symbol C, and voltage denoted by symbol V is simply q=CV. The unit of capacitance is the farad which is defined as 1 coulomb per volt. Given that one farad is a very large unit we normally express capacitance by micro-farads, nano-farads, or pico-farads.
Part 12 introduced the concept of capacitive reactance by combining circuit resistance with capacitive reactance to form impedance which represents the total opposition to electrical current flow and which is denoted by the symbol Z. We found that capacitive reactance is not present in purely DC circuits with unchanging currents rather it is an AC phenomenon and that the magnitude of reactance is inversely related to the AC frequency and considered to be negative in the sense that it is plotted in quadrant 4 of the typical x/y presentation. Additionally we found that the magnitude of impedance is graphically shown by the length of the hypotenuse of a right triangle when resistance represents the base and reactance represents the height of this triangle. While current and voltage are perfectly "in phase"... meaning "in time"... with one another in a resistor we found that current and voltage are in quadrature or 1/4 cycle out of phase in the perfect capacitor and that the resulting overall circuit phase angle is a combination of the two. Since voltage leads the current by the phase angle we found that capacitors always produce a leading angle. We found that the resistor gets warm whereas the perfect capacitor does not indicating that energy is transformed to heat in the resistor but stored in the capacitor electric field to be returned to the circuit in the following cycle.
Part 13, tonight's final edition, will introduce the concept of inductance with symbol L, inductive reactance with symbol XsubL, and the impedance of a resistor-inductor pair which is denoted by the symbol Z. This is a long segment with challenging ideas so just close your eyes and listen carefully to maximize the experience. Then go to the blog tomorrow and read it again. All series parts are available on the blog for review at anytime.
OK, let's continue with the very sophisticated idea of electrical inductance and its associated phenomena. I lean heavily upon information contained in my favorite tome, "Physics for Students of Science and Engineering" by Halliday and Resnick. Mathematically this book may be a challenge for many, so I have plucked out the essential ideas and attempted to construct a narrative which is easy to follow.
First a bit of history is in order. Prior to the 1800's, electrical phenomena were presented as parlor room tricks. Electrical "magicians", if you please, would sport their Wimshurst machines, Leyden jars, glass rods, rubber rods, silk cloths, pith balls, and fur to the delight of any audience. The truth was that these sleight of hand operators had no idea of what was really going on and probably didn't care. There were, however, dedicated science types who were investigating such matters and who made major advances in the understanding of the subject. Three of these men were Faraday, Henry, and Lenz.
Faraday is credited with the Law of Induction but Henry was hot on his heels. In the end Henry is the better known of the two since the unit of inductance, L, is in fact named for him: the henry. In counterpoint, the unit of capacitance, C, is the farad, named after Faraday, but few of us would likely make that connection.
Lenz and Faraday were in close competition to determine the direction of induced emf's but Lenz formulated the more succinct explanation or "law" hence it bears his name.
These are only a few of the many who replaced the parlor "wow" with a quantitative understanding of just what was going on. Those early days must have been exciting times. For a good read check out JJ Thomson's electronic charge to mass experiment and Millikan's oil drop experiment. Such cleverness and simplicity, the results of which affect our lives today.
If you are interested in early electrical apparatus and all things pertaining to radio then you must visit the world class radio museum in Bellingham, WA. Check this link to see just a portion of the equipment housed there.
www.californiahistoricalradio.com/photos43.html
Let's start this inductance saga with a description of two pieces of apparatus used by Faraday to ascertain his findings.
First we need to talk about the galvanometer. This is an indicating device which responds to electrical current, which we know as moving charge. In its simplest form it can be a magnetic compass in close association with a length of wire. This is a qualitative rather than a quantitative gadget. We just want it to tell us when charge is moving rather than how much charge is moving. There is an aspect of moving charge which I have bypassed to this point since it did not figure into the earlier series parts, but it is essential to our discussion now, so here goes. Stationary charge has only an electric field, in contrast to moving charge which has both a magnetic and an electric field. We know that magnetic fields can attract and repel depending on circumstances, and this behavior is the basis of the electric motor. Now imagine a wire alongside a magnetic compass. If, by any means, we cause an electrical current to flow in the wire then a magnetic field will form around the wire and interact with the compass (and the Earth's magnetic field) causing the compass to deflect. To a degree this is quantitative since larger currents will cause larger deflections.
Now, the first of Faraday's circuits is a simple loop of wire connected to a galvanometer which shows no deflection, hence no moving charge, or current, in the circuit. Now fetch a bar magnet and poke the north end through the loop of wire in a direction normal to the plane of the loop. The term "normal" signifies that the axis of the bar magnet is at 90 degrees to the plane of the loop of wire, or coil. As the bar magnet moves relative to the coil the galvanometer will deflect to one side. When the magnet stops relative to the coil then the deflection ceases. Withdraw the magnet and the galvanometer deflects in the opposite direction. The point here is that no relative motion equates with no deflection.
The second of Faraday's circuits is slightly more complicated. The first circuit as described above is augmented with a second circuit consisting of a loop of wire in series with a battery, resistor, and simple on/off switch. Arrange the two, distinct, circuits such that the wire loops are close, parallel, and coaxial. There is no physical, or electrical, connection between these two circuit coils. Closing the switch will initiate current flow in the second loop from zero to that current determined by the battery and resistor per Ohm's Law. This apparent step change of current is, in reality, a steep ramp from zero to maximum. An observer watching the galvanometer associated with the first loop will notice a deflection when the switch initiates the steep current ramp. When the current becomes steady the galvanometer shows no deflection. In like manner an observer will notice a deflection when the switch is opened and the current returns to zero. The point here is that steady, unchanging, current causes no galvanometer deflection.
From the perspective of circuit 1, the galvanometer and loop, it is impossible to tell if the galvanometer deflection is from a moving bar magnet or from a transient in the closely associated loop of circuit 2.
Faraday deduced from these two circuits that the common connection was changing magnetic flux, which we denote phi from the Greek alphabet. So, Faraday's Law of Magnetic Induction is given by emf = - delta phi divided by delta time. In other words, the generated emf is minus one times the change in magnetic flux divided by the change in time. It is not the magnitude of the current producing the flux that is important, rather how fast the current, and hence the flux, changes. Magnetic induction is the basis for rotating alternators such as the generating equipment at Grand Coulee.
Now, suppose that you have a coil of two loops instead of 1 loop. If the loops are tightly packed such that both see the same changing flux then the induced emf will be twice that of 1 loop. In fact this can be generalized to N loops where N can be any number and the Faraday Law of Induction becomes emf = - delta (N times phi) divided by delta time. The expression N, or number of loops, times phi, or flux, is known as the number of flux linkages.
The final historical note deals with Lenz's Law which states: The induced current will appear in such a direction that it opposes the change that produced it. So, in circuit 1 when you push a north pole into the loop the loop produces a north pole which opposes the motion. If you withdraw a north pole from the loop then the loop produces a south pole which opposes the withdrawal. Note that this is not true for open circuits... current must be flowing to observe this behavior.
So, what is inductance? It turns out that the number of flux linkages given above is actually equal to L times I where L is a constant of proportionality called inductance. Plugging this new information back into Faraday's Law yields the familiar emf = -L times the rate of current change. Induced voltage then depends on two things... the value of inductance and how fast the current through the inductance changes. The unit of inductance is the volt-sec per ampere. So 1 Henry = 1 volt-sec per ampere. One Henry is a large unit so more common and useful measures are millihenry and microhenry.
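For those who like to see numbers, here is a minimal Python sketch of the emf = -L times rate-of-current-change relationship; the inductance and current values are invented purely for illustration.

# emf = -L * (delta_I / delta_t); 1 henry equals 1 volt-second per ampere.
def inductor_emf(inductance_henries, delta_i_amperes, delta_t_seconds):
    """Average voltage induced across an inductor by a changing current."""
    return -inductance_henries * (delta_i_amperes / delta_t_seconds)

# Hypothetical example: the current through a 10 millihenry inductor ramps
# from 0 to 2 amperes in 1 millisecond.
print(inductor_emf(10e-3, 2.0, 1e-3))   # -20.0 volts; the sign reflects the opposition to the change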
In summary then... the effect of inductance is to stubbornly resist any change in circuit current. This can only happen provided that enough energy is stored in the magnetic field to manage the situation. One good example of this effect is the automobile ignition coil. When the breaker points are closed then a large current flows in the primary coil circuit. When the points open the stored energy in the magnetic field makes a valiant attempt to keep the current flowing by collapsing very rapidly. The very large number of secondary turns experience a rapid flux change with the result being a very high induced voltage at the spark plug. Be aware that this is a bit over simplistic since the capacitor across the points normally thought to only "protect" the points actually resonates with the primary to produce a much "fatter", hotter, and pink spark discharge. This is a subject for another time.
So far the discussion has been limited to transient changes where a bar magnet is momentarily pushed or a switch has been closed and the circuit goes from one steady state to another. The general case to consider is that of constantly changing current as produced by AC circuits driven by a sinusoidal source. We previously considered a capacitor in series with a resistor and how the capacitive reactance behaved in concert with a resistor. The same sort of behavior occurs when an inductor and resistor are in series and driven by an AC source. Whereas the capacitor stores energy in the form of an electric field, the inductor stores energy in the magnetic field. Whereas the reactance associated with the capacitor is inversely related to the driving frequency, the reactance associated with the inductor is directly related to the driving frequency. Whereas the net circuit phase angle for the capacitor is leading, the net phase angle for the inductor is lagging. The same right triangle geometry is used to calculate impedance in both cases... vertical axis representing reactance whether capacitive or inductive and horizontal axis representing resistance in both cases. The hypotenuse represents the impedance. Just be aware that inductive reactance is plotted in quadrant 1 and capacitive reactance is plotted in quadrant 4 in x/y space.
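To put numbers on that right-triangle picture, here is a small sketch for a series resistor-inductor circuit; the 50 ohm, 100 microhenry, and 1 MHz values are chosen only for illustration.

import math

def rl_impedance(resistance_ohms, inductance_henries, frequency_hz):
    """Return inductive reactance, impedance magnitude, and phase angle in degrees."""
    x_l = 2 * math.pi * frequency_hz * inductance_henries     # XsubL = 2 pi f L
    z = math.hypot(resistance_ohms, x_l)                      # hypotenuse of the R/X right triangle
    phase = math.degrees(math.atan2(x_l, resistance_ohms))    # voltage leads current by this angle
    return x_l, z, phase

# Hypothetical example: 50 ohms in series with 100 microhenries at 1 MHz.
print(rl_impedance(50.0, 100e-6, 1e6))   # roughly 628 ohms, 630 ohms, 85 degrees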
Given that inductive reactance increases with frequency and capacitive reactance decreases with frequency there is the possibility that at some frequency they may be equal in magnitude. I have not stressed the point that vectors are involved here so let me assert that XsubL, or inductive reactance, is represented by a vertical vector pointing up and that XsubC, or capacitive reactance, is represented by a vector pointing down. If the reactance values are equal in magnitude and opposite in sign then the sum is zero. At this special frequency the circuit is said to be resonant: the net reactance goes to zero and the impedance is purely resistive. This is an example of series resonance, where the capacitor and inductor are in series and the lowest circuit impedance occurs at resonance. Capacitors and inductors can also be connected in parallel fashion. Such an arrangement is frequently called a "tank", especially if associated with the plate of a vacuum tube. If you hear the expression "plate tank" then you will know that it is a parallel combination of capacitance and inductance. If the Q is high enough... which is to say the losses are low... then the parallel tank behaves mathematically much the same as the series case except that the circuit impedance is highest at resonance. Hence "dipping" the plate current by tuning the "tank" really boils down to maximizing the circuit impedance at a given frequency, which minimizes the plate current. Again, a subject for another time.
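And since series resonance was just mentioned, here is a one-function sketch that finds the frequency where the two reactances are equal in magnitude; it simply solves 2 pi f L = 1/(2 pi f C), and the L and C values are again just illustrative.

import math

def resonant_frequency(inductance_henries, capacitance_farads):
    """Frequency where inductive and capacitive reactance cancel (series resonance)."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_henries * capacitance_farads))

# Hypothetical example: 2.5 microhenries in series with 100 picofarads.
print(resonant_frequency(2.5e-6, 100e-12))   # about 10.07 MHz; here the net reactance is zero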
This concludes the set up for the discussion of reactance associated with inductance and marks the end of this impedance series. Are there any questions or comments?
Terminology
Resonance, series: The special frequency where the net reactance is zero and the circuit impedance is purely resistive and at its minimum.
Transient: A momentary perturbation of normally steady state conditions.
Radian: The angle subtended at the center of a circle by an arc whose length is equal to the radius.
Angular frequency: Radians per second given by 2pi times frequency in cycles per second. Hence 1 cycle per second is equal to 2pi radians per second.
The last challenge question answer:
If you have equal values of resistance and reactance what is the overall circuit phase angle? 45 degrees, since the resistance and reactance legs of the impedance triangle are equal in length.
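A one-line check of that answer, using nothing but the arctangent of two equal legs:

import math
# Equal resistance and reactance means the phase angle is atan(X/R) = atan(1) = 45 degrees.
print(math.degrees(math.atan2(1.0, 1.0)))   # 45.0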
This is N7KC for the Wednesday night Educational Radio Net.
This session is the 13th in the impedance series. Given that impedance is the combination of reactance and resistance and, further, that reactance is an alternating current phenomenon it is clear that we must have some elemental definitions under our belts to fully appreciate the subject. This multi-part narrative series has been an attempt to elevate participants to an intuitive level of electrical understanding without using any serious mathematics as well as provide some review for those of us who have not spent a lot of time on fundamentals lately.
Thus far we have talked about electrical current, voltage, resistance, Ohm's Law, power, DC or direct current, AC or alternating current, Joule's law, Kirchoff's 2 circuit laws, capacitance and capacitive reactance including the impedance of a resistor-capacitor combination. This 13th part of the series will look at inductance, inductive reactance, and end with the impedance of a resistor-inductor combination. This is the ending session of this particular series. All discussion material will be reviewed continually and be available on the blog.
In future sessions I will discuss series resonance which is a natural progression of the subject material thus far. Then parallel resonance followed by amplifiers then oscillators. Stay tuned.
Let's review what has been covered up to this point in the series.
Part 1 developed the idea of electrical current consisting of moving charge and defined the ampere as 1 coulomb of charge moving past a fixed point in 1 second. One coulomb was defined as a collection of charge numbering 6.24 x 10^18 electrons.
Part 2 developed the notion of mechanical "work" and considered objects at different "potential" levels in a gravitational field. The concept of "voltage", also known as electrical potential difference, and the relationship of voltage to current follows closely with the idea of a mechanical weight being moved between different levels. In both cases work is being done and energy is being manipulated in various ways.
Part 3 capitalized on Bob's lightning series to review electrical current in the context of a charged cloud redistributing charge in the form of lightning where modest amounts of charge make a large impression if moved rapidly.
Part 4 developed the notion of potential difference and ended with a definition of voltage. If you move 1 coulomb of charge from point A to point B in an electric field such that 1 joule of work is done then the potential difference between points A and B is defined as 1 volt. Another way to state this is that 1 joule of energy is required to push 1 coulomb through a potential difference of 1 volt.
Part 5 developed the notion of power by using a mechanical analogy. Power is the relationship between energy and time. Specifically power is the change in energy as in work done divided by the change in time to do the work. Conversely, energy is power multiplied by time.
Part 6 developed the notion of resistance by using a simple circuit to compare how well various materials conduct electrical current. We looked at a simple series circuit with fixed voltage, one D cell battery, a fuse, an ammeter, a switch, and a pair of DUT terminals as in Device Under Test. Substituting various materials across the DUT terminals yielded different measurements on the ammeter and we ranked these materials based upon their "conductance". Finally, we learned that resistance and conductance are reciprocals and that high conductance equals low resistance and vice versa.
Part 7 developed the notion of Ohm's Law by using a simple series circuit to illustrate the relationship of voltage, current, and resistance. Ohm's Law states that electrical current through a resistive device is directly proportional to the voltage across the device so, for example, doubling the voltage across the device will double the current through the device. This relationship stated in math terms is I (which is the symbol for current) equals E (the symbol for voltage) divided by R (the symbol for resistance).
Part 8 developed the notion of direct current and alternating current by using a sand filled tube with a scribed fiducial mark. By assuming that the sand particles represented electrons we could watch the action at the mark and deduce if the current, or moving electrons, was AC or DC.
Part 9 contrasted direct current and sinusoidal alternating current by measuring the temperature of a resistor when subjected to the same maximum voltage from each waveform. The conclusion was that equal values of DC voltage and AC rms voltage, if impressed across a resistor in turn, will produce the same heating effect, or work, in that resistor hence are equivalent. Heat produced as a consequence of current through a resistance is called Joule heating. Energy losses such as this are sometimes called Johnson losses as well.
Part 10 reviewed Ohm's law and restated the concepts from part 9 in a manner called Joule's law wherein energy is associated with time to define power and a variable substitution from Ohm's law produces the familiar P = (E^2)/R formulation. Additionally, the very important Kirchoff's voltage and current laws were introduced.
Part 11 introduced the concept of capacitance. The relationship of charge denoted by symbol q, capacitance denoted by symbol C, and voltage denoted by symbol V is simply q=CV. The unit of capacitance is the farad which is defined as 1 coulomb per volt. Given that one farad is a very large unit we normally express capacitance by micro-farads, nano-farads, or pico-farads.
Part 12 introduced the concept of capacitive reactance by combining circuit resistance with capacitive reactance to form impedance which represents the total opposition to electrical current flow and which is denoted by the symbol Z. We found that capacitive reactance is not present in purely DC circuits with unchanging currents rather it is an AC phenomenon and that the magnitude of reactance is inversely related to the AC frequency and considered to be negative in the sense that it is plotted in quadrant 4 of the typical x/y presentation. Additionally we found that the magnitude of impedance is graphically shown by the length of the hypotenuse of a right triangle when resistance represents the base and reactance represents the height of this triangle. While current and voltage are perfectly "in phase"... meaning "in time"... with one another in a resistor we found that current and voltage are in quadrature or 1/4 cycle out of phase in the perfect capacitor and that the resulting overall circuit phase angle is a combination of the two. Since voltage leads the current by the phase angle we found that capacitors always produce a leading angle. We found that the resistor gets warm whereas the perfect capacitor does not indicating that energy is transformed to heat in the resistor but stored in the capacitor electric field to be returned to the circuit in the following cycle.
Part 13, tonight's final edition, will introduce the concept of inductance with symbol L, inductive reactance with symbol XsubL, and the impedance of a resistor-inductor pair which is denoted by the symbol Z. This is a long segment with challenging ideas so just close your eyes and listen carefully to maximize the experience. Then go to the blog tomorrow and read it again. All series parts are available on the blog for review at anytime.
Ok, let's continue with the very sophisticated idea of electrical inductance and associated phenomenon. I lean heavily upon information contained in my favorite tome "Physics for students of Science and Engineering" by Halliday and Resnick. Mathematically this book may be a challenge for many so I have plucked the essential ideas and attempted to construct a narrative which is easy to follow.
First a bit of history is in order. Prior to the 1800's electrical phenomenon was presented as a parlor room trick. Electrical "magicians" if you please would sport their Wimshurst machines, Leyden jars, glass rods, rubber rods, silk cloths, pith balls, and fur to the delight of any audience. The truth was that these slight of hand operators had no idea of what was really going on and probably didn't care. There were, however, dedicated science types who were investigating such matters and who made major advances in the understanding of the subject. Three of these men were Faraday, Henry, and Lenz.
Faraday is credited with the Law on Induction but Henry was hot on his heels. In the end Henry is the better known since the unit of inductance, L, is in fact his name. In counter point the unit of capacitance, C, is the farad after Faraday but few of us would likely make that connection.
Lenz and Faraday were in close competition to determine the direction of induced emf's but Lenz formulated the more succinct explanation or "law" hence it bears his name.
These are only a few of the many who replaced the parlor "wow" with a quantitative understanding of just what was going on. Those early days must have been exciting times. For a good read check out JJ Thomson's electronic charge to mass experiment and Millikan's oil drop experiment. Such cleverness and simplicity, the results of which affect our lives today.
If you are interested in early electrical apparatus and all things pertaining to radio the you must visit the world class radio museum in Bellingham, WA. Check this link to see just a portion of the equipment housed there.
www.californiahistoricalradio.com/photos43.html
Let's start this inductance saga with a description of two pieces of apparatus used by Faraday to ascertain his findings.
First we need to talk about the galvanometer. This is an indicating device which responds to electrical current which we know as moving charge. In its simplest form it can be a magnetic compass in close association with a length of wire. This is a qualitative rather than a quantitative gadget. We just want it to tell us when charge is moving rather than how much charge is moving. There is an aspect of moving charge which I have bypassed to this point since it did not figure into the earlier series parts but it is essential to our discussion now so here goes. Stationary charge only has an electric field in contrast to moving charge which has both a magnetic and electric field. We know that magnetic fields can attract and repel depending on circumstances and which is the basis of the electric motor. Now imagine a wire alongside of a magnetic compass. If, by any means, we cause an electrical current to flow in the wire then a magnetic field will form around the wire and interact with the compass (and the Earth's magnetic field) causing the compass to deflect. To a degree this is quantitative since larger currents will cause larger deflections.
Now, the first of Faraday's circuits is a simple loop of wire connected to a galvanometer which shows no deflection hence no moving charge, or current, in the circuit. Now fetch a bar magnet and poke the north end through the loop of wire in a direction normal to the plane of the loop. The term "normal" signifies that all angles around the bar magnet are 90 degrees to the plane of the loop of wire or coil. As the bar magnet moves relative to the coil the galvanometer will deflect to one side. When the magnet stops relative to the coil then the deflection ceases. Withdraw the magnet and the galvanometer deflects in the opposite direction. The point here is that no relative motion equates with no deflection.
The second of Faraday's circuits is slightly more complicated. The first circuit as described above is augmented with a second circuit consisting of a loop of wire in series with a battery, resistor, and simple on/off switch. Arrange the two, distinct, circuits such that the wire loops are close, parallel, and coaxial. There is no physical, or electrical, connection between these two circuit coils. Closing the switch will initiate current flow in the second loop from zero to that current determined by the battery and resistor per Ohm's Law. This apparent step change of current is, in reality, a steep ramp from zero to maximum. An observer watching the galvanometer associated with the first loop will notice a deflection when the switch initiates the steep current ramp. When the current becomes steady the galvanometer shows no deflection. In like manner an observer will notice a deflection when the switch is opened and the current returns to zero. The point here is that steady, unchanging, current causes no galvanometer deflection.
From the perspective of circuit 1, the galvanometer and loop, it is impossible to tell if the galvanometer deflection is from a moving bar magnet or from a transient in the closely associated loop of circuit 2.
Faraday deduced from these two circuits that the common connection was changing magnetic flux which we call phi from the Greek alphabet. So, Faraday's Law of Magnetic Induction is given by emf = - delta phi divided by delta time. In other words the generated emf is minus 1 times the changing magnetic flux divided by the time. It is not the magnitude of the current producing the flux that is important rather how fast the current hence the flux changes. Magnetic induction is the basis for rotating alternators as in the generating equipment at Grand Coulee.
Now, suppose that you have a coil of two loops instead of 1 loop. If the loops are tightly packed such that both see the same changing flux then the induced emf will be twice that of 1 loop. In fact this can be generalized to N loops where N can be any number and the Faraday Law of Induction becomes emf = - delta (N times phi) divided by delta time. The expression N, or number of loops, times phi, or flux, is known as the number of flux linkages.
The final historical note deals with Lenz's Law which states: The induced current will appear in such a direction that it opposes the change that produced it. So, in circuit 1 when you push a north pole into the loop the loop produces a north pole which opposes the motion. If you withdraw a north pole from the loop then the loop produces a south pole which opposes the withdrawal. Note that this is not true for open circuits... current must be flowing to observe this behavior.
So, what is inductance? It turns out that the number of flux linkages given above is actually equal to L times I where L is a constant of proportionality called inductance. Plugging this new information back into Faraday's Law yields the familiar emf = -L times the rate of current change. Induced voltage then depends on two things... the value of inductance and how fast the current through the inductance changes. The unit of inductance is the volt-sec per ampere. So 1 Henry = 1 volt-sec per ampere. One Henry is a large unit so more common and useful measures are millihenry and microhenry.
In summary then... the effect of inductance is to stubbornly resist any change in circuit current. This can only happen provided that enough energy is stored in the magnetic field to manage the situation. One good example of this effect is the automobile ignition coil. When the breaker points are closed a large current flows in the primary coil circuit. When the points open, the energy stored in the magnetic field makes a valiant attempt to keep the current flowing as the field collapses very rapidly. The very large number of secondary turns experiences a rapid flux change, with the result being a very high induced voltage at the spark plug. Be aware that this is a bit oversimplified since the capacitor across the points, normally thought only to "protect" the points, actually resonates with the primary to produce a much "fatter", hotter, pink spark discharge. That is a subject for another time.
So far the discussion has been limited to transient changes where a bar magnet is momentarily pushed or a switch is closed and the circuit goes from one steady state to another. The general case to consider is that of constantly changing current as produced by AC circuits driven by a sinusoidal source. We previously considered a capacitor in series with a resistor and how the capacitive reactance behaved in concert with the resistance. The same sort of behavior occurs when an inductor and resistor are in series and driven by an AC source. Whereas the capacitor stores energy in an electric field, the inductor stores energy in a magnetic field. Whereas the reactance associated with the capacitor is inversely related to the driving frequency, the reactance associated with the inductor is directly related to the driving frequency. Whereas the net circuit phase angle for the capacitor is leading, the net phase angle for the inductor is lagging. The same right triangle geometry is used to calculate impedance in both cases... the vertical axis representing reactance, whether capacitive or inductive, and the horizontal axis representing resistance. The hypotenuse represents the impedance. Just be aware that inductive reactance is plotted in quadrant 1 and capacitive reactance is plotted in quadrant 4 in x/y space.
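The impedance triangle arithmetic for a series resistor and inductor can be sketched in a few lines of Python; the frequency, inductance, and resistance below are assumed values chosen only for illustration:

import math

# Illustrative numbers only: series R-L circuit driven by a sinusoidal source.
# Inductive reactance X_L = 2 * pi * f * L; impedance is the hypotenuse of the
# right triangle; the phase angle is the arctangent of X_L / R (lagging).
f = 1000.0                 # source frequency, hertz (assumed)
L = 10e-3                  # inductance, henries (assumed)
R = 50.0                   # series resistance, ohms (assumed)

X_L = 2 * math.pi * f * L                   # about 62.8 ohms
Z = math.sqrt(R**2 + X_L**2)                # about 80.3 ohms
phase = math.degrees(math.atan2(X_L, R))    # about 51.5 degrees, current lags

print(X_L, Z, phase)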
Given that inductive reactance increases with frequency and capacitive reactance decreases with frequency, there is the possibility that at some frequency they may be equal in magnitude. I have not stressed the point that vectors are involved here, so let me assert that XsubL, or inductive reactance, is represented by a vertical vector pointing up and that XsubC, or capacitive reactance, is represented by a vector pointing down. If the reactance values are equal in magnitude and opposite in sign then the sum is zero. At this special frequency the circuit is said to be resonant, the net reactance goes to zero, and the impedance is purely resistive. This is an example of series resonance, where the capacitor and inductor are in series and the lowest circuit impedance occurs at resonance. Capacitors and inductors can also be connected in parallel fashion. Such an arrangement is frequently called a "tank", especially if associated with the plate of a vacuum tube. If you hear the expression "plate tank" then you will know that it is a parallel combination of capacitance and inductance. If the Q is high enough... that is, if the losses are low... then the parallel tank behaves mathematically much the same as the series circuit except that the circuit impedance is highest at resonance. Hence "dipping" the plate current by tuning the "tank" really boils down to maximizing the circuit impedance at a given frequency, which minimizes the plate current. Again, a subject for another time.
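A short Python sketch, again with assumed component values, shows how setting the two reactances equal leads to the series resonant frequency:

import math

# Illustrative numbers only: find the resonant frequency where X_L equals X_C.
# Setting 2*pi*f*L = 1 / (2*pi*f*C) and solving gives f = 1 / (2*pi*sqrt(L*C)).
L = 10e-6      # inductance, henries (10 microhenry, assumed)
C = 100e-12    # capacitance, farads (100 picofarad, assumed)

f_res = 1.0 / (2 * math.pi * math.sqrt(L * C))
X_L = 2 * math.pi * f_res * L
X_C = 1.0 / (2 * math.pi * f_res * C)

print(f_res)        # about 5.03 MHz
print(X_L, X_C)     # equal magnitudes, about 316 ohms each, so they cancel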
This concludes the set up for the discussion of reactance associated with inductance and marks the end of this impedance series. Are there any questions or comments?
Terminology
Resonance, series: The special frequency where the net reactance is zero and the circuit impedance is purely resistive and at its minimum.
Transient: A momentary perturbation of normally steady state conditions.
Radian: The angle subtended at the center of a circle when the arc length along the circumference is equal to the circle radius.
Angular frequency: Radians per second given by 2pi times frequency in cycles per second. Hence 1 cycle per second is equal to 2pi radians per second.
The last challenge question answer:
If you have equal values of resistance and reactance what is the overall circuit phase angle? 45 degrees, since the resistance and reactance legs of the impedance triangle are equal and the geometric figure is a square whose diagonal splits the right angle in half.
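As a quick Python check, with arbitrary but equal values assumed for illustration:

import math

# Equal resistance and reactance legs give a phase angle of arctan(1) = 45 degrees.
R = 100.0     # ohms (any value works as long as the two are equal)
X = 100.0     # ohms

print(math.degrees(math.atan2(X, R)))   # 45.0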
This is N7KC for the Wednesday night Educational Radio Net.