Most of us use some form of electric power to run our trains. Whether it comes from a battery or the track, we use one or more small DC motors to convert the electrical power to mechanical work to run the trains. The subject of this page is the care and feeding of these motors.
I was going to write a short tutorial on small DC motors, that is, until I found an excellent one already on the net. MicroMo, a major manufacturer of motors, has posted a tutorial at Small DC Motors. If you are interested in the operation and construction of small DC motors, I strongly recommend that you read those pages.
There are hundreds of types of DC motors, ranging from micropower coreless types to traction motors that generate hundreds or thousands of horsepower. The two most common types used in large scale trains are the coreless motor and the conventional iron core can motor.
The can motor gets its name from the formed steel can it is built in. These are permanent magnet DC motors and usually come with the armature wound on an iron core. They are capable of generating an amazing amount of power in a small physical size. Their most important attributes are that they are rugged and cheap. Can motors are usually not designed to be disassembled and repaired. When a motor burns out or wears out, it is cheap enough to simply toss out and replace. Most of the motors found in Large Scale locos are iron core can motors.
Coreless motors get their name from the fact that there is no iron core in the armature. Instead of being supported by an iron core, the windings are held together in a rigid structure by a thermosetting plastic. The advantage of the coreless motor is that it can be made smaller and lighter than a cored design, although it may still look like a can motor. Coreless motors tend to be more efficient than cored motors because they don't have eddy current losses and there are no "dead ends" on the winding loops that contribute resistance but no torque. The downside, and it's a major one, is that the coreless motor doesn't have the thermal stability of a cored motor with its large iron heat sink. When the motor is abused, it will heat very rapidly, and when it gets hot enough to break down the adhesives holding it together, the armature will simply come apart. Coreless motors are found on some of the smaller Large Scale locos, such as the LGB Chloe. If treated properly, these motors will last a long time; if overloaded for even a short time, they are history.
Beyond the basic can construction, there are other characteristics that may be of interest. The number of "poles" on the motor determines to some extent the smoothness of the motor operation. Can motors typically come in 3, 5 or 7 pole configurations. The more poles there are, the smoother the motor will run. More poles also result in more expensive construction, and a secondary impact is that a motor with more poles might be a little less efficient. Typical R/C car motors have 3 poles, smooth operation at low speed not being a factor. Instead, lots of room is desired to allow big wires to be wound around the pole pieces to allow the generation of lots of torque. More poles leave less room for windings. Many Large Scale can motors are 5 pole types. These have much smoother torque characteristics than 3 pole motors, but have less room for windings. This is OK for the power requirement of Large Scale trains. Many high quality motors, such as those found in LGB engines, are 7 pole motors and they are smoother yet, although the difference between 5 and 7 poles is not large. Motors with any number of poles might also be "skew wound," that is, the gaps between the armature poles are slanted with respect to the fixed pole pieces. This tends to make a motor more expensive, but also reduces the tendency of the motor to "cog" or have a preferred angular position. Skew winding allows a motor to run smoother at very low speeds.
If you are really interested in the internal workings of a DC motor, then read the tutorial at MicroMo. However, this is the quick and dirty version. A small DC motor generates torque by creating an interaction between a fixed and a rotating magnetic field. The fixed field is supplied by high energy permanent magnets. The rotating field is created by passing a DC current through several different windings on the armature (the rotating part) and timing which winding is powered through a device called a commutator. Power is applied to the armature by brushes which ride on the commutator.
To understand how a motor responds to load, the motor itself is modeled by dividing it up into three major components. These components are the ideal motor, a back-EMF generator and parasitic resistance. These parts are really not physically separable, but for modeling purposes this is convenient.
The ideal motor is what is left after all the bad stuff is taken out. This motor would be lossless and would run at 100% efficiency at any input voltage. The parasitic resistance is the bad stuff: winding resistance, brush resistance and the like amount to a resistor in series with the ideal motor.
The back-EMF generator is not a bad thing, as a matter of fact, it is highly desirable. Any DC motor can be turned into a DC generator simply by turning the shaft. When a motor turns due to applied voltage, it also generates back-EMF which is arranged to oppose the applied voltage. If the parasitic resistance was zero and the motor had no load or frictional losses, the back-EMF generated would equal the applied voltage at any speed. This would cause the motor current to be zero and result in a classical perpetual motion machine.
Since parasitic resistance and frictional losses are not zero, the real motor doesn't turn quite as fast as the ideal motor would. This makes the back-EMF somewhat less than the applied voltage. The difference between the applied voltage and the back-EMF is the net voltage that the motor actually sees. As the mechanical load on the motor is increased and the motor slows down, the back-EMF is reduced and the net voltage that the motor sees increases. It turns out that the actual motor current is determined by the difference between the applied voltage and the back-EMF, divided by the parasitic resistance. When the motor speed is zero and therefore the back-EMF is zero (motor stalled), the only thing that controls the current is the parasitic resistance. Since the parasitic resistance is small, the resultant current is much higher than it would be under normal running conditions.
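The current relationship described above can be sketched in a few lines of code. This is a minimal model, not a measured motor: the motor constant K_EMF is an assumed illustrative value, and the 7 ohm figure is the parasitic resistance of the typical can motor discussed in this article.

```python
# Minimal sketch of the series motor model: ideal motor + back-EMF
# generator + parasitic resistance. Values are illustrative assumptions.

R_PARASITIC = 7.0   # ohms, parasitic resistance of a typical can motor
K_EMF = 0.01        # volts of back-EMF per RPM, assumed motor constant

def motor_current(applied_volts, speed_rpm):
    """Current = (applied voltage - back-EMF) / parasitic resistance."""
    back_emf = K_EMF * speed_rpm
    return (applied_volts - back_emf) / R_PARASITIC

# Lightly loaded: the motor spins fast, back-EMF nearly cancels the supply.
print(motor_current(12.0, 1100))   # (12 - 11) / 7, about 0.14 A
# Stalled: back-EMF is zero, only the 7 ohm resistance limits the current.
print(motor_current(12.0, 0))      # 12 / 7, about 1.7 A
```

The stalled case shows why stall current is so much higher than running current: nothing is left to oppose the applied voltage but the small parasitic resistance.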
This graph shows the relationship between motor current and load. At no load, the motor is able to turn quite fast allowing the back-EMF to almost equal the applied voltage. The motor draws a low current that is not very sensitive to applied voltage because as applied voltage increases and the motor speed increases, the back-EMF increases too. As the load is increased and the motor slows, the motor draws more current at any applied voltage. In this test case, I cannot be sure that the mechanical load applied to the motor was constant at all speeds, but the graph still serves to show that increasing load causes increasing current. At maximum load, when the motor is stalled, the current in the motor is controlled only by the parasitic resistance, 7 ohms for this typical can motor. The resistor line shows what a 7 ohm resistor would do under the same conditions.
The effect of back-EMF is to regulate motor speed. As the motor is unloaded and its speed is allowed to increase, eventually it turns fast enough that the back-EMF nearly bucks the applied voltage; the motor current, and therefore the torque, drops off and the motor is prevented from turning faster. If a load is applied that slows the motor, the back-EMF is reduced and more current flows, increasing the torque to better handle the load.
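This self-regulation can be illustrated with a small steady-state calculation. In SI units the same motor constant relates back-EMF to speed and torque to current, so at steady state the motor torque K·I must equal the load torque. The motor constant and load torques below are assumed, illustrative values, not measurements:

```python
# Steady-state speed of the model motor under increasing load torque:
# heavier load -> lower speed -> less back-EMF -> more current and torque.
# All constants are illustrative assumptions.

K = 0.02   # V per rad/s of back-EMF, and N*m per A of torque (SI units)
R = 7.0    # ohms, parasitic resistance

def steady_speed(applied_volts, load_torque):
    """Solve K * (V - K*w) / R = load_torque for the steady speed w."""
    return (applied_volts - load_torque * R / K) / K

for load in (0.0, 0.005, 0.010):   # load torque in N*m
    w = steady_speed(12.0, load)
    i = (12.0 - K * w) / R
    print(f"load={load:.3f} N*m  speed={w:6.1f} rad/s  current={i:.2f} A")
```

Doubling the load torque doubles the current draw but only drops the speed modestly, which is the self-regulating behavior described above.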
As the motor draws more current, it heats up more. Heat is the chief killer of all things electronic and electromechanical. The moral here is that if you load up your engines, they will draw more current and the motors will run hotter. At some point, the motor temperature may get hot enough to unsolder the connections inside the motor, burn off the insulation on the windings or burn the oil lubricating the shafts. These are bad things.
Another cause of rapid motor failure is if the magnets reach their Curie temperature. This is the temperature at which the magnets lose their magnetic field. Without a functioning field magnet, the motor will not be able to generate torque and it will stall. The current will increase rapidly and the motor will physically burn up. If you see your engines suddenly slow down, shut off the power immediately and give them a rest. If the motor hasn't been physically destroyed, it may recover when it cools down.
I'm not sure of the reason, but coreless motors do not work very well in combination with Aristo's Pulse Width Control. Perhaps the reduced inductance of the windings, due to the lack of an iron core, allows too much current to flow during the pulses when the motor is running at low speed and causes the motor to overheat. In any event, if you use an engine with a coreless motor, you should not use PWC.
Selection of the "proper" motor to use in any given application is pretty complicated. There are lots of parameters to be considered, and many of them are contradictory. For example, USAT has selected a very large, very low stall resistance and high inertia motor to use in many of their locomotives. IMHO, these motors are "sub-optimized" for use on DC track powered layouts. The upside of this selection is that the motors can draw enough current at low motor speed to produce some serious torque. They start smoothly under high loads at very low track voltages. They also produce a lot of power, more than the locos actually need, and are in little danger of overheating due to being overloaded. One downside is that the motors draw more current than needed at all voltages. Another, more serious, downside is that the motors are less suited to pulse width control as implemented in virtually all DCC decoders and radio control receivers. The motors also have lots of rotational inertia, due to the large armature, and they spool up slowly. The low stall resistance results in very high current pulses when the motor is just being started AND the high inertia extends the time during which the motor is running slowly, thus extending the time that the motor is not generating any BEMF and the current pulses are high. This is why some DCC decoder and radio control receiver manufacturers specifically EXCLUDE the use of their equipment on USAT locos.
It is possible to take advantage of the back-EMF generated by a motor to help hold train speed constant. Since the value of the back-EMF is determined by the speed of a motor which is in turn determined by the speed of a train, a power source that detects and regulates back-EMF will tend to hold a fairly constant train speed under varying load, such as climbing and descending grades.
I believe that this is the system used by MRC power packs that claim to have the "PTC" or Positive Tracking Control feature. PTC does appear to work. I use an MRC power pack and an Aristo Train Engineer on my indoor layout and the MRC pack does control train speeds better, especially on the downgrades. Some DCC decoders also use back-EMF detection.
In principle, back-EMF detection should be fairly easy. All that is needed is to interrupt power to the track for a short time and then sense the voltage placed on the track by the spinning motors. The power pack then tries to keep the back-EMF constant by automatically adjusting the track voltage.
Many DCC decoders and some radio control receivers use BEMF regulation to control the speed of a motor as the load varies. The motor controller can sense the BEMF by quickly interrupting the voltage to the motor and then looking to see what voltage the motor is making. This has to happen fast or the motor will slow down. It also has to happen relatively infrequently (maybe 100 times a second or so) or the loss of motor voltage will result in less power being delivered to the motor. With some "silent running" motor controllers, the period of the BEMF detection can be heard as a low hum even though the decoder is advertised as "silent." This hum will usually go away if BEMF control is turned off.
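The interrupt-and-sample scheme described above might be sketched like this. The drive_off, drive_on and read_adc_volts functions are hypothetical stand-ins for real motor driver hardware, not any actual decoder API, and the timing constants are illustrative assumptions:

```python
# Hypothetical sketch of BEMF sampling: briefly remove drive from the
# motor, let the inductive current decay, read the voltage the spinning
# motor generates, then restore drive before the motor slows noticeably.

import time

SAMPLE_HZ = 100      # roughly 100 samples per second, per the text
SETTLE_S = 0.001     # assumed settling time for the inductive spike

def sample_bemf(drive_off, drive_on, read_adc_volts):
    """One BEMF measurement; the hardware stubs are caller-supplied."""
    drive_off()              # interrupt power to the motor
    time.sleep(SETTLE_S)     # wait out the inductive transient
    bemf = read_adc_volts()  # motor is now acting as a generator
    drive_on()               # restore drive quickly
    return bemf
```

Sampling too often steals drive time from the motor; sampling too slowly lets the speed drift between corrections, which is the trade-off the text describes.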
BEMF control is implemented with a classic control loop and it follows classical control loop theory which is described by some pretty straightforward mathematics. Since the detection of the BEMF signal has some latency due to the low sampling rate of the generated BEMF and DC motors have rotating inertia that slows their response to changes in applied voltage, these control loops have to operate slowly and they have to be compensated to prevent loop instability.
There are typically three parameters that a BEMF control loop uses to determine its response. In a typical DCC decoder, these are usually user selectable in the form of Configuration Variables (CVs). Each manufacturer names them differently and can put them in different CVs because BEMF control is a "feature" of a decoder, NOT a defined part of the DCC standard.
The three important parameters are:
Target BEMF. The target BEMF is the actual amount of BEMF that is expected at full throttle. It is characterized in volts and may typically be 75 to 85% of the applied voltage, depending on the motor efficiency. The decoder will scale the target value at lower throttle settings. If the target BEMF parameter in the decoder is set too low, the motor will not reach full speed. If the target value is too high, then the motor will top out BEFORE full throttle is reached. The plan is to set the target so that the loco reaches its maximum speed capability just when the throttle is set to maximum. In classical control theory, this is called the "reference."
Loop "Tightness". The control loop needs to know how important absolute speed control is. In classical control loop theory, this is the "gain" of the loop. With the gain set very high, the decoder will try very hard to hold the BEMF right at its target value. This may produce "hard" speed characteristics where the speed abruptly changes from one value to the next as the throttle is turned. If two locos are MU'd, both with "hard" loops, then each loco will try to tightly regulate speed. It is not likely that the two locos will have selected exactly the same speed, and the locos will tend to buck each other. When BEMF is used in an MU environment, it is best to set one loco with BEMF running and turn BEMF off in the others. If the gain of the loop is turned down, the decoder will be less picky about regulating to the exact target BEMF. This will allow the loco speed to change more gracefully. Generally, the lowest gain setting that provides acceptable performance is the right one.
Loop Response Time. The third parameter controls the RATE at which the loop responds. If the loop response is too quick, the loop will be "under compensated" in classical control loop terms, and speed changes may overshoot and ring, resulting in oscillating changes in speed. In the worst case, the loop may become actually unstable and the speed may change rapidly and continuously. This parameter is usually adjusted by giving the loco a step change command in speed of maybe 10% of full throttle and watching the response. If the response is too quick, the loco will not settle smoothly to the new speed and may jerk some. If the loop response is too slow, the loco may take its own good time to settle to the new speed. The correct response is a smooth transition between speeds with no obvious overshoot of the desired speed.
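The three parameters above map onto a classical proportional-integral control loop: the target BEMF is the reference, "tightness" is the proportional gain, and the response time appears as the integrator rate. This sketch is an illustration of that mapping, not any manufacturer's actual implementation; all names and values are assumptions.

```python
# Illustrative PI-style BEMF loop built from the three decoder parameters.
# Not a real decoder algorithm; names and constants are assumed.

class BemfLoop:
    def __init__(self, target_bemf, gain, response):
        self.target = target_bemf  # reference: expected BEMF at full throttle
        self.gain = gain           # loop "tightness" (proportional gain)
        self.response = response   # loop rate: smaller means a slower loop
        self.integral = 0.0

    def update(self, throttle, measured_bemf):
        """Return a drive-voltage correction from one BEMF sample."""
        reference = self.target * throttle      # scale target with throttle
        error = reference - measured_bemf
        self.integral += self.response * error  # slow trim toward zero error
        return self.gain * error + self.integral

loop = BemfLoop(target_bemf=16.0, gain=0.5, response=0.05)
# At half throttle the reference is 8 V; a measured 7 V gives a 1 V error.
correction = loop.update(throttle=0.5, measured_bemf=7.0)
```

Raising gain makes the loop "harder," and raising response makes it correct faster at the risk of overshoot and ringing, matching the behavior described for the decoder CVs.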
While BEMF control can regulate locomotive speeds, there are practical limits. BEMF works best at the low speed end. Its biggest impact is to maintain a low engine speed. The second best place it works is to regulate overspeed when going downgrade. A BEMF decoder can help prevent runaways with locos that tend to speed up going downhill. BEMF is typically the least helpful in regulating loaded or uphill speed. Highly loaded locomotive performance is usually limited by the motor itself. If a loco has the inherent capability to go very fast under high loads, then it probably doesn't need BEMF to regulate its loaded speed; the inherent speed regulation characteristics of a loaded motor will do that anyway. BEMF is at its worst with MU. In this case, either BEMF should be turned off on all but one loco, or the gain of EACH loco's BEMF loop should be turned down.
©1998-2009 George Schreyer
Created May 16, 1998
Last Updated January 21, 2009