AEROELASTICITY A FIELD IN VAST VIBRATIONAL TECHNOLOGY

Aeroelasticity

Aeroelasticity is the science that studies the interaction among inertial, elastic, and aerodynamic forces. It was defined by Collar in 1947 as "the study of the mutual interaction that takes place within the triangle of the inertial, elastic, and aerodynamic forces acting on structural members exposed to an airstream, and the influence of this study on design."

Introduction

Airplane structures are not completely rigid, and aeroelastic phenomena arise when structural deformations induce changes in the aerodynamic forces. The additional aerodynamic forces increase the structural deformations, which in turn lead to greater aerodynamic forces in a feedback process. These interactions may diminish until a condition of equilibrium is reached, or may diverge catastrophically.

Aeroelasticity can be divided into two fields of study: steady and dynamic aeroelasticity.

Steady aeroelasticity

Steady aeroelasticity studies the interaction between aerodynamic and elastic forces on an elastic structure. Mass properties are not significant in calculations of this type of phenomenon.

Divergence

Divergence occurs when a lifting surface deflects under aerodynamic load in a way that increases the applied load, or moves the load so that the twisting effect on the structure is increased. The increased load deflects the structure further, eventually driving the structure to its limit loads (and to failure).

Control surface reversal

Main article: Control reversal

Control surface reversal is the loss (or reversal) of the expected response of a control surface, due to structural deformation of the main lifting surface.

Dynamic aeroelasticity

Dynamic aeroelasticity studies the interactions among aerodynamic, elastic, and inertial forces. Examples of dynamic aeroelastic phenomena are:

Flutter

Flutter is a self-excited and potentially destructive vibration in which aerodynamic forces on an object couple with a structure's natural mode of vibration to produce rapid periodic motion. Flutter can occur in any object within a strong fluid flow, under the condition that a positive feedback occurs between the structure's natural vibration and the aerodynamic forces: the vibrational movement of the object increases an aerodynamic load, which in turn drives the object to move further. If the energy input during the period of aerodynamic excitation is larger than the natural damping of the system, the level of vibration will increase. The vibration levels can thus build up and are limited only when the aerodynamic or mechanical damping of the object matches the energy input; this often results in large amplitudes and can lead to rapid failure. Because of this, structures exposed to aerodynamic forces - including wings and aerofoils, but also chimneys and bridges - are designed carefully within known parameters to avoid flutter. In complex structures where neither the aerodynamics nor the mechanical properties of the structure are fully understood, flutter can be ruled out only through detailed testing. Even changing the mass distribution of an aircraft or the stiffness of one component can induce flutter in an apparently unrelated aerodynamic component. At its mildest this can appear as a "buzz" in the aircraft structure, but at its most violent it can develop uncontrollably with great speed and cause serious damage to, or the destruction of, the aircraft. Flutter can also occur on structures other than aircraft. One famous example of the flutter phenomenon is the collapse of the original Tacoma Narrows Bridge.

Dynamic response

Dynamic response or forced response is the response of an object to changes in a fluid flow, such as the response of an aircraft to gusts and other external atmospheric disturbances. Forced response is a concern in axial compressor and gas turbine design, where one set of aerofoils passes through the wakes of the aerofoils upstream.

Buffeting

Buffeting is a high-frequency instability caused by airflow separation or shock wave oscillations from one object striking another. It is a random forced vibration.

Other fields of study

Other fields of physics may influence aeroelastic phenomena. For example, in aerospace vehicles, stress induced by high temperatures is important; this leads to the study of aerothermoelasticity. In other situations, the dynamics of the control system may affect aeroelastic phenomena; this is called aeroservoelasticity.

Prediction and cure

Aeroelasticity involves not just the external aerodynamic loads and the way they change, but also the structural, damping, and mass characteristics of the aircraft. Prediction involves making a mathematical model of the aircraft as a series of masses connected by springs and dampers which are tuned to represent the dynamic characteristics of the aircraft structure. The model also includes details of applied aerodynamic forces and how they vary. The model can be used to predict the flutter margin and, if necessary, test fixes to potential problems. Small, carefully chosen changes to mass distribution and local structural stiffness can be very effective in solving aeroelastic problems.

Aerospace Engineering History

History

One person who was important in developing aviation was Alberto Santos Dumont, a pioneer who built the first machines that were able to fly. Some of the first ideas for powered flight may have come from Leonardo da Vinci, who, although he did not build any successful models, did develop many sketches and ideas for "flying machines".


Orville and Wilbur Wright flew the Wright Flyer I, the first airplane, on December 17, 1903 at Kitty Hawk, North Carolina. The origin of aerospace engineering can be traced back to the aviation pioneers of the late 19th and early 20th centuries, although the work of Sir George Cayley has recently been dated to the last decade of the 18th century. Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering. Scientists understood some key elements of aerospace engineering, such as fluid dynamics, as early as the 18th century. Only a decade after the successful flights by the Wright brothers, the 1910s saw the development of aeronautical engineering through the design of World War I military aircraft.

The first definition of aerospace engineering appeared in February 1958. The definition considered the Earth's atmosphere and outer space as a single realm, thereby encompassing both aircraft (aero) and spacecraft (space) under the newly coined word aerospace. The National Aeronautics and Space Administration was founded in 1958 as a response to the Cold War. United States aerospace engineers launched the first American satellite on January 31, 1958, in response to the USSR's launch of Sputnik.

Aerospace Engineering Overview

Aerospace engineering is the branch of engineering behind the design, construction, and science of aircraft and spacecraft. It has two major branches: aeronautical engineering, which deals with craft that stay within Earth's atmosphere, and astronautical engineering, which deals with craft that operate outside it. While "aeronautical" was the original term, the broader "aerospace" has superseded it in usage as flight technology advanced to include craft operating in outer space. Aerospace engineering is often informally called rocket science.

Overview

Modern flight vehicles undergo severe conditions such as differences in atmospheric pressure and temperature, or heavy structural loads applied to vehicle components. Consequently, they are usually the products of various technologies including aerodynamics, avionics, materials science, and propulsion. These technologies are collectively known as aerospace engineering. Because of the complexity of the field, aerospace engineering is conducted by teams of engineers, each specializing in a particular branch of science. The development and manufacturing of a flight vehicle demands careful balance and compromise between capabilities, performance, available technology, and costs.

Aircraft structures

Aircraft structures are the structures, large and small, common or uncommon, that make up aircraft of any sort, size, or purpose.

Purpose


Structures fulfill a purpose in an aircraft, whether simple or complex. Each sub-structure interfaces with the other structures in the same aircraft. Ultimately, the parts work together to accomplish safe flight.

Classification

General

Aircraft structures may be classified by any of the following general categories:

- purpose
- integration with other structures and the aircraft as a whole
- history of the structure
- problems and successes of the structure
- value to the particular aircraft
- cost
- supply
- manufacturer
- wear characteristics
- safety quotient
- popularity
- specified use
- hazards relative to the structure
- inspection challenges
- maintenance
- replacement protocol

By type of wing

Aircraft structures may be classified by the type of wing employed, as this dictates much of the supporting structure:

- Single planar winged
- Non-planar winged
- Biplane
- Triplane
- Ring winged
- Spanwise rotary winged
- Vertical rotary axis winged
- Morphable wing
- Flexible winged
- Rigid winged
- Flying wing
- Parachutes and drogues
- Lifting bodies
- Winged man system
- Reentry-from-space vehicle


Classic aircraft structures

Classic aircraft components:

- Wing (skins, spars, ribs)
- Fuselage (skin, bulkheads, frames, heavy frames and bulkheads)
- Control system
- Thrust system
- Empennage
- Stringers or longerons
- Spars
- Landing system
- Launching system
- Accessory structures on board

The interaction of these structural components with mechanical systems may include:

- Undercarriage
- Ejection seat
- Powerplant

The locations of major components and systems will optimise the aircraft's weight and strength. For example, in most modern military jets the heavy frame in the fuselage that supports the nose undercarriage also has the ejection seat rail mounted to it. In this way the frame serves multiple functions, reducing weight and cost.

The location of structural components is also important with respect to the aircraft's center of gravity, which has a great effect on the aircraft's stability.

The materials and manufacturing techniques of the structural components are optimized during the design process. For example, stringers may be manufactured by bending sheet metal or by extrusion to optimize weight and cost, whereas a robust frame that supports a heavy component such as an engine may be cast or machined to optimize strength.

Regulatory requirements

Applicable national airworthiness regulations that specify structural requirements will affect the choice of materials.



Astrodynamics

Orbital mechanics

Orbital mechanics or astrodynamics is the study of the motion of rockets and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and Newton's law of universal gravitation, collectively known as classical mechanics. Celestial mechanics focuses more broadly on the orbital motions of artificial and natural astronomical bodies such as planets, moons, and comets. Orbital mechanics is a subfield which focuses on spacecraft trajectories, including orbital maneuvers, orbit plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers. General relativity provides more exact equations for calculating orbits, sometimes necessary for greater accuracy or in high-gravity situations (such as orbits close to the Sun).

Rules of thumb

Laws of astrodynamics

Historical approaches

Practical techniques

Modern mathematical techniques

Rules of thumb

The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics. The specific example discussed is a satellite orbiting a planet, but the rules of thumb can also apply to other situations, such as orbits of small bodies around a star like the Sun.

Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold in the absence of thrust:

o Orbits are either circular, with the planet at the center of the circle, or form an ellipse, with the planet at one focus.


o A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.

o The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.

Without firing a rocket engine (generating thrust), the height and shape of the satellite's orbit won't change, and it will maintain the same orientation with respect to the fixed stars.

A satellite in a low orbit (or low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.

If a brief rocket firing is made at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus to move from one circular orbit to another, at least two brief firings are needed.

From a circular orbit, a brief firing of the rocket in the direction that slows the satellite down will create an elliptical orbit with a lower perigee (lowest orbital point) 180 degrees away from the firing point; the firing point itself becomes the apogee (highest orbital point). If the rocket is fired to speed the satellite up, it will create an elliptical orbit with a higher apogee 180 degrees away from the firing point (which will become the perigee).
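As a numeric illustration of these rules, here is a minimal Python sketch. The value of Earth's gravitational parameter GM, the chosen altitudes, and the vis-viva relation v² = GM(2/r − 1/a) are assumptions of this example, not data from the text:

```python
import math

MU_EARTH = 3.986004418e14  # GM of Earth, m^3/s^2 (standard gravitational parameter)
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_speed(r):
    """Orbital speed for a circular orbit of radius r (vis-viva with a = r)."""
    return math.sqrt(MU_EARTH / r)

def period(a):
    """Orbital period from Kepler's third law: T^2 is proportional to a^3."""
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)

# Rule: a satellite in a low orbit moves faster than one in a high orbit.
low = R_EARTH + 400e3      # ~400 km altitude
high = R_EARTH + 35_786e3  # geostationary altitude
print(circular_speed(low), circular_speed(high))  # ~7670 m/s vs ~3070 m/s
print(period(low) / 60, period(high) / 3600)      # ~92 min vs ~24 h

# Rule: a brief retrograde burn at one point lowers the opposite side of the orbit.
v = circular_speed(low) - 50.0                 # slow down by 50 m/s at the firing point
a_new = 1.0 / (2.0 / low - v**2 / MU_EARTH)    # semi-major axis from vis-viva
r_opposite = 2.0 * a_new - low                 # the firing point becomes apogee
print((low - r_opposite) / 1000, "km drop at perigee")
```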

The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, unless they are very close, the trailing craft cannot simply fire its engines to go faster. This will change the shape of its orbit, causing it to gain altitude and miss its target. One approach is to actually fire a reverse thrust to slow down, and then fire again to re-circularize the orbit at a lower altitude. Because lower orbits are faster than higher orbits, the trailing craft will begin to catch up. A third firing at the right time will put the trailing craft in an elliptical orbit which will intersect the path of the leading craft, approaching from below.

To the degree that the assumptions do not hold, actual trajectories will vary from those calculated. Atmospheric drag is one major complicating factor for objects in Earth orbit. The differences between classical mechanics and general relativity can become important for large objects like planets. These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system.

Laws of astrodynamics


The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is his differential calculus.

Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.

Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws.

Escape velocity

The formula for escape velocity is easily derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by

\epsilon_p = -\frac{GM}{r}

while the specific kinetic energy of an object is given by

\epsilon_k = \frac{v^2}{2}

Since energy is conserved, the total specific orbital energy

\epsilon = \frac{v^2}{2} - \frac{GM}{r}

does not depend on the distance, r, from the center of the central body to the space vehicle in question. Therefore, the object can reach infinite r only if this quantity is nonnegative, which implies

v \ge \sqrt{\frac{2GM}{r}} = v_e

The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the solar system from the vicinity of the Earth requires a velocity of around 42 km/s, but there will be "part credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
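These figures are easy to check numerically. The short sketch below assumes standard values for GM of the Earth and Sun and for the Earth's orbital radius; these constants are not given in the text:

```python
import math

MU_EARTH = 3.986004418e14   # GM of Earth, m^3/s^2
MU_SUN = 1.32712440018e20   # GM of Sun, m^3/s^2
R_EARTH = 6.371e6           # Earth's mean radius, m
AU = 1.495978707e11         # Earth's orbital radius (1 AU), m

def escape_velocity(mu, r):
    """v_e = sqrt(2*GM/r), from setting the total specific energy to zero."""
    return math.sqrt(2.0 * mu / r)

print(escape_velocity(MU_EARTH, R_EARTH))  # ~11.2 km/s from Earth's surface
print(escape_velocity(MU_SUN, AU))         # ~42.1 km/s to leave the solar system from 1 AU

# "Part credit": Earth already moves at its circular orbital speed around the Sun,
# so a prograde launch only needs to make up the difference (ignoring Earth's own gravity).
v_earth = math.sqrt(MU_SUN / AU)              # ~29.8 km/s
print(escape_velocity(MU_SUN, AU) - v_earth)  # ~12.3 km/s still required
```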

Formulae for free orbits

Orbits are conic sections, so the formula for the distance of a body at a given angle corresponds to the formula for that curve in polar coordinates:

r = \frac{p}{1 + e \cos\theta}

where p is the semi-latus rectum and e is the eccentricity. The parameters are given by the orbital elements.

Circular orbits

Although most orbits are elliptical in nature, a special case is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M is

v = \sqrt{\frac{GM}{r}}

where G is the gravitational constant, equal to

G = 6.672598 \times 10^{-11} \ \mathrm{m^3/(kg \cdot s^2)}

To properly use this formula, the units must be consistent; for example, M must be in kilograms, and r must be in meters. The answer will be in meters per second.

The quantity GM is often termed the standard gravitational parameter, which has a different value for every planet or moon in the solar system.

Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by the square root of 2:

v_e = \sqrt{2} \, v = \sqrt{\frac{2GM}{r}}

Historical approaches

Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. The fundamental techniques, such as those used to solve the Keplerian problem, are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.

Kepler's equation

Kepler was the first to successfully model planetary orbits to a high degree of accuracy.

Derivation

Computing the position of a satellite at a given time (the Keplerian problem) is difficult. The opposite problem, computing the time-of-flight given the starting and ending positions, is simpler. We present a derivation of the time-of-flight equation here.

[Figure: Kepler's construction for deriving the time-of-flight equation. The bold ellipse is the satellite's orbit, with the star or planet at one focus Q. The goal is to compute the time required for a satellite to travel from periapsis P to a given point S. Kepler circumscribed the blue auxiliary circle around the ellipse and used it to derive his time-of-flight equation in terms of the eccentric anomaly.]

The problem is to find the time t at which the satellite reaches point S, given that it is at periapsis P at time t = 0. We are given that the semimajor axis of the orbit is a and the semiminor axis is b; the eccentricity is e, and the planet is at Q, at a distance of ae from the center C of the ellipse.

The key construction that allows us to analyse this situation is the auxiliary circle (shown in blue) circumscribed about the orbital ellipse. This circle is taller than the ellipse by a factor of a / b in the direction of the minor axis, so all area measures on the circle are magnified by a factor of a / b with respect to the analogous area measures on the ellipse.

Any given point on the ellipse can be mapped to the corresponding point on the circle that is a / b further from the ellipse's major axis. If we do this mapping for the position S of the satellite at time t, we arrive at a point R on the circumscribed circle. Kepler defines the angle PCR to be the eccentric anomaly angle E. (Kepler's terminology often refers to angles as "anomalies.") This definition makes the time-of-flight equation easier to derive than it would be using the true anomaly angle PQS.

To compute the time-of-flight from this construction, we note that Kepler's second law allows us to compute the time-of-flight from the area swept out by the satellite, and so we will set about computing the area PQS swept out by the satellite.

First, the area PQR is a magnified version of the area PQS:

\text{area } PQR = \frac{a}{b} \cdot \text{area } PQS

Furthermore, area PQS is the area swept out by the satellite in time t. We know that, in one orbital period T, the satellite sweeps out the whole area πab of the orbital ellipse. PQS is the t / T fraction of this area, and substituting, we arrive at this expression for PQR:

\text{area } PQR = \frac{a}{b} \cdot \frac{t}{T} \, \pi a b = \frac{t}{T} \, \pi a^2

Second, the area PQR is also formed by removing area QCR from PCR:

\text{area } PQR = \text{area } PCR - \text{area } QCR

Area PCR is a fraction of the circumscribed circle, whose total area is πa². The fraction is E / 2π, thus:

\text{area } PCR = \frac{E}{2\pi} \, \pi a^2 = \frac{a^2}{2} E

Meanwhile, area QCR is a triangle whose base is the line segment QC of length ae, and whose height is a sin E:

\text{area } QCR = \frac{1}{2} \, ae \cdot a \sin E = \frac{a^2}{2} e \sin E

Combining all of the above:

\frac{t}{T} \, \pi a^2 = \frac{a^2}{2} E - \frac{a^2}{2} e \sin E

Dividing through by a²/2:

\frac{2\pi}{T} \, t = E - e \sin E

To understand the significance of this formula, consider an analogous formula giving an angle M during circular motion with constant angular velocity n:

M = n t

Setting n = 2π / T and M = E − e sin E gives us Kepler's equation. Kepler referred to n as the mean motion, and E − e sin E as the mean anomaly. The term "mean" in this case refers to the fact that we have "averaged" the satellite's non-constant angular velocity over an entire period to make the satellite's motion amenable to analysis. All satellites traverse an angle of 2π per orbital period T, so the mean angular velocity is always 2π / T.


Substituting n into the formula we derived above gives this:

M = E - e \sin E

This formula is commonly referred to as Kepler's equation.

Application

With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of θ from periapsis is broken into two steps:

1. Compute the eccentric anomaly E from the true anomaly θ.
2. Compute the time-of-flight t from the eccentric anomaly E.

Finding the angle at a given time is harder. Kepler's equation is transcendental in E, meaning it cannot be solved for E analytically, and so numerical approaches must be used. In effect, one must guess a value of E and solve for time-of-flight; then adjust E as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
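A minimal sketch of both directions in Python. The conversion from true anomaly θ to eccentric anomaly E uses the standard relation tan(E/2) = √((1−e)/(1+e)) tan(θ/2), which is not derived in this text:

```python
import math

def eccentric_from_true(theta, e):
    """Step 1: eccentric anomaly E from true anomaly theta (elliptical orbits, e < 1)."""
    return 2.0 * math.atan(math.sqrt((1.0 - e) / (1.0 + e)) * math.tan(theta / 2.0))

def time_of_flight(theta, e, T):
    """Step 2: time since periapsis, from Kepler's equation M = E - e*sin(E)."""
    E = eccentric_from_true(theta, e)
    M = E - e * math.sin(E)          # mean anomaly
    return M * T / (2.0 * math.pi)   # t = M / n, with mean motion n = 2*pi/T

def eccentric_from_mean(M, e, tol=1e-12):
    """The harder inverse problem: solve M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi    # common initial guess
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))  # f(E) / f'(E)
        E -= dE
        if abs(dE) < tol:
            return E

# Example: a satellite with e = 0.5 and a 2-hour period, 90 degrees past periapsis.
t = time_of_flight(math.radians(90.0), 0.5, 7200.0)
print(t)  # time since periapsis, seconds

# Round-trip check: recover E at that time by Newton iteration.
M = 2.0 * math.pi * t / 7200.0
print(math.degrees(eccentric_from_mean(M, 0.5)))  # ~60 degrees
```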

The main difficulty with this approach is that it can take prohibitively long to converge for extreme elliptical orbits. For near-parabolic orbits, the eccentricity e is nearly 1, and plugging e = 1 into the formula for the mean anomaly gives E − sin E; here we find ourselves subtracting two nearly equal values, and accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits at all. These difficulties are what led to the development of the universal variable formulation, described below.

Perturbation theory

One can deal with perturbations simply by summing the forces and integrating, but that is not always the best approach. Historically, the method of variation of parameters has been used, which is mathematically easier to apply when perturbations are small.
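As an illustration of the brute-force approach, the sketch below propagates a two-body orbit plus a small constant perturbing acceleration with a fixed-step leapfrog integrator. The integrator choice, step size and perturbation magnitude are illustrative assumptions, not prescriptions from the text:

```python
import math

MU = 3.986004418e14  # GM of Earth, m^3/s^2 (assumed value)

def accel(x, y, perturbation=(0.0, 0.0)):
    """Two-body gravity plus any extra perturbing acceleration, summed directly."""
    r = math.hypot(x, y)
    ax = -MU * x / r**3 + perturbation[0]
    ay = -MU * y / r**3 + perturbation[1]
    return ax, ay

def propagate(x, y, vx, vy, dt, steps, perturbation=(0.0, 0.0)):
    """Leapfrog (kick-drift-kick) integration of the summed forces."""
    ax, ay = accel(x, y, perturbation)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay            # half kick
        x += dt * vx
        y += dt * vy                   # drift
        ax, ay = accel(x, y, perturbation)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay            # half kick
    return x, y, vx, vy

# Circular ~400-km orbit, propagated for roughly one period with a tiny extra thrust.
r0 = 6.771e6
v0 = math.sqrt(MU / r0)
state = propagate(r0, 0.0, 0.0, v0, dt=1.0, steps=5540,
                  perturbation=(0.0, 1e-4))   # small constant perturbing acceleration, m/s^2
print(math.hypot(state[0], state[1]) - r0)    # radius change caused by the perturbation
```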

Practical techniques

Transfer orbits

Transfer orbits allow spacecraft to move from one orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.


The Hohmann transfer orbit typically requires the least delta-v, but any orbit that intersects both the origin orbit and destination orbit may be used.
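A sketch of the delta-v budget for a Hohmann transfer between two coplanar circular orbits. The two-burn formulas below follow from the vis-viva equation; they are standard results, not given in this text, and the radii chosen are illustrative:

```python
import math

MU = 3.986004418e14  # GM of Earth, m^3/s^2 (assumed value)

def hohmann_delta_v(r1, r2, mu=MU):
    """Delta-v for the two impulsive burns of a Hohmann transfer from radius r1 to r2."""
    a_transfer = (r1 + r2) / 2.0                            # semi-major axis of transfer ellipse
    v1 = math.sqrt(mu / r1)                                 # circular speed at departure
    v2 = math.sqrt(mu / r2)                                 # circular speed at arrival
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))  # transfer speed at r1 (vis-viva)
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))   # transfer speed at r2
    dv1 = v_peri - v1   # first burn: enter the transfer ellipse
    dv2 = v2 - v_apo    # second burn: circularize at the destination
    return dv1, dv2

# Low Earth orbit (~400 km altitude) to geostationary radius.
dv1, dv2 = hohmann_delta_v(6.771e6, 4.2164e7)
print(dv1, dv2, dv1 + dv2)  # roughly 2.4 km/s + 1.5 km/s, about 3.9 km/s total
```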

Gravity assist and the Oberth effect

In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different velocity. This is useful for speeding up or slowing down a spacecraft without expending more fuel.

This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's Third Law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.

The Oberth effect can be exploited, particularly during a gravity assist operation. The effect is that a propulsion system delivers more useful energy when the vehicle is moving fast, so course changes are best made close to a gravitating body; this can multiply the effective delta-v.

Interplanetary Transport Network and fuzzy orbits

It is now possible to use computers to search for routes that exploit the nonlinearities in the gravity of the planets and moons of the solar system. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's Trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel (in practice, keeping to the trajectory requires some course corrections). The biggest problem with them is that they are usually exceedingly slow, taking many years to arrive. In addition, launch windows can be very far apart.

They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth's Lagrange L1 point and returned using very little propellant.

Modern mathematical techniques

Conic orbits

For simple things like computing the delta-v for coplanar transfer ellipses, traditional approaches work well. But time-of-flight is harder, especially for near-circular and hyperbolic orbits.

The patched conic approximation

The transfer orbit alone is not a good approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behaviour of the spacecraft in the vicinity of a planet, so a transfer-orbit-only model severely underestimates delta-v and produces highly inaccurate prescriptions for burn timings.

One relatively simple way to get a first-order approximation of delta-v is the patched conic approximation technique. The idea is to choose the one dominant gravitating body in each region of space through which the trajectory will pass, and to model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighbourhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory, where Mars's gravity dominates the spacecraft's behaviour. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars.

The size of the "neighborhoods" (or spheres of influence) varies with the radius r_SOI:

r_{SOI} = a_p \left( \frac{m_p}{m_s} \right)^{2/5}

where a_p is the semimajor axis of the planet's orbit relative to the Sun, and m_p and m_s are the masses of the planet and the Sun, respectively.

This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
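As a quick numeric check of the sphere-of-influence formula above (planetary masses and the Earth's orbital semimajor axis are standard values assumed for illustration):

```python
# Sphere-of-influence radius: r_SOI = a_p * (m_p / m_s)**(2/5)
A_EARTH = 1.496e8   # semimajor axis of Earth's orbit, km
M_EARTH = 5.972e24  # mass of Earth, kg
M_SUN = 1.989e30    # mass of Sun, kg

r_soi = A_EARTH * (M_EARTH / M_SUN) ** 0.4
print(r_soi)  # ~9.2e5 km: inside this radius, Earth's gravity is taken to dominate
```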

The universal variable formulation

To address the shortcomings of the traditional approaches, the universal variable approach was developed. It works equally well on circular, elliptical, parabolic, and hyperbolic orbits; and also works well with perturbation theory. The differential equations converge nicely when integrated for any orbit.

Perturbations

The universal variable formulation works well with the variation of parameters technique, except that now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors x0 and v0 at a given epoch t = 0. In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just as the Keplerian elements would be).

However, perturbations cause the orbital elements to change over time. Hence, we write the position element as x0(t) and the velocity element as v0(t), indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions x0(t) and v0(t).

Non-ideal orbits

The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.

- Equatorial bulges cause precession of the node and the perigee
- Tesseral harmonics of the gravity field introduce additional perturbations
- Lunar and solar gravity perturbations alter the orbits
- Atmospheric drag reduces the semi-major axis unless make-up thrust is used

Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behaviour can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.


Control engineering

Background

Modern control engineering is closely related to electrical and computer engineering, as electronic circuits can often be easily described using control theory techniques. At many universities, control engineering courses are taught primarily by electrical and computer engineering faculty members. Before modern electronics, process control devices were devised by mechanical engineers using mechanical feedback along with pneumatic and hydraulic control devices, some of which are still in use today.

The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program, and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering, as it can be applied to any system for which a suitable model can be derived.


Control engineering has diversified applications that include science, finance management, and even human behaviour. Students of control engineering may start with a linear control systems course requiring elementary mathematics and Laplace transforms (called classical control theory). In linear control, the student does frequency- and time-domain analysis. Digital control and nonlinear control courses require Z-transforms and algebra, respectively, and could be said to complete a basic control education. From here onwards there are several sub-branches.

Control systems

Control engineering is the engineering discipline that focuses on the modelling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are, and hence control engineering is often viewed as a subfield of electrical engineering. However, the falling price of microprocessors is making the actual implementation of a control system essentially trivial. As a result, focus is shifting back to the mechanical engineering discipline, as intimate knowledge of the physical system being controlled is often desired.

Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.

Control engineers often utilize feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important, and control theory can help ensure stability is achieved.

Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open-loop control. A classic example of open-loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.
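A toy illustration of the cruise-control feedback loop described above, as a sketch under simplifying assumptions: a first-order vehicle model with linear drag and a proportional-integral controller, with all numbers invented for illustration:

```python
# Minimal cruise-control simulation: feedback of measured speed holds a set-point.
MASS = 1200.0            # vehicle mass, kg (illustrative)
DRAG = 40.0              # linear drag coefficient, N/(m/s) (illustrative)
KP, KI = 2000.0, 100.0   # proportional and integral gains (tuned by hand)

def simulate(target, duration=60.0, dt=0.1):
    speed, integral = 0.0, 0.0
    for _ in range(int(duration / dt)):
        error = target - speed                                    # the feedback signal
        integral = max(-30.0, min(integral + error * dt, 30.0))   # anti-windup clamp
        force = KP * error + KI * integral                        # controller output: engine force
        force = max(0.0, min(force, 4000.0))                      # actuator limits
        speed += (force - DRAG * speed) / MASS * dt               # vehicle dynamics
    return speed

print(simulate(target=30.0))  # settles near 30 m/s despite the drag disturbance
```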

Electrical engineering

Electrical engineering, sometimes referred to as electrical and electronic engineering, is a field of engineering that deals with the study and application of electricity, electronics, and electromagnetism. The field first became an identifiable occupation in the late nineteenth century after the commercialization of the electric telegraph and electrical power supply. It now covers a range of subtopics including power, electronics, control systems, signal processing and telecommunications.

Electrical engineering may or may not encompass electronic engineering. Where a distinction is made, usually outside of the United States, electrical engineering is considered to deal with the problems associated with large-scale electrical systems such as power transmission and motor control, whereas electronic engineering deals with the study of small-scale electronic systems including computers and integrated circuits. Alternatively, electrical engineers are usually concerned with using electricity to transmit energy, while electronic engineers are concerned with using electricity to transmit information.

History

Electricity has been a subject of scientific interest since at least the early 17th century. The first electrical engineer was probably William Gilbert, who designed the versorium: a device that detected the presence of statically charged objects. He was also the first to draw a clear distinction between magnetism and static electricity, and is credited with establishing the term electricity. In 1775 Alessandro Volta devised the electrophorus, a device that produced a static electric charge, and by 1800 Volta had developed the voltaic pile, a forerunner of the electric battery.

However, it was not until the 19th century that research into the subject started to intensify. Notable developments in this century include the work of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; Michael Faraday, the discoverer of electromagnetic induction in 1831; and James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his treatise Electricity and Magnetism.

During these years, the study of electricity was largely considered to be a subfield of physics. It was not until the late 19th century that universities started to offer degrees in electrical engineering. The Darmstadt University of Technology founded the first chair and the first faculty of electrical engineering worldwide in 1882. In 1883 Darmstadt University of Technology and Cornell University introduced the world's first courses of study in electrical engineering, and in 1885 University College London founded the first chair of electrical engineering in the United Kingdom. The University of Missouri subsequently established the first department of electrical engineering in the United States in 1886.

During this period, work concerning electrical engineering increased dramatically. In 1882, Edison switched on the world's first large-scale electrical supply network, which provided 110 volts direct current to fifty-nine customers in lower Manhattan. In 1887, Nikola Tesla filed a number of patents related to a competing form of power distribution known as alternating current. In the following years a bitter rivalry between Tesla and Edison, known as the "War of Currents", took place over the preferred method of distribution. AC eventually replaced DC for generation and power distribution, enormously extending the range and improving the safety and efficiency of power distribution.

The efforts of the two did much to further electrical engineering: Tesla's work on induction motors and polyphase systems influenced the field for years to come, while Edison's work on telegraphy and his development of the stock ticker proved lucrative for his company, which ultimately became General Electric. However, by the end of the 19th century, other key figures in the progress of electrical engineering were beginning to emerge.

Modern developments

Emergence of radio and electronics


During the development of radio, many scientists and inventors contributed to radio technology and electronics. In his classic UHF experiments of 1888, Heinrich Hertz transmitted (via a spark-gap transmitter) and detected radio waves using electrical equipment. In 1895, Nikola Tesla was able to detect signals from the transmissions of his New York lab at West Point (a distance of 80.4 km / 49.95 miles). In 1897, Karl Ferdinand Braun introduced the cathode ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode. In 1895, Guglielmo Marconi furthered the art of hertzian wireless methods; early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of 2,100 miles. In 1920 Albert Hull developed the magnetron, which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934 the British military began to make strides towards radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.

In 1941 Konrad Zuse presented the Z3, the world's first fully functional and programmable computer. In 1946 the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives, including the Apollo missions and the NASA moon landing.

The invention of the transistor in 1947 by William B. Shockley, John Bardeen and Walter Brattain opened the door for more compact devices, and led to the development of the integrated circuit in 1958 by Jack Kilby and independently in 1959 by Robert Noyce. In 1968 Marcian Hoff invented the first microprocessor at Intel and thus ignited the development of the personal computer. The first realization of the microprocessor was the Intel 4004, a 4-bit processor developed in 1971, but only in 1973 did the Intel 8080, an 8-bit processor, make the building of the first personal computer, the Altair 8800, possible.

Education

Electrical engineers typically possess an academic degree with a major in electrical engineering. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science, Bachelor of Technology or Bachelor of Applied Science depending upon the university. The degree generally includes units covering physics, mathematics, computer science, project management and specific topics in electrical engineering. Initially such topics cover most, if not all, of the sub-disciplines of electrical engineering. Students then choose to specialize in one or more sub-disciplines towards the end of the degree.


Some electrical engineers also choose to pursue a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (EngD), or an Engineer's degree. The Master and Engineer's degree may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and various other European countries, the Master of Engineering is often considered an undergraduate degree of slightly longer duration than the Bachelor of Engineering.

Practicing engineers

In most countries, a Bachelor's degree in engineering represents the first step towards professional certification, and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified, the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer (in India, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).

The advantages of certification vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, such as Australia, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations such as building codes and legislation pertaining to environmental law.

Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET), which was formed by the merger of the Institution of Electrical Engineers (IEE) and the Institution of Incorporated Engineers (IIE). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field, and a habit of continued learning are therefore essential to maintaining proficiency.

In countries such as Australia, Canada and the United States, electrical engineers make up around 0.25% of the labor force. Outside of these countries, it is difficult to gauge the demographics of the profession due to less meticulous reporting on labour statistics. However, in terms of electrical engineering graduates per capita, they would probably be most numerous in countries such as Taiwan, Japan, India and South Korea.

Tools and work

From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunication systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances or the electrical control of industrial machinery.

Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.

Although most electrical engineers will understand basic circuit theory (that is the interactions of elements such as resistors, capacitors, diodes, transistors and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunication systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy and the ability to understand the technical language and concepts that relate to electrical engineering.

For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.


The workplaces of electrical engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers and other engineers.

Sub-disciplines

Electrical engineering has many sub-disciplines, the most popular of which are listed below. Although there are electrical engineers who focus exclusively on one of these sub-disciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered separate disciplines in their own right.

Power

Power engineering deals with the generation, transmission and distribution of electricity, as well as the design of a range of related devices such as transformers, electric generators, electric motors, high-voltage equipment and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. The future may include satellite-controlled power systems, with real-time feedback to prevent power surges and blackouts.

Control

Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers electrical engineers may use electrical circuits, digital signal processors, microcontrollers and PLCs (Programmable Logic Controllers). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation.

Control engineers often utilize feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.

Electronics

Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit.

Prior to the Second World War, the subject was commonly known as radio engineering and was basically restricted to aspects of communications and radar, commercial radio, and early television. Later, in the postwar years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.

Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by hand. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number (often millions) of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.

Microelectronics

Microelectronics engineering deals with the design of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors, inductors) can be created at a microscopic level.

Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.

Signal processing


Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment, or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.
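As a toy example of digital signal processing (my illustration, not from the text): a moving-average filter, one of the simplest digital filters, smooths a noisy sampled signal:

```python
import math
import random

def moving_average(samples, window=5):
    """Simple FIR low-pass filter: each output is the mean of the last `window` inputs."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

# A sampled sine wave corrupted by random noise, then smoothed.
noisy = [math.sin(2 * math.pi * n / 50) + random.uniform(-0.2, 0.2) for n in range(200)]
smooth = moving_average(noisy, window=8)
print(smooth[:5])
```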

Telecommunications

Telecommunications engineering focuses on the transmission of information across a channel such as a coaxial cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier wave in order to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system, and these two factors must be balanced carefully by the engineer.
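A short sketch of amplitude modulation (an illustration only; the sample rate, frequencies and modulation index are arbitrary choices):

```python
import math

FS = 48_000.0         # sample rate, Hz
F_CARRIER = 10_000.0  # carrier frequency, Hz
F_MESSAGE = 440.0     # message (baseband) frequency, Hz
M = 0.5               # modulation index (< 1 avoids over-modulation)

def am_signal(n):
    """Standard AM: the carrier amplitude varies with the message, s = (1 + m*msg) * carrier."""
    t = n / FS
    message = math.sin(2 * math.pi * F_MESSAGE * t)
    carrier = math.cos(2 * math.pi * F_CARRIER * t)
    return (1.0 + M * message) * carrier

samples = [am_signal(n) for n in range(1024)]
print(max(samples), min(samples))  # envelope bounded by 1 +/- M
```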

Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. If the signal strength of a transmitter is insufficient the signal's information will be corrupted by noise.

Instrumentation engineering

Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow and temperature. The design of such instrumentation requires a good understanding of physics that often extends beyond electromagnetic theory. For example, radar guns use the Doppler effect to measure the speed of oncoming vehicles. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.
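For instance, a radar gun infers speed from the Doppler shift of the reflected wave. A small sketch follows; the carrier frequency and measured shift are assumed values, not figures from the text:

    # Speed from Doppler shift: for a reflecting target, shift = 2 * v * f_carrier / c.
    def doppler_speed(shift_hz, carrier_hz, c=3.0e8):
        return shift_hz * c / (2.0 * carrier_hz)

    v = doppler_speed(shift_hz=2_000.0, carrier_hz=24.15e9)  # a K-band radar gun
    print(f"speed = {v:.1f} m/s ({v * 3.6:.0f} km/h)")       # about 12.4 m/s here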

Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.

Computers

Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware, the design of PDAs, or the use of computers to control an industrial plant. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of devices including video game consoles and DVD players.

Related disciplines

Mechatronics is an engineering discipline which deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems and various subsystems of aircraft and automobiles.

The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.

Biomedical engineering is another related discipline, concerned with the design of medical equipment. This includes fixed equipment such as ventilators, MRI scanners and electrocardiograph monitors as well as mobile equipment such as cochlear implants, artificial pacemakers and artificial hearts.

******************************************************************

Flight test

Flight test is a branch of aeronautical engineering that develops and gathers data during the flight of an aircraft, then analyses the data to evaluate the flight characteristics of the aircraft and validate its design, including safety aspects. The flight test phase accomplishes two major tasks: 1) finding and fixing any aircraft design problems, and then 2) verifying and documenting the aircraft's capabilities for government certification or customer acceptance. The flight test phase can range from the test of a single new system for an existing aircraft to the complete development and certification of a new aircraft. Therefore the duration of a flight test program can vary from a few weeks to several years.

Civil Aircraft Flight Test


There are typically two categories of flight test programs: commercial and military. Commercial flight testing is conducted to certify that the aircraft meets all applicable safety and performance requirements of the government certifying agency. In the US, this is the Federal Aviation Administration (FAA); in Canada, Transport Canada (TC); in the United Kingdom (UK), the Civil Aviation Authority; and in the European Union, the Joint Aviation Authorities (JAA). Since commercial aircraft development is normally funded by the aircraft manufacturer and/or private investors, the certifying agency does not have a stake in the commercial success of the aircraft. These civil agencies are concerned with the aircraft's safety and with the pilot's flight manual accurately reporting the aircraft's performance; the market will determine the aircraft's suitability to operators. Normally, the civil certification agency does not get involved in flight testing until the manufacturer has found and fixed any development issues and is ready to seek certification.

Military Aircraft Flight Test

Military programs differ from commercial ones in that the government contracts with the aircraft manufacturer to design and build an aircraft to meet specific mission capabilities. These performance requirements are documented to the manufacturer in the Aircraft Specification, and the details of the flight test program (among many other program requirements) are spelled out in the Statement of Work. In this case, the government is the customer and has a direct stake in the aircraft's ability to perform the mission. Since the government is funding the program, it is more involved in the aircraft design and testing from early on. Often military test pilots and engineers are integrated into the manufacturer's flight test team, even before first flight. The final phase of the military aircraft flight test is the Operational Test (OT). OT is conducted by a government-only test team with the dictate to certify that the aircraft is suitable and effective to carry out the intended mission. Flight testing of military aircraft is often conducted at military flight test facilities. The US Navy tests aircraft at Naval Air Station Patuxent River, MD (a.k.a. "Pax River") and the US Air Force at Edwards Air Force Base, CA. The U.S. Air Force Test Pilot School and the U.S. Naval Test Pilot School are the programs designed to teach military test personnel. In the UK most military flight testing is conducted by three organisations: the RAF, BAE Systems and QinetiQ. For minor upgrades the testing may be conducted by one of these three organisations in isolation, but major programs are normally conducted by a joint trials team (JTT), with all three organisations working together under the umbrella of an Integrated Project Team (IPT).

Flight Test Processes

Flight testing is highly expensive and potentially very risky. Unforeseen problems can lead to damage to aircraft and loss of life, both of aircrew and of people on the ground. For these reasons modern flight testing is probably one of the most safety-conscious professions today. Flight trials can be divided into three sections: planning, execution, and analysis and reporting.

Preparation

For both commercial and military aircraft, flight test preparation begins well before the aircraft is ready to fly. Initially the requirements for flight testing must be defined, from which the flight test engineers prepare the test plan(s). These will include the aircraft


configuration, data requirements, and manoeuvres to be flown or systems to be exercised. A full certification/qualification flight test program for a new aircraft will require testing for many aircraft systems and in-flight regimes; each is typically documented in a separate test plan. During the actual flight testing, similar maneuvers from all test plans are combined and the data collected on the same flights, where practical. This allows the required data to be acquired in the minimum number of flight hours.

Once the flight test data requirements are established, the aircraft is instrumented to record that data for analysis. Typical instrumentation parameters recorded during a flight test are: temperatures, pressures, structural loads, vibration/accelerations, noise levels (interior and exterior), aircraft performance parameters (airspeed, altitude, etc.), aircraft control positions (stick/yoke position, rudder pedal position, throttle position, etc.), engine performance parameters, and atmospheric conditions. During selected phases of flight test, especially during early development of a new aircraft, many parameters are transmitted to the ground during the flight and monitored by the flight test engineer and test support engineers. This provides for safety monitoring and allows real-time analysis of the data being acquired.

Execution

When the aircraft is completely assembled and instrumented, it typically conducts many hours of ground testing before its first/maiden flight. This ground testing will verify basic aircraft systems operations, measure engine performance, evaluate dynamic systems stability, and provide a first look at structural loads. Flight controls will also be checked out. Once all required ground tests are completed, the aircraft is ready for the first flight. The first/maiden flight is a major milestone in any aircraft development program and is undertaken with the utmost caution.

There are several aspects to a flight test program: handling qualities, performance, aero-elastic/flutter stability, avionics/systems capabilities, weapons delivery, and structural loads. Handling qualities evaluates the aircraft's controllability and response to pilot inputs throughout the range of flight. Performance testing evaluates the aircraft in relation to its projected abilities, such as speed, range, power available, drag, airflow characteristics, and so forth. Aero-elastic stability evaluates the dynamic response of the aircraft controls and structure to aerodynamic (i.e. air-induced) loads. Structural tests measure the stresses on the airframe, dynamic components, and controls to verify structural integrity in all flight regimes. Avionics/systems testing verifies that all electronic systems (navigation, communications, radars, sensors, etc.) perform as designed. Weapons delivery looks at the pilot's ability to acquire the target using on-board systems and accurately deliver the ordnance on target. Weapons delivery testing also evaluates the separation of the ordnance as it leaves the aircraft to ensure there are no safety issues. Other military-unique tests are: air-to-air refueling, radar/infrared signature measurement, and aircraft carrier operations. Emergency situations are evaluated as a normal part of all flight test programs. Examples are: engine failure during various phases of flight (takeoff, cruise, landing), systems failures, and controls degradation. The overall operations envelope (allowable gross weights, centers-of-gravity, altitude, max/min airspeeds, maneuvers, etc.) is established and verified during flight testing. Aircraft are always demonstrated to be safe beyond the limits allowed for normal operations in the Flight Manual.

Because the primary goal of a flight test program is to gather accurate engineering data, often on a design that is not fully proven, piloting a flight test aircraft requires a high degree of training and skill. Such programs are typically flown by a specially trained test pilot, the data are gathered by a flight test engineer, and the data are often visually displayed to the test pilot and/or flight test engineer using flight test instrumentation.

Analysis and Reporting

Flight Test Team

The make-up of the flight test team will vary with the organization and complexity of the flight test program; however, there are some key players who are generally part of all flight test organizations. The leader of a flight test team is usually a flight test engineer (FTE) or possibly an experimental test pilot. Other FTEs or pilots could also be involved. Other team members would be the flight test instrumentation engineer, instrumentation system technicians, the aircraft maintenance department (mechanics, electricians, avionics technicians, etc.), quality/product assurance inspectors, the ground-based computing/data center personnel, plus logistics and administrative support. Engineers from various other disciplines would support the testing of their particular systems and analyze the data acquired for their specialty area.

Since many aircraft development programs are sponsored by government military services, military or government-employed civilian pilots and engineers are often integrated into the flight test team. The government representatives provide program oversight and review and approve data. Government test pilots may also participate in the actual test flights, possibly even on the first/maiden flight.

********************************************************************

Fluid mechanics

Fluid mechanics is the study of how fluids move and the forces on them. (Fluids include liquids and gases.) Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms. The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes made a beginning on fluid statics. However, fluid mechanics, especially fluid dynamics, is an active field of research with many unsolved or partly solved problems. Fluid mechanics can be mathematically complex; sometimes problems are best solved by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach to solving fluid mechanics problems. Also taking advantage of the highly visual nature of fluid flow is particle image velocimetry, an experimental method for visualizing and analyzing fluid flow.

Relationship to continuum mechanics

Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table.


Continuum mechanics: the study of the physics of continuous materials

    Solid mechanics: the study of the physics of continuous materials with a defined rest shape.
        Elasticity: describes materials that return to their rest shape after an applied stress.
        Plasticity: describes materials that permanently deform after a large enough applied stress.

    Fluid mechanics: the study of the physics of continuous materials which take the shape of their container.
        Newtonian fluids
        Non-Newtonian fluids

Rheology, the study of materials with both solid and fluid characteristics, bridges plasticity and non-Newtonian fluids.

In the mechanical view, a fluid is a substance that does not support tangential (shear) stress; that is why a fluid at rest has the shape of its containing vessel.

Assumptions

Like any mathematical model of the real world, fluid mechanics makes some basic assumptions about the materials being studied. These assumptions are turned into equations that must be satisfied if the assumptions are to hold true. For example, consider an incompressible fluid in three dimensions. The assumption that mass is conserved means that for any fixed closed surface (such as a sphere) the rate of mass passing from outside to inside the surface must be the same as the rate of mass passing the other way. (Alternatively, the mass inside remains constant, as does the mass outside.) This can be turned into an integral equation over the surface.
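In symbols, the surface statement and its equivalent differential form are the standard continuity relations (written here in LaTeX as a sketch consistent with the definitions above, not reproduced from the original text):

    % Mass conservation over a fixed closed surface S bounding a volume V,
    % and the equivalent differential (continuity) equation:
    \oint_{S} \rho\, \mathbf{v} \cdot \mathrm{d}\mathbf{A}
        = -\frac{\partial}{\partial t} \int_{V} \rho\, \mathrm{d}V,
    \qquad
    \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{v}) = 0.

For an incompressible fluid the density is constant, and the differential form reduces to \nabla \cdot \mathbf{v} = 0.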

Fluid mechanics assumes that every fluid obeys the following:

    Conservation of mass
    Conservation of momentum
    The continuum hypothesis, detailed below.

Further, it is often useful (and realistic) to assume a fluid is incompressible - that is, the density of the fluid does not change. Liquids can often be modelled as incompressible fluids, whereas gases cannot.

Similarly, it can sometimes be assumed that the viscosity of the fluid is zero (the fluid is inviscid). Gases can often be assumed to be inviscid. If a fluid is viscous and its flow is contained in some way (e.g. in a pipe), then the flow at the boundary must have zero velocity. For a viscous fluid, if the boundary is not porous, the shear forces between the fluid and the boundary also result in a zero velocity for the fluid at the boundary. This is called the no-slip condition. For a porous boundary, by contrast, the fluid velocity need not be zero at the wall of the containing vessel, and the velocity field is discontinuous between the free fluid and the fluid in the porous medium (this is related to the Beavers and Joseph condition).

The continuum hypothesis

Fluids are composed of molecules that collide with one another and with solid objects. The continuum assumption, however, considers fluids to be continuous. That is, properties such as density, pressure, temperature, and velocity are taken to be well-defined at "infinitely" small points, defining a reference element of volume (REV) at the geometric order of the distance between two adjacent molecules of fluid. Properties are assumed to vary continuously from one point to another, and are averaged values over the REV. The fact that the fluid is made up of discrete molecules is ignored.

The continuum hypothesis is basically an approximation, in the same way planets are approximated by point particles when dealing with celestial mechanics, and therefore results in approximate solutions. Consequently, assumption of the continuum hypothesis can lead to results which are not of desired accuracy. That said, under the right circumstances, the continuum hypothesis produces extremely accurate results.

Those problems for which the continuum hypothesis does not allow solutions of desired accuracy are solved using statistical mechanics. To determine whether or not to use conventional fluid dynamics or statistical mechanics, the Knudsen number is evaluated for the problem. The Knudsen number is defined as the ratio of the molecular mean free path length to a certain representative physical length scale. This length scale could be, for example, the radius of a body in a fluid. (More simply, the Knudsen number is how many times its own diameter a particle will travel on average before hitting another particle). Problems with Knudsen numbers at or above unity are best evaluated using statistical mechanics for reliable solutions.
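A tiny sketch of this decision rule follows; the regime thresholds are common rules of thumb, and the sea-level mean free path is an assumed textbook figure, not a value from the text:

    # Classify a flow regime from the Knudsen number Kn = mean free path / length scale.
    def knudsen_regime(mean_free_path_m, length_scale_m):
        kn = mean_free_path_m / length_scale_m
        if kn < 0.01:
            return f"Kn = {kn:.3g}: continuum hypothesis is a good approximation"
        if kn < 1.0:
            return f"Kn = {kn:.3g}: transitional regime; continuum results degrade"
        return f"Kn = {kn:.3g}: use statistical mechanics / kinetic theory"

    # Air at sea level (mean free path ~ 68 nm) flowing past a 1 m body:
    print(knudsen_regime(68e-9, 1.0))  # deep in the continuum regime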

Navier-Stokes equations

The Navier-Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are the set of equations that describe the motion of fluid substances such as liquids and gases. These equations state that changes in momentum (acceleration) of fluid particles depend only on the external pressure and internal viscous forces (similar to friction) acting on the fluid. Thus, the Navier-Stokes equations describe the balance of forces acting at any given region of the fluid.

The Navier-Stokes equations are differential equations which describe the motion of a fluid. Such equations establish relations among the rates of change of the variables of interest. For example, the Navier-Stokes equations for an ideal fluid with zero viscosity state that acceleration (the rate of change of velocity) is proportional to the derivative of internal pressure.

This means that solutions of the Navier-Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms only the simplest cases can be solved exactly in this way. These cases generally involve non-turbulent, steady flow (flow does not change with time) in which the Reynolds number is small.

For more complex situations, such as global weather systems like El Niño or lift in a wing, solutions of the Navier-Stokes equations can currently only be found with the help of computers. This is a scientific field in its own right, called computational fluid dynamics.
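To give a flavour of the numerical approach, here is a deliberately tiny sketch, not a Navier-Stokes solver: it integrates only one-dimensional diffusion, and the grid size, viscosity, and step count are arbitrary illustration values:

    # Explicit finite-difference integration of du/dt = nu * d2u/dx2 in one dimension.
    import numpy as np

    nx, nu = 101, 0.1
    dx = 2.0 / (nx - 1)
    dt = 0.2 * dx**2 / nu        # keeps the explicit scheme stable (limit is 0.5)
    u = np.zeros(nx)
    u[40:60] = 1.0               # initial "blob" of the diffused quantity

    for _ in range(500):
        u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # u now holds the smoothed profile; real CFD codes add convection, pressure
    # coupling, turbulence models, and 3-D geometry on top of this basic idea.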

General form of the equation

The general form of the Navier-Stokes equations for the conservation of momentum is

    \rho \frac{D\mathbf{v}}{Dt} = \nabla \cdot \mathbb{P} + \rho \mathbf{f}

where

    \rho is the fluid density,
    D/Dt is the substantive derivative (also called the material derivative),
    \mathbf{v} is the velocity vector,
    \mathbf{f} is the body force vector, and
    \mathbb{P} is a tensor that represents the surface forces applied on a fluid particle (the comoving stress tensor).

Unless the fluid is made up of spinning degrees of freedom like vortices, \mathbb{P} is a symmetric tensor. In general (in three dimensions), \mathbb{P} has the form

    \mathbb{P} = \begin{pmatrix}
        \sigma_{xx} & \tau_{xy} & \tau_{xz} \\
        \tau_{yx}   & \sigma_{yy} & \tau_{yz} \\
        \tau_{zx}   & \tau_{zy}   & \sigma_{zz}
    \end{pmatrix}

where

    \sigma_{xx}, \sigma_{yy}, and \sigma_{zz} are normal stresses, and
    \tau_{xy}, \tau_{xz}, \tau_{yx}, \tau_{yz}, \tau_{zx}, and \tau_{zy} are tangential stresses (shear stresses).


The above is actually a set of three equations, one per dimension. By themselves, these aren't sufficient to produce a solution. However, adding conservation of mass and appropriate boundary conditions to the system of equations produces a solvable set of equations.

Newtonian vs. non-Newtonian fluids

A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved through the fluid is proportional to the force applied to the object. (Compare friction.)

By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time; this behaviour is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined by whichever property of Newtonian behaviour they fail to obey.

Equations for a Newtonian fluid

The constant of proportionality between the shear stress and the velocity gradient is known as the viscosity. A simple equation to describe Newtonian fluid behaviour is

    \tau = \mu \frac{dv}{dx}

where

    \tau is the shear stress exerted by the fluid ("drag"),
    \mu is the fluid viscosity (a constant of proportionality), and
    dv/dx is the velocity gradient perpendicular to the direction of shear.

For a Newtonian fluid, the viscosity, by definition, depends only on temperature and pressure, not on the forces acting upon it. If the fluid is incompressible and viscosity is constant across the fluid, the equation governing the shear stress (in Cartesian coordinates) is

    \tau_{ij} = \mu \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right)

where

    \tau_{ij} is the shear stress on the i-th face of a fluid element in the j-th direction,
    v_i is the velocity in the i-th direction, and
    x_j is the j-th direction coordinate.

If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types.
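As a numeric check of the Newtonian relation above (the fluid properties and geometry are assumed illustration values, roughly water at 20 °C between parallel plates):

    # Shear stress tau = mu * dv/dx for a linear velocity profile (plane Couette flow).
    mu = 1.0e-3           # dynamic viscosity of water, Pa·s
    plate_speed = 0.5     # m/s, speed of the moving plate
    gap = 1.0e-3          # m, distance between moving and fixed plates

    velocity_gradient = plate_speed / gap   # dv/dx in 1/s
    tau = mu * velocity_gradient            # shear stress in Pa
    print(f"shear stress = {tau:.3f} Pa")   # 0.500 Pa for these numbers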

************************************************************************

Materials Science

Materials science or materials engineering is an interdisciplinary field involving the properties of matter and its applications to various areas of science and engineering. This science investigates the relationship between the structure of materials and their properties. It includes elements of applied physics and chemistry, as well as chemical, mechanical, civil and electrical engineering. With significant media attention to nanoscience and nanotechnology in recent years, materials science has been propelled to the forefront at many universities. It is also an important part of forensic engineering and forensic materials engineering, the study of failed products and components.

History

The material of choice of a given era is often its defining point; the Stone Age, Bronze Age, and Steel Age are examples of this. Materials science is one of the oldest forms of engineering and applied science, deriving from the manufacture of ceramics. Modern materials science evolved directly from metallurgy, which itself evolved from mining. A major breakthrough in the understanding of materials occurred in the late 19th century, when Willard Gibbs demonstrated that thermodynamic properties relating to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science are a product of the space race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in the construction of space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as plastics, semiconductors, and biomaterials. Before the 1960s (and in some cases decades after), many materials science departments were named metallurgy departments, reflecting a 19th- and early-20th-century emphasis on metals. The field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, medical implant materials and biological materials.

Fundamentals of materials science


In materials science, rather than haphazardly looking for and discovering materials and exploiting their properties, one instead aims to understand materials fundamentally so that new materials with the desired properties can be created. The basis of all materials science involves relating the desired properties and relative performance of a material in a certain application to the structure of the atoms and phases in that material through characterization. The major determinants of the structure of a material, and thus of its properties, are its constituent chemical elements and the way in which it has been processed into its final form. These, taken together and related through the laws of thermodynamics, govern a material's microstructure, and thus its properties.

An old adage in materials science says: "materials are like people; it is the defects that make them interesting". The manufacture of a perfect crystal of a material is currently physically impossible. Instead, materials scientists manipulate the defects in crystalline materials, such as precipitates, grain boundaries (Hall-Petch relationship), interstitial atoms, vacancies or substitutional atoms, to create materials with the desired properties.

Not all materials have a regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glasses, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic, as well as mechanical, descriptions of physical properties.

In addition to industrial interest, materials science has gradually developed into a field which provides tests for condensed matter or solid state theories. New physics emerges because of the diverse new material properties which need to be explained.

Materials in industry

Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing techniques (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytical techniques (characterization techniques such as electron microscopy, x-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, etc.).

Besides material characterisation, the material scientist or engineer also deals with the extraction of materials and their conversion into useful forms. Thus ingot casting, foundry techniques, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a metallurgist/engineer. Often the presence, absence or variation of minute quantities of secondary elements and compounds in a bulk material will have a great impact on the final properties of the materials produced; for instance, steels are classified based on 1/10th and 1/100th weight percentages of the carbon and other alloying elements they contain. Thus, the extraction and purification techniques


employed in the extraction of iron in the blast furnace will have an impact on the quality of steel that may be produced.

The overlap between physics and materials science has led to the offshoot field of materials physics, which is concerned with the physical properties of materials. The approach is generally more macroscopic and applied than in condensed matter physics. See important publications in materials physics for more details on this field of study.

The study of metal alloys is a significant part of materials science. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels. For the steels, the hardness and tensile strength of the steel are directly related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. The addition of silicon and graphitization will produce cast irons (although some cast irons are made precisely with no graphitization). The addition of chromium, nickel and molybdenum to carbon steels (more than 10%) gives us stainless steels.

Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been developed relatively recently. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.

Other than metals, polymers and ceramics are also an important part of materials science. Polymers are the raw materials (the resins) used to make what we commonly call plastics. Plastics are really the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Polymers which have been around, and which are in current widespread use, include polyethylene, polypropylene, PVC, polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Plastics are generally classified as "commodity", "specialty" and "engineering" plastics.

PVC (polyvinyl chloride) is widely used, inexpensive, and produced in large annual quantities. It lends itself to an incredible array of applications, from artificial leather to electrical insulation and cabling, packaging and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.

Polycarbonate would normally be considered an engineering plastic (other examples include PEEK and ABS). Engineering plastics are valued for their superior strengths and


other special material properties. They are usually not used for disposable applications, unlike commodity plastics.

Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, and so on.

The dividing line between the various types of plastics is based not on material but rather on their properties and applications. For instance, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable shopping bags and trash bags, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety, ultra-high-molecular-weight polyethylene (UHMWPE), is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.

Another application of materials science in industry is the making of composite materials. Composite materials are structured materials composed of two or more macroscopic phases. An example would be steel-reinforced concrete; another can be seen in the "plastic" casings of television sets, cell phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile-butadiene-styrene (ABS) to which calcium carbonate chalk, talc, glass fibres or carbon fibres have been added for strength, bulk, or electrostatic dispersion. These additions may be referred to as reinforcing fibres or dispersants, depending on their purpose.

Classes of materials (by bond types)

Materials science encompasses various classes of materials, each of which may constitute a separate field. Materials are sometimes classified by the type of bonding present between the atoms:

1. Ionic crystals
2. Covalent crystals
3. Metals
4. Intermetallics
5. Semiconductors
6. Polymers
7. Composite materials
8. Vitreous materials

Sub-fields of materials science

Nanotechnology – rigorously, the study of materials where the effects of quantum confinement, the Gibbs-Thomson effect, or any other effect only present at the nanoscale is the defining property of the material; but more commonly, it is the creation and study of materials whose defining structural properties are anywhere from less than a nanometer to one hundred nanometers in scale, such as molecularly engineered materials.


Microtechnology – the study of materials and processes, and their interaction, allowing microfabrication of structures of micrometric dimensions, such as microelectromechanical systems (MEMS).

Crystallography – the study of how atoms in a solid fill space, the defects associated with crystal structures such as grain boundaries and dislocations, and the characterization of these structures and their relation to physical properties.

Materials Characterization – such as diffraction with x-rays, electrons, or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy (EDS), chromatography, thermal analysis, electron microscope analysis, etc., in order to understand and define the properties of materials. See also List of surface analysis methods

Metallurgy – the study of metals and their alloys, including their extraction, microstructure and processing.

Biomaterials – materials that are derived from and/or used with biological systems.

Electronic and magnetic materials – materials such as semiconductors used to create integrated circuits, storage media, sensors, and other devices.

Tribology – the study of the wear of materials due to friction and other factors.

Surface science/Catalysis – interactions and structures between solid-gas, solid-liquid, and solid-solid interfaces.

Ceramography – the study of the microstructures of high-temperature materials and refractories, including structural ceramics such as RCC, polycrystalline silicon carbide, and transformation-toughened ceramics.

Some practitioners often consider rheology a sub-field of materials science, because it can cover any material that flows. However, modern rheology typically deals with non-Newtonian fluid dynamics, so it is often considered a sub-field of continuum mechanics. See also granular material.

Glass science – the study of any non-crystalline material, including inorganic glasses, vitreous metals and non-oxide glasses.

Forensic engineering – the study of how products fail, and the vital role of the materials of construction

Forensic materials engineering – the study of material failure, and the light it sheds on how engineers specify materials in their product

Topics that form the basis of materials science

Thermodynamics, statistical mechanics, kinetics and physical chemistry, for phase stability, transformations (physical and chemical) and diagrams.

Crystallography and chemical bonding, for understanding how atoms in a material are arranged.

Mechanics, to understand the mechanical properties of materials and their structural applications.


Solid-state physics and quantum mechanics, for the understanding of the electronic, thermal, magnetic, chemical, structural and optical properties of materials.

Diffraction and wave mechanics, for the characterization of materials.

Chemistry and polymer science, for the understanding of plastics, colloids, ceramics, liquid crystals, solid state chemistry, and polymers.

Biology, for the integration of materials into biological systems.

Continuum mechanics and statistics, for the study of fluid flows and ensemble systems.

Mechanics of materials, for the study of the relation between the mechanical behavior of materials and their microstructures.

Important Journals

Chemistry of Materials
Nature Materials
Acta Materialia
JOM
Advanced Materials
Computational Materials Science
Advanced Functional Materials
Journal of Materials Chemistry
Journal of Materials Online (open access)
Metallurgical and Materials Transactions
Journal of Materials Research
Journal of Materials Science
Federation of European Materials Science Societies Newsletter
AMMTIAC eNews/Quarterly – advanced materials, manufacturing, and testing (free subscription)

*****************************************************************

Mathematics

Mathematics is the body of knowledge centered on such concepts as quantity, structure, space, and change, and also the academic discipline that studies them. Benjamin Peirce called it "the science that draws necessary conclusions". Other practitioners of mathematics maintain that mathematics is the science of pattern, and that mathematicians seek out patterns, whether found in numbers, space, science, computers, imaginary abstractions, or elsewhere. Mathematicians explore such concepts, aiming to formulate new conjectures and establish their truth by rigorous deduction from appropriately chosen axioms and definitions.

Through the use of abstraction and logical reasoning, mathematics evolved from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Knowledge and use of basic mathematics have always been


an inherent and integral part of individual and group life. Refinements of the basic ideas are visible in mathematical texts originating in the ancient Egyptian, Mesopotamian, Indian, Chinese, Greek and Islamic worlds. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. The development continued in fitful bursts until the Renaissance period of the 16th century, when mathematical innovations interacted with new scientific discoveries, leading to an acceleration in research that continues to the present day.

Today, mathematics is used throughout the world in many fields, including natural science, engineering, medicine, and the social sciences such as economics. Applied mathematics, the application of mathematics to such fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new disciplines. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind, although applications for what began as pure mathematics are often discovered later.

Etymology

The word "mathematics" (Greek: μαθηματικά or mathēmatiká) comes from the Greek μάθημα (máthēma), which means learning, study, science, and additionally came to have the narrower and more technical meaning "mathematical study", even in Classical times. Its adjective is μαθηματικός (mathēmatikós), related to learning, or studious, which likewise further came to mean mathematical. In particular, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), in Latin ars mathematica, meant the mathematical art.

The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural τα μαθηματικά (ta mathēmatiká), used by Aristotle, and meaning roughly "all things mathematical". In English, however, the noun mathematics takes singular verb forms. It is often shortened to math in English-speaking North America and maths elsewhere.

History

The evolution of mathematics might be seen as an ever-increasing series of abstractions, or alternatively an expansion of subject matter. The first abstraction was probably that of numbers. The realization that two apples and two oranges have something in common was a breakthrough in human thought. In addition to recognizing how to count physical objects, prehistoric peoples also recognized how to count abstract quantities, like time: days, seasons, years. Arithmetic (addition, subtraction, multiplication and division) naturally followed.

Further steps needed writing, or some other system for recording numbers, such as tallies or the knotted strings called quipu used by the Inca to store numerical data. Numeral systems have been many and diverse, with the first known written numerals created by Egyptians in Middle Kingdom texts such as the Rhind Mathematical Papyrus. The decimal place-value system in use today, including the concept of zero, was developed in India.

From the beginnings of recorded history, the major disciplines within mathematics arose out of the need to do calculations relating to taxation and commerce, to understand the relationships among numbers, to measure land, and to predict astronomical events.


These needs can be roughly related to the broad subdivision of mathematics into the studies of quantity, structure, space, and change.

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries have been made throughout history and continue to be made today. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."

Inspiration, pure and applied mathematics, and aesthetics

Mathematics arises wherever there are difficult problems that involve quantity, structure, space, or change. At first these were found in commerce, land measurement and later astronomy; nowadays, all sciences suggest problems studied by mathematicians, and many problems arise within mathematics itself. For example, Richard Feynman invented the Feynman path integral using a combination of mathematical reasoning and physical insight, and today's string theory continues to inspire new mathematics. Some mathematics is only relevant in the area that inspired it, and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many areas, and joins the general stock of mathematical concepts. The remarkable fact that even the "purest" mathematics often turns out to have practical applications is what Eugene Wigner has called "the unreasonable effectiveness of mathematics."

As in most areas of study, the explosion of knowledge in the scientific age has led to specialization in mathematics. One major distinction is between pure mathematics and applied mathematics. Several areas of applied mathematics have merged with related traditions outside of mathematics and become disciplines in their own right, including statistics, operations research, and computer science.

For those who are mathematically inclined, there is often a definite aesthetic aspect to much of mathematics. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics and inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds calculation, such as the fast Fourier transform. G. H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. Mathematicians often strive to find proofs of theorems that are particularly elegant, a quest Paul Erdős often referred to as finding proofs from "The Book" in which God had written down his favorite proofs. The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.

Notation, language, and rigor

Most of the mathematical notation in use today was not invented until the 16th century. Before that, mathematics was written out in words, a painstaking process that limited mathematical discovery. In the 18th century, Euler was responsible for many of the notations in use today. Modern notation makes mathematics much easier for the professional, but beginners often find it daunting. It is extremely compressed: a few symbols contain a great deal of information. Like musical notation, modern mathematical notation has a strict syntax and encodes information that would be difficult to write in any other way.

Mathematical language can also be hard for beginners. Words such as "or" and "only" have more precise meanings than in everyday speech. Also confusing to beginners, words such as "open" and "field" have been given specialized mathematical meanings. Mathematical jargon includes technical terms such as homeomorphism and integrable. But there is a reason for special notation and technical jargon: mathematics requires more precision than everyday speech. Mathematicians refer to this precision of language and logic as "rigor".

Rigor is fundamentally a matter of mathematical proof. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, of which many instances have occurred in the history of the subject. The level of rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems inherent in the definitions used by Newton would lead to a resurgence of careful analysis and formal proof in the 19th century. Today, mathematicians continue to argue among themselves about computer-assisted proofs. Since large computations are hard to verify, such proofs may not be sufficiently rigorous. Axioms in traditional thought were "self-evident truths", but that conception is problematic. At a formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the context of all derivable formulas of an axiomatic system. It was the goal of Hilbert's program to put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness theorem every (sufficiently powerful) axiomatic system has undecidable formulas; and so a final axiomatization of mathematics is impossible. Nonetheless mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof could be cast into formulas within set theory.

Mathematics as science

Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". In the original Latin Regina Scientiarum, as well as in German Königin der Wissenschaften, the word corresponding to science means (field of) knowledge. Indeed, this is also the original meaning in English, and there is no doubt that mathematics is in this sense a science. The specialization restricting the meaning to natural science is of later date. If one considers science to be strictly about the physical world, then mathematics, or at least pure mathematics, is not a science. Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper. However, in the 1930s important work in mathematical logic showed that mathematics cannot be reduced to logic, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.

An alternative view is that certain scientific fields (such as theoretical physics) are mathematics with axioms that are intended to correspond to reality. In fact, the theoretical physicist J. M. Ziman proposed that science is public knowledge and thus includes mathematics. In any case, mathematics shares much in common with many fields in the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play a role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics, weakening the objection that mathematics does not use the scientific method. In his 2002 book A New Kind of Science, Stephen Wolfram argues that computational mathematics deserves to be explored empirically as a scientific field in its own right.

The opinions of mathematicians on this matter are varied. Many mathematicians feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). It is common to see universities divided into sections that include a division of Science and Mathematics, indicating that the fields are seen as being allied but that they do not coincide. In practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. This is one of many issues considered in the philosophy of mathematics.

Mathematical awards are generally kept separate from their equivalents in science.


The most prestigious award in mathematics is the Fields Medal, established in 1936 and now awarded every 4 years. It is often considered, misleadingly, the equivalent of science's Nobel Prizes. The Wolf Prize in Mathematics, instituted in 1979, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. These are awarded for a particular body of work, which may be innovation, or resolution of an outstanding problem in an established field. A famous list of 23 such open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. Solution of each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated in Hilbert's problems.

Fields of mathematics

As noted above, the major disciplines within mathematics first arose out of the need to do calculations in commerce, to understand the relationships between numbers, to measure land, and to predict astronomical events. These four needs can be roughly related to the broad subdivision of mathematics into the study of quantity, structure, space, and change (i.e., arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty.

Quantity

The study of quantity starts with numbers, first the familiar natural numbers and integers ("whole numbers") and arithmetical operations on them, which are characterized in arithmetic. The deeper properties of integers are studied in number theory, whence such popular results as Fermat's last theorem. Number theory also holds two widely considered unsolved problems: the twin prime conjecture and Goldbach's conjecture.

As the number system is further developed, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of counting to infinity. Another area of study is size, which leads to the cardinal numbers


and then to another conception of infinity: the aleph numbers, which allow meaningful comparison of the size of infinitely large sets.

Natural numbers – Integers – Rational numbers – Real numbers – Complex numbers


Structure

Many mathematical objects, such as sets of numbers and functions, exhibit internal structure. The structural properties of these objects are investigated in the study of groups, rings, fields and other abstract systems, which are themselves such objects. This is the field of abstract algebra. An important concept here is that of vectors, generalized to vector spaces, and studied in linear algebra. The study of vectors combines three of the fundamental areas of mathematics: quantity, structure, and space. Vector calculus expands the field into a fourth fundamental area, that of change.

Number theory – Abstract algebra – Group theory – Order theory

Space

The study of space originates with geometry, in particular Euclidean geometry. Trigonometry combines space and numbers, and encompasses the well-known Pythagorean theorem. The modern study of space generalizes these ideas to include higher-dimensional geometry, non-Euclidean geometries (which play a central role in general relativity) and topology. Quantity and space both play a role in analytic geometry, differential geometry, and algebraic geometry. Within differential geometry are the concepts of fiber bundles and calculus on manifolds. Within algebraic geometry is the description of geometric objects as solution sets of polynomial equations, combining the concepts of quantity and space, and also the study of topological groups, which combine structure and space. Lie groups are used to study space, structure, and change. Topology in all its many ramifications may have been the greatest growth area in 20th-century mathematics, and includes the long-standing Poincaré conjecture and the controversial four color theorem, whose only proof, by computer, has never been verified by a human.

Geometry – Trigonometry – Differential geometry – Topology – Fractal geometry

Change

Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here, as a central concept describing a changing quantity. The rigorous study of real numbers and real-valued functions is known as real analysis, with complex analysis the equivalent field for the complex numbers. The Riemann hypothesis, one of the most fundamental open questions in mathematics, is drawn from complex analysis. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit unpredictable yet still deterministic behavior.
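The logistic map is a standard one-line example of such deterministic yet unpredictable behaviour (a sketch; the parameter r = 4 lies in the map's chaotic range, and the starting values are arbitrary):

    # Iterate the logistic map x -> r * x * (1 - x) and watch nearby orbits diverge.
    def logistic_orbit(x0, r=4.0, steps=10):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_orbit(0.200000)
    b = logistic_orbit(0.200001)  # a perturbation of one part in a million
    print([round(abs(x - y), 6) for x, y in zip(a, b)])  # the gap grows rapidly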

Calculus, Vector calculus, Differential equations, Dynamical systems, Chaos theory

Foundations and philosophy

In order to clarify the foundations of mathematics, the fields of mathematical logic and set theory were developed, as well as category theory, which is still in development.

Mathematical logic is concerned with setting mathematics on a rigorous axiomatic framework, and studying the results of such a framework. As such, it is home to Gödel's first incompleteness theorem, perhaps the most widely celebrated result in logic, which (informally) implies that any formal system that contains basic arithmetic, if sound (meaning that all theorems that can be proven are true), is necessarily incomplete (meaning that there are true theorems which cannot be proved in that system). Gödel showed how to construct, whatever the given collection of number-theoretical axioms, a formal statement in the logic that is a true number-theoretical fact, but which does not follow from those axioms. Therefore no formal system is a complete axiomatization of full number theory. Modern logic is divided into recursion theory, model theory, and proof theory, and is closely linked to theoretical computer science.

Mathematical logic, Set theory, Category theory

Discrete mathematics

Discrete mathematics is the common name for the fields of mathematics most generally useful in theoretical computer science. This includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most powerful known model: the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with the rapid advance of computer hardware. Finally, information theory is concerned with the amount of data that can be stored on a given medium, and hence with concepts such as compression and entropy.

As a relatively new field, discrete mathematics has a number of fundamental open problems. The most famous of these is the "P=NP?" problem, one of the Millennium Prize Problems.

Combinatorics, Theory of computation, Cryptography, Graph theory

Applied mathematics

Applied mathematics considers the use of abstract mathematical tools in solving concrete problems in the sciences, business, and other areas. An important field in applied mathematics is statistics, which uses probability theory as a tool and allows the description, analysis, and prediction of phenomena where chance plays a role. Most experiments, surveys and observational studies require the informed use of statistics. (Many statisticians, however, do not consider themselves to be mathematicians, but rather part of an allied group.) Numerical analysis investigates computational methods for efficiently solving a broad range of mathematical problems that are typically too large for human numerical capacity; it includes the study of rounding errors and other sources of error in computation.

Mathematical physics, Mathematical fluid dynamics, Numerical analysis, Optimization, Probability, Statistics, Financial mathematics, Game theory

Common misconceptions

Mathematics is not a closed intellectual system, in which everything has already been worked out. There is no shortage of open problems. Mathematicians publish many thousands of papers embodying new discoveries in mathematics every month.

Mathematics is not numerology, nor is it accountancy; nor is it restricted to arithmetic.

Pseudomathematics is a form of mathematics-like activity undertaken outside academia, and occasionally by mathematicians themselves. It often consists of determined attacks on famous questions, in the form of proof attempts made in an isolated way (that is, long papers not supported by previously published theory). Its relationship to generally accepted mathematics is similar to that between pseudoscience and real science. The misconceptions involved are normally based on:

misunderstanding of the implications of mathematical rigor; attempts to circumvent the usual criteria for publication of mathematical papers in a learned journal after peer review, often in the belief that the journal is biased against the author; and lack of familiarity with, and therefore underestimation of, the existing literature.


The case of Kurt Heegner's work shows that the mathematical establishment is neither infallible, nor unwilling to admit error in assessing 'amateur' work. And like astronomy, mathematics owes much to amateur contributors such as Fermat and Mersenne.

Mathematics and physical reality

Mathematical concepts and theorems need not correspond to anything in the physical world. Insofar as a correspondence does exist, while mathematicians and physicists may select axioms and postulates that seem reasonable and intuitive, it is not necessary for the basic assumptions within an axiomatic system to be true in an empirical or physical sense. Thus, while many axiom systems are derived from our perceptions and experiments, they are not dependent on them.

For example, we could say that the physical concept of two apples may be accurately modeled by the natural number 2. On the other hand, we could also say that the natural numbers are not an accurate model because there is no standard "unit" apple and no two apples are exactly alike. The modeling idea is further complicated by the possibility of fractional or partial apples. So while it may be instructive to visualize the axiomatic definition of the natural numbers as collections of apples, the definition itself is not dependent upon nor derived from any actual physical entities.

Nevertheless, mathematics remains extremely useful for solving real-world problems. This fact led physicist Eugene Wigner to write an article titled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

*****************************************************************

Propulsion

Jet engine

Rocket

Spacecraft propulsion

Electric propulsion

Jet Engine

A jet engine is a reaction engine that discharges a fast moving jet of fluid to generate thrust in accordance with Newton's third law of motion. This broad definition of jet engines includes turbojets, turbofans, rockets, ramjets, pulse jets and pump-jets. In general, most jet engines are internal combustion engines, but non-combusting forms also exist. In common usage, the term 'jet engine' generally refers to a gas turbine driven internal combustion engine: an engine with a rotary compressor powered by a turbine ("Brayton cycle"), with the leftover power providing thrust. These types of jet engines are primarily used by jet aircraft for long distance travel. Early jet aircraft used turbojet engines, which were relatively inefficient for subsonic flight. Modern jet aircraft usually use high-bypass turbofan engines, which give high speeds as well as, over long distances, better fuel efficiency than many other forms of transport. About 7.2% of the world's oil was ultimately consumed by jet engines in 2004. In 2007, the cost of jet fuel, while highly variable from one airline to another, averaged 26.5% of total operating costs, making it the single largest operating expense for most airlines.

History

Jet engines can be dated back to the first century AD, when Hero of Alexandria invented the aeolipile. This used steam power directed through two jet nozzles to cause a sphere to spin rapidly on its axis. So far as is known, it was little used for supplying mechanical power, and the potential practical applications of Hero's invention of the jet engine were not recognized; it was simply considered a curiosity. Jet propulsion only literally and figuratively took off with the invention of the rocket by the Chinese in the 11th century. Rocket exhaust was initially used in a modest way for fireworks but gradually progressed to propel formidable weaponry; and there the technology stalled for hundreds of years. In Ottoman Turkey in 1633, Lagari Hasan Çelebi took off with what was described as a cone-shaped rocket and then glided with wings into a successful landing, winning a position in the Ottoman army. However, this was essentially a stunt.

The problem was that rockets are simply too inefficient at low speeds to be useful for general aviation. Instead, by the 1930s, the piston engine in its many different forms (rotary and static radial, air-cooled and liquid-cooled inline) was the only type of powerplant available to aircraft designers. This was acceptable as long as only low-performance aircraft were required, and indeed those were all that were available.

However, engineers were beginning to realize that the piston engine was self-limiting in terms of the maximum performance which could be attained; the limit was essentially one of propeller efficiency. This seemed to peak as blade tips approached the speed of sound. If engine, and thus aircraft, performance were ever to increase beyond such a barrier, a way would have to be found to radically improve the design of the piston engine, or a wholly new type of powerplant would have to be developed. This was the motivation behind the development of the gas turbine engine, commonly called a "jet" engine, which would become almost as revolutionary to aviation as the Wright brothers' first flight.

The earliest attempts at jet engines were hybrid designs in which an external power source first compressed air, which was then mixed with fuel and burned for jet thrust. In one such system, called a thermojet by Secondo Campini but more commonly known as a motorjet, the air was compressed by a fan driven by a conventional piston engine. Examples of this type of design were Henri Coandă's Coandă-1910 aircraft, the much later Campini Caproni CC.2, and the Japanese Tsu-11 engine intended to power Ohka kamikaze planes towards the end of World War II. None were entirely successful, and the CC.2 ended up being slower than the same design with a traditional engine and propeller combination.

The key to a practical jet engine was the gas turbine, used to extract energy from the engine itself to drive the compressor. The gas turbine was not an idea developed in the 1930s: the patent for a stationary turbine was granted to John Barber in England in 1791. The first gas turbine to run self-sustaining was built in 1903 by Norwegian engineer Ægidius Elling. The first patents for jet propulsion were issued in 1917. Limitations in design, practical engineering and metallurgy prevented such engines from reaching manufacture. The main problems were safety, reliability, weight and, especially, sustained operation. In 1923, Edgar Buckingham of the US National Bureau of Standards published a report expressing scepticism that jet engines would be economically competitive with propeller-driven aircraft at low altitude and the airspeeds of the period: "there does not appear to be, at present, any prospect whatever that jet propulsion of the sort here considered will ever be of practical value, even for military purposes."

In 1928, RAF College Cranwell cadet Frank Whittle formally submitted his ideas for a turbojet to his superiors. In October 1929 he developed his ideas further. On 16 January 1930 in England, Whittle submitted his first patent (granted in 1932). The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors were made possible by ideas from A. A. Griffith.

In 1935 Hans von Ohain started work on a similar design in Germany, apparently unaware of Whittle's work. His first engine was strictly experimental and could only run under external power, but he was able to demonstrate the basic concept. Ohain was then introduced to Ernst Heinkel, one of the larger aircraft industrialists of the day, who immediately saw the promise of the design. Heinkel had recently purchased the Hirth engine company, and Ohain and his master machinist Max Hahn were set up there as a new division of the Hirth company. They had their first HeS 1 centrifugal engine running by September 1937. Unlike Whittle's design, Ohain used hydrogen as fuel, supplied under external pressure. Their subsequent designs culminated in the gasoline-fuelled HeS 3 of 1,100 lbf (5 kN), which was fitted to Heinkel's simple and compact He 178 airframe and flown by Erich Warsitz in the early morning of August 27, 1939, from Marienehe aerodrome, an impressively short time for development. The He 178 was the world's first jet plane.

Meanwhile, Whittle's engine was starting to look useful, and his Power Jets Ltd. started receiving Air Ministry money. In 1941 a flyable version of the engine called the W.1, capable of 1000 lbf (4 kN) of thrust, was fitted to the Gloster E28/39 airframe specially built for it, and first flew on May 15, 1941 at RAF Cranwell.

A British aircraft engine designer, Frank Halford, working from Whittle's ideas developed a "straight through" version of the centrifugal jet; his design became the de Havilland Goblin.

One problem with both of these early designs, which are called centrifugal-flow engines, was that the compressor worked by "throwing" (accelerating) air outward from the central intake to the outer periphery of the engine, where the air was then compressed by a divergent duct setup, converting its velocity into pressure. An advantage of this design was that it was already well understood, having been implemented in centrifugal superchargers.

Austrian Anselm Franz of Junkers' engine division (Junkers Motoren or Jumo) addressed these problems with the introduction of the axial-flow compressor. Essentially, this is a turbine in reverse. Air coming in the front of the engine is blown towards the rear of the engine by a fan stage (convergent ducts), where it is crushed against a set of non-rotating blades called stators (divergent ducts). Each such stage provides far less compression than a centrifugal compressor, so a number of these fan-stator pairs are placed in series to get the needed compression. Even with all the added complexity, the resulting engine is much smaller in diameter and thus more aerodynamic. Jumo was assigned the next engine number in the RLM numbering sequence, 4, and the result was the Jumo 004 engine. After many lesser technical difficulties were solved, mass production of this engine started in 1944 as a powerplant for the world's first jet-fighter aircraft, the Messerschmitt Me 262 (and later the world's first jet-bomber aircraft, the Arado Ar 234). A variety of reasons conspired to delay the engine's availability, and this delay caused the fighter to arrive too late to decisively impact Germany's position in World War II. Nonetheless, it is remembered as the first use of jet engines in service.

In the UK, the first axial-flow engine, the Metrovick F.2, ran in 1941 and was first flown in 1943. Although more powerful than the centrifugal designs of the time, the Ministry considered its complexity and unreliability a drawback in wartime. The work at Metrovick led to the Armstrong Siddeley Sapphire engine, which would be built in the US as the J65.

Following the end of the war the German jet aircraft and jet engines were extensively studied by the victorious allies and contributed to work on early Soviet and US jet fighters. The legacy of the axial-flow engine is seen in the fact that practically all jet engines on fixed wing aircraft have had some inspiration from this design.

Centrifugal-flow engines have improved since their introduction. With improvements in bearing technology, the shaft speed of the engine was increased, greatly reducing the diameter of the centrifugal compressor. The short engine length remains an advantage of this design, particularly for use in helicopters where overall size is more important than frontal area. Also, its engine components are robust; axial-flow compressors are more liable to foreign object damage.

Although German designs were more advanced aerodynamically, the combination of simplicity and advanced British metallurgy meant that Whittle-derived designs were far more reliable than their German counterparts. British engines were also licensed widely in the US (see Tizard Mission) and were sold to the USSR, which reverse engineered them; the Nene went on to power the famous MiG-15. American and Soviet designs, independent axial-flow types for the most part, would not come fully into their own until the 1960s, although the General Electric J47 provided excellent service in the F-86 Sabre in the 1950s.

By the 1950s the jet engine was almost universal in combat aircraft, with the exception of cargo, liaison and other specialty types. By this point some of the British designs were already cleared for civilian use, and had appeared on early models like the de Havilland Comet and Canadair Jetliner. By the 1960s all large civilian aircraft were also jet powered, leaving the piston engine in niche roles here as well.

Relentless improvements in the turboprop pushed the piston engine out of the mainstream entirely, leaving it serving only the smallest general aviation designs, and some use in drone aircraft. The ascension of the jet engine to almost universal use in aircraft took well under twenty years.

However, the story was not quite at an end, for the efficiency of turbojet engines was still rather worse than that of piston engines. By the 1970s, though, the advent of high-bypass jet engines, an innovation not foreseen by early commentators such as Edgar Buckingham, operating at speeds and altitudes that would have seemed absurd to them, finally pushed fuel efficiency beyond that of the best piston and propeller engines. The dream of fast, safe, economical travel around the world had arrived, and the dour predictions that jet engines would never amount to much, however well founded for their time, were laid to rest.

Types

There are a large number of different types of jet engines, all of which achieve propulsion from a high speed exhaust jet.

Water jet
Description: For propelling boats; squirts water out the back through a nozzle.
Advantages: Can run in shallow water; high acceleration; no risk of engine overload (unlike propellers); less noise and vibration; highly manoeuvrable at all boat speeds; high speed efficiency; less vulnerable to damage from debris; very reliable; more load flexibility; less harmful to wildlife.
Disadvantages: Can be less efficient than a propeller at low speed; more expensive; higher weight in boat due to entrained water; will not perform well if the boat is heavier than the jet is sized for.

Motorjet
Description: Most primitive airbreathing jet engine. Essentially a supercharged piston engine with a jet exhaust.
Advantages: Higher exhaust velocity than a propeller, offering better thrust at high speed.
Disadvantages: Heavy, inefficient and underpowered.

Turbojet
Description: Generic term for simple turbine engine.
Advantages: Simplicity of design; efficient at supersonic speeds (~M2).
Disadvantages: A basic design that misses many improvements in efficiency and power for subsonic flight; relatively noisy.

Low-bypass turbofan
Description: One- or two-stage fan added in front bypasses a proportion of the air through a bypass chamber surrounding the core. Compared with its turbojet ancestor, this allows for more efficient operation with somewhat less noise. This is the engine of high-speed military aircraft, some smaller private jets, and older civilian airliners such as the Boeing 707, the McDonnell Douglas DC-8, and their derivatives.
Advantages: As with the turbojet, the design is aerodynamic, with only a modest increase in diameter over the turbojet required to accommodate the bypass fan and chamber. It is capable of supersonic speeds with minimal thrust drop-off at high speeds and altitudes, yet is still more efficient than the turbojet in subsonic operation.
Disadvantages: Noisier and less efficient than the high-bypass turbofan, with less static (Mach 0) thrust; added complexity to accommodate dual-shaft designs; less efficient than a turbojet around M2 due to higher cross-sectional area.

High-bypass turbofan
Description: First-stage compressor drastically enlarged to provide bypass airflow around the engine core, and it provides significant amounts of thrust. Compared to the low-bypass turbofan and no-bypass turbojet, the high-bypass turbofan works on the principle of moving a great deal of air somewhat faster, rather than a small amount extremely fast, which translates into less noise. The most common form of jet engine in civilian use today; used in airliners like the Boeing 747, most 737s, and all Airbus aircraft.
Advantages: Quieter due to greater mass flow and lower total exhaust speed; more efficient for a useful range of subsonic airspeeds for the same reason; cooler exhaust temperature. High-bypass variants exhibit good fuel economy.
Disadvantages: Greater complexity (additional ducting, usually multiple shafts) and the need to contain heavy blades. Fan diameter can be extremely large, especially in high-bypass turbofans such as the GE90. More subject to FOD and ice damage. Top speed is limited due to the potential for shockwaves to damage the engine. Thrust lapse at higher speeds necessitates huge diameters and introduces additional drag.

Rocket
Description: Carries all propellants and oxidants on board; emits jet for propulsion.
Advantages: Very few moving parts; Mach 0 to Mach 25+; efficient at very high speed (> Mach 10.0 or so); thrust/weight ratio over 100; no complex air inlet; high compression ratio; very high speed (hypersonic) exhaust; good cost/thrust ratio; fairly easy to test; works in a vacuum (indeed works best exoatmospheric, which is kinder on the vehicle structure at high speed); fairly small surface area to keep cool; no turbine in hot exhaust stream.
Disadvantages: Needs lots of propellant; very low specific impulse, typically 100-450 seconds. Extreme thermal stresses of the combustion chamber can make reuse harder. Typically requires carrying oxidiser on board, which increases risks. Extraordinarily noisy.

Ramjet
Description: Intake air is compressed entirely by the speed of the oncoming air and the duct shape (divergent).
Advantages: Very few moving parts; Mach 0.8 to Mach 5+; efficient at high speed (> Mach 2.0 or so); lightest of all air-breathing jets (thrust/weight ratio up to 30 at optimum speed); cooling much easier than turbojets as there are no turbine blades to cool.
Disadvantages: Must have a high initial speed to function; inefficient at slow speeds due to poor compression ratio; difficult to arrange shaft power for accessories; usually limited to a small range of speeds; intake flow must be slowed to subsonic speeds; noisy; fairly difficult to test; finicky to keep lit.

Turboprop (turboshaft similar)
Description: Strictly not a jet at all; a gas turbine engine is used as powerplant to drive a propeller shaft (or rotor in the case of a helicopter).
Advantages: High efficiency at lower subsonic airspeeds (300 knots plus); high shaft power to weight.
Disadvantages: Limited top speed (aeroplanes); somewhat noisy; complex transmission.

Propfan/unducted fan
Description: Turboprop engine drives one or more propellers; similar to a turbofan without the fan cowling.
Advantages: Higher fuel efficiency; potentially less noisy than turbofans; could lead to higher-speed commercial aircraft; popular in the 1980s during fuel shortages.
Disadvantages: Development of propfan engines has been very limited; typically more noisy than turbofans; complexity.

Pulsejet
Description: Air is compressed and combusted intermittently instead of continuously. Some designs use valves.
Advantages: Very simple design; commonly used on model aircraft.
Disadvantages: Noisy; inefficient (low compression ratio); works poorly on a large scale; valves on valved designs wear out quickly.

Pulse detonation engine
Description: Similar to a pulsejet, but combustion occurs as a detonation instead of a deflagration; may or may not need valves.
Advantages: Maximum theoretical engine efficiency.
Disadvantages: Extremely noisy; parts subject to extreme mechanical fatigue; hard to start detonation; not practical for current use.

Air-augmented rocket
Description: Essentially a ramjet where intake air is compressed and burnt with the exhaust from a rocket.
Advantages: Mach 0 to Mach 4.5+ (can also run exoatmospheric); good efficiency at Mach 2 to 4.
Disadvantages: Similar efficiency to rockets at low speed or exoatmospheric; inlet difficulties; a relatively undeveloped and unexplored type; cooling difficulties; very noisy; thrust/weight ratio similar to ramjets.

Scramjet
Description: Similar to a ramjet without a diffuser; airflow through the entire engine remains supersonic.
Advantages: Few mechanical parts; can operate at very high Mach numbers (Mach 8 to 15) with good efficiencies.
Disadvantages: Still in development stages; must have a very high initial speed to function (Mach >6); cooling difficulties; very poor thrust/weight ratio (~2); extreme aerodynamic complexity; airframe difficulties; testing difficulties/expense.

Turborocket
Description: A turbojet where an additional oxidizer such as oxygen is added to the airstream to increase maximum altitude.
Advantages: Very close to existing designs; operates at very high altitude; wide range of altitude and airspeed.
Disadvantages: Airspeed limited to the same range as a turbojet engine; carrying an oxidizer like LOX can be dangerous; much heavier than simple rockets.

Precooled jets / LACE
Description: Intake air is chilled to very low temperatures at the inlet in a heat exchanger before passing through a ramjet or turbojet engine. Can be combined with a rocket engine for orbital insertion.
Advantages: Easily tested on the ground. Very high thrust/weight ratios are possible (~14), together with good fuel efficiency over a wide range of airspeeds (Mach 0-5.5+); this combination of efficiencies may permit launching to orbit in a single stage, or very rapid, very long distance intercontinental travel.
Disadvantages: Exists only at the lab prototyping stage (examples include the RB545, SABRE and ATREX); requires liquid hydrogen fuel, which has very low density and needs heavily insulated tankage.

All jet engines are reaction engines that generate thrust by emitting a jet of fluid rearwards at relatively high speed. The forces on the inside of the engine needed to create this jet give a strong thrust on the engine which pushes the craft forwards.

Jet engines make their jet either from propellant carried in tankage attached to the engine (as in a rocket), or by sucking in external fluid (very typically air) and expelling it at higher speed, or, more commonly, from a combination of the two sources.

Thrust

The thrust of the engine is equal to the fluid mass flow multiplied by the speed at which the engine emits this mass:

I = m c

where m is the fluid mass per second and c is the exhaust speed. In other words, a vehicle gets the same thrust whether it outputs a lot of exhaust relatively slowly or a little exhaust very quickly.

However, when a vehicle moves at a velocity v, the fluid moves towards it, creating an opposing ram drag at the intake:

ram drag = m v

Most types of jet engine have an intake, which provides the bulk of the fluid exiting the exhaust. Conventional rocket motors, however, do not have an intake, the oxidizer and fuel both being carried within the vehicle. Therefore, rocket motors do not have ram drag; the gross thrust of the nozzle is the net thrust of the engine. Consequently, the thrust characteristics of a rocket motor are completely different from that of an air breathing jet engine.


The jet engine with an intake is only useful if the velocity of the gas from the engine, c, is greater than the vehicle velocity, v, as the net engine thrust is the same as if the gas were emitted with the velocity c-v. So the thrust is actually equal to

S = m (c-v)
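These relations are simple enough to check numerically. The following minimal Python sketch computes gross thrust, ram drag and net thrust from S = m (c - v); the numbers are illustrative assumptions, not data from this text.

```python
# Net thrust of an air-breathing jet engine, following S = m (c - v).
# All numbers below are illustrative assumptions, not measured data.

def net_thrust(m_dot, c, v):
    """Net thrust in newtons.

    m_dot : fluid mass flow rate through the engine, kg/s
    c     : exhaust (jet) velocity, m/s
    v     : vehicle flight velocity, m/s
    """
    gross_thrust = m_dot * c   # momentum leaving the nozzle per second
    ram_drag = m_dot * v       # momentum of the incoming stream per second
    return gross_thrust - ram_drag

# Example: 100 kg/s of air, 600 m/s jet, 250 m/s flight speed.
print(net_thrust(100.0, 600.0, 250.0))  # 35000.0 N

# A rocket has no intake, hence no ram drag: gross thrust = net thrust.
print(net_thrust(100.0, 600.0, 0.0))    # 60000.0 N
```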

Energy efficiency

For all jet engines the propulsive efficiency (essentially energy efficiency) is highest when the engine emits an exhaust jet at a speed that is the same as, or nearly the same as, the vehicle velocity. The exact formula for air-breathing engines, as given in the literature, is

eta = 2 / (1 + c/v)

where c is the exhaust speed and v is the vehicle velocity: the efficiency approaches 100% as the exhaust speed approaches the vehicle speed.
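As a quick numerical illustration of this formula, the sketch below compares a fast, turbojet-style jet with a slower, turbofan-style jet at the same flight speed; the velocities are assumed values chosen only to contrast the two engine classes.

```python
# Propulsive (Froude) efficiency of an air-breathing engine:
# eta = 2 / (1 + c/v), highest when jet speed c approaches flight speed v.

def propulsive_efficiency(c, v):
    """c: exhaust velocity (m/s), v: flight velocity (m/s)."""
    return 2.0 / (1.0 + c / v)

# Illustrative comparison at a 250 m/s cruise (assumed values):
for label, c in [("turbojet-like, fast jet", 900.0),
                 ("high-bypass-like, slow jet", 350.0)]:
    print(label, round(propulsive_efficiency(c, 250.0), 2))
# Prints ~0.43 for the fast jet and ~0.83 for the slow jet: the slower
# exhaust is markedly more efficient, matching the argument in the text.
```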

Noise

Noise is largely due to the violent turbulent mixing of the high-speed exhaust jet with the external air and, for supersonic jets, to shockwaves.

The intensity of the noise is proportional to the thrust as well as proportional to the fourth power of the jet velocity.

Generally then, the lower-speed exhaust jets emitted from engines such as high-bypass turbofans are the quietest, whereas the fastest jets are the loudest. By the scaling above, halving the jet velocity at constant thrust cuts the noise intensity by a factor of sixteen.

Although some variation in jet speed can often be arranged from a jet engine (such as by throttling back and adjusting the nozzle), it is difficult to vary the jet speed of an engine over a very wide range. Since engines for supersonic vehicles such as Concorde, military jets and rockets inherently need supersonic exhaust at top speed, these vehicles are especially noisy even at low speeds.

Common types

A turbojet engine is a type of internal combustion engine often used to propel aircraft. Air is drawn into the rotating compressor via the intake and is compressed, through successive stages, to a higher pressure before entering the combustion chamber. Fuel is mixed with the compressed air and ignited by flame in the eddy of a flame holder. This combustion process significantly raises the temperature and volume of the air. Hot combustion products leaving the combustor expand through a gas turbine, where power is extracted to drive the compressor. This expansion process reduces both the gas temperature and pressure but sufficient fuel is burnt so that both parameters are usually still well above ambient conditions at exit from the turbine. The gas stream is then expanded to ambient pressure via a propelling nozzle, producing a high velocity jet as the exhaust. If the jet velocity exceeds the aircraft flight velocity, there is a net forward thrust upon the airframe.


Under normal circumstances, the pumping action of the compressor prevents any backflow, thus facilitating the continuous-flow process of the engine. Indeed, the entire process is similar to a four-stroke cycle, but with induction, compression, ignition, expansion and exhaust taking place simultaneously, but in different sections of the engine. The efficiency of a jet engine is strongly dependent upon the overall pressure ratio (combustor entry pressure/intake delivery pressure) and the turbine inlet temperature of the cycle.
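Since efficiency is said to depend strongly on the overall pressure ratio, a minimal sketch of the ideal Brayton-cycle thermal efficiency shows why designers pursue higher pressure ratios. This is an idealisation, assuming a perfect gas with gamma = 1.4 and no component losses, not a model of any particular engine.

```python
# Ideal Brayton-cycle thermal efficiency as a function of overall
# pressure ratio (OPR): eta = 1 - OPR**(-(gamma - 1)/gamma).
# Assumptions: perfect gas, gamma = 1.4, no component losses.

GAMMA = 1.4  # ratio of specific heats for air

def brayton_efficiency(opr, gamma=GAMMA):
    return 1.0 - opr ** (-(gamma - 1.0) / gamma)

for opr in (5, 10, 20, 40):
    print(f"OPR {opr:>2}: ideal thermal efficiency {brayton_efficiency(opr):.2f}")
# Efficiency rises steeply at first, then flattens; real engines also hit
# turbine-inlet-temperature and compressor-stress limits well before this
# ideal curve does.
```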

It is also perhaps instructive to compare turbojet engines with propeller engines. Turbojet engines take a relatively small mass of air and accelerate it by a large amount, whereas a propeller takes a large mass of air and accelerates it by a small amount. The high-speed exhaust of a turbojet engine makes it efficient at high speeds (especially supersonic speeds) and high altitudes. On slower aircraft and those required to fly short stages, a gas turbine-powered propeller engine, commonly known as a turboprop, is more common and much more efficient. Very small aircraft generally use conventional piston engines to drive a propeller but small turboprops are getting smaller as engineering technology improves.

The turbojet described above is a single-spool design, in which a single shaft connects the turbine to the compressor. Two spool designs have two concentric turbine-compressor systems, that spin independently with the turbine and compressors for each section connected from opposite ends of the engine via concentric shafts. This allows for a higher compression ratio as well as improved compressor stability during engine throttle movements. Three spool designs also exist.

Turbofan engines

Most modern jet engines are actually turbofans, where the low pressure compressor acts as a fan, supplying supercharged air not only to the engine core, but to a bypass duct. The bypass airflow either passes to a separate 'cold nozzle' or mixes with low pressure turbine exhaust gases, before expanding through a 'mixed flow nozzle'.

Turbofans are used for airliners because they give an exhaust speed better matched to the subsonic airliner's flight speed. Conventional turbojet engines generate an exhaust that ends up travelling very fast backwards, which wastes energy. By emitting the exhaust so that it ends up travelling more slowly, better fuel consumption is achieved. In addition, the lower exhaust speed gives much lower noise.

In the 1960s there was little difference between civil and military jet engines, apart from the use of afterburning in some (supersonic) applications. Civil turbofans today have a low exhaust speed (low specific thrust, i.e. net thrust divided by airflow) to keep jet noise to a minimum and to improve fuel efficiency. Consequently the bypass ratio (bypass flow divided by core flow) is relatively high (ratios from 4:1 up to 8:1 are common). Only a single fan stage is required, because a low specific thrust implies a low fan pressure ratio.

Today's military turbofans, however, have a relatively high specific thrust, to maximize the thrust for a given frontal area, jet noise being of less concern in military uses relative to civil uses. Multistage fans are normally needed to reach the relatively high fan pressure ratio needed for high specific thrust. Although high turbine inlet temperatures are often employed, the bypass ratio tends to be low, usually significantly less than 2.0.
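The two figures of merit just defined, specific thrust and bypass ratio, are straightforward ratios. The sketch below shows how a civil-style and a military-style turbofan separate on both measures; all the flow and thrust numbers are assumed, illustrative values.

```python
# Specific thrust = net thrust / total intake airflow.
# Bypass ratio (BPR) = bypass airflow / core airflow.
# All numbers are illustrative assumptions for comparison only.

def specific_thrust(net_thrust_n, airflow_kg_s):
    return net_thrust_n / airflow_kg_s      # N per kg/s (units of m/s)

def bypass_ratio(bypass_flow, core_flow):
    return bypass_flow / core_flow

# Civil-style turbofan: lots of slow bypass air.
print(bypass_ratio(bypass_flow=480.0, core_flow=80.0))          # 6.0
print(specific_thrust(net_thrust_n=150e3, airflow_kg_s=560.0))  # ~268

# Military-style turbofan: high specific thrust, low bypass.
print(bypass_ratio(bypass_flow=40.0, core_flow=60.0))           # ~0.67
print(specific_thrust(net_thrust_n=70e3, airflow_kg_s=100.0))   # 700
```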

An approximate equation for calculating the net thrust of a jet engine, be it a turbojet or a mixed turbofan, is:

S = m (v_jfe - v_a)

where:

m = intake mass flow rate
v_jfe = fully expanded jet velocity (in the exhaust plume)
v_a = aircraft flight velocity

While the m v_jfe term represents the gross thrust of the nozzle, the m v_a term represents the ram drag of the intake.

Rocket engines

The third most common form of jet engine is the rocket engine.

Rocket engines are used for rockets because their extremely high exhaust velocity and independence from the atmospheric oxygen permits them to achieve spaceflight.

This is used for launching satellites, space exploration and manned access, and permitted landing on the moon in 1969.

However, the high exhaust speed and the heavier propellant mass result in less efficient flight than turbojets, so their use is largely restricted to very high altitudes or to applications where very high accelerations are needed, as rocket engines themselves have a very high thrust-to-weight ratio.

Major components


The major components of a jet engine are similar across the major different types of engines, although not all engine types have all components. The major parts include:

Cold section:

o Air intake (Inlet) — The standard reference frame for a jet engine is the aircraft itself. For subsonic aircraft, the air intake to a jet engine presents no special difficulties, and consists essentially of an opening which is designed to minimise drag, as with any other aircraft component. However, the air reaching the compressor of a normal jet engine must be travelling below the speed of sound, even for supersonic aircraft, to sustain the flow mechanics of the compressor and turbine blades. At supersonic flight speeds, shockwaves form in the intake system and reduce the recovered pressure at inlet to the compressor. So some supersonic intakes use devices, such as a cone or ramp, to increase pressure recovery by making more efficient use of the shock wave system.

o Compressor or Fan — The compressor is made up of stages. Each stage consists of vanes which rotate, and stators which remain stationary. As air is drawn deeper through the compressor, its temperature and pressure increase. Energy is derived from the turbine (see below), passed along the shaft.

o Bypass ducts — Much of the thrust of essentially all modern jet engines comes from air from the front compressor that bypasses the combustion chamber and gas turbine section and leads directly to the nozzle or afterburner (where fitted).

Common:

o Shaft — The shaft connects the turbine to the compressor, and runs most of the length of the engine. There may be as many as three concentric shafts, rotating at independent speeds, with as many sets of turbines and compressors. Other services, like a bleed of cool air, may also run down the shaft.

Hot section:

o Combustor or Can or Flameholders or Combustion Chamber — This is a chamber where fuel is continuously burned in the compressed air.

o Turbine — The turbine is a series of bladed discs that act like a windmill, gaining energy from the hot gases leaving the combustor. Some of this energy is used to drive the compressor, and in some turbine engines (i.e. turboprop, turboshaft or turbofan engines), energy is extracted by additional turbine discs and used to drive devices such as propellers, bypass fans or helicopter rotors. One type, a free turbine, is configured such that the turbine disc driving the compressor rotates independently of the discs that power the external components. Relatively cool air, bled from the compressor, may be used to cool the turbine blades and vanes, to prevent them from melting.

o Afterburner or reheat (chiefly UK) — (mainly military) Produces extra thrust by burning extra fuel, usually inefficiently, to significantly raise Nozzle Entry Temperature at the exhaust. Owing to a larger volume flow (i.e. lower density) at exit from the afterburner, an increased nozzle flow area is required to maintain satisfactory engine matching when the afterburner is alight.

o Exhaust or Nozzle — Hot gases leaving the engine exhaust to atmospheric pressure via a nozzle, the objective being to produce a high velocity jet. In most cases, the nozzle is convergent and of fixed flow area.

o Supersonic nozzle — If the Nozzle Pressure Ratio (Nozzle Entry Pressure/Ambient Pressure) is very high, to maximize thrust it may be worthwhile, despite the additional weight, to fit a convergent-divergent (de Laval) nozzle. As the name suggests, initially this type of nozzle is convergent, but beyond the throat (smallest flow area), the flow area starts to increase to form the divergent portion. The expansion to atmospheric pressure and supersonic gas velocity continues downstream of the throat, whereas in a convergent nozzle the expansion beyond sonic velocity occurs externally, in the exhaust plume. The former process is more efficient than the latter.
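The convergent-divergent geometry described in the last bullet obeys the standard isentropic area-Mach relation. The sketch below, assuming an ideal gas with gamma = 1.4 and isentropic flow, finds the supersonic exit Mach number for a given exit-to-throat area ratio by simple bisection.

```python
GAMMA = 1.4  # air, ideal-gas assumption

def area_ratio(mach, gamma=GAMMA):
    """Isentropic A/A* for a given Mach number."""
    t = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    return (1.0 / mach) * (2.0 * t / (gamma + 1.0)) ** (
        (gamma + 1.0) / (2.0 * (gamma - 1.0)))

def supersonic_exit_mach(a_ratio, lo=1.0, hi=10.0):
    """Invert area_ratio on the supersonic branch by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid) < a_ratio:   # A/A* grows with M for M > 1
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A modest exit/throat area ratio of 2 already gives a strongly
# supersonic exhaust:
print(round(supersonic_exit_mach(2.0), 2))  # ~2.2
```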

The various components named above have constraints on how they are put together to generate the most efficiency or performance. The performance and efficiency of an engine can never be taken in isolation; for example, the fuel/distance efficiency of a supersonic jet engine maximises at about Mach 2, whereas the drag of the vehicle carrying it increases as a square law and has much extra drag in the transonic region. The highest fuel efficiency for the overall vehicle is thus typically at Mach ~0.85.

When optimising an engine for its intended use, the important factors include air intake design, overall size, number of compressor stages (sets of blades), fuel type, number of exhaust stages, metallurgy of components, amount of bypass air used, where the bypass air is introduced, and many others. Consider, for instance, the design of the air intake.

Air intakes

Pitot intakes are the dominant type for subsonic applications. A subsonic pitot inlet is little more than a tube with an aerodynamic fairing around it.

At zero airspeed (i.e., rest), air approaches the intake from a multitude of directions: from directly ahead, radially, or even from behind the plane of the intake lip.

At low airspeeds, the streamtube approaching the lip is larger in cross-section than the lip flow area, whereas at the intake design flight Mach number the two flow areas are equal. At high flight speeds the streamtube is smaller, with excess air spilling over the lip.

Beginning around 0.85 Mach, shock waves can occur as the air accelerates through the intake throat.

Careful radiusing of the lip region is required to optimize intake pressure recovery (and distortion) throughout the flight envelope.

Supersonic inlets

Supersonic intakes exploit shock waves to decelerate the airflow to a subsonic condition at compressor entry.

There are basically two forms of shock waves:

1) Normal shock waves lie perpendicular to the direction of the flow. These form sharp fronts and shock the flow to subsonic speeds. Microscopically the air molecules smash into the subsonic crowd of molecules like alpha rays. Normal shock waves tend to cause a large drop in stagnation pressure. Basically, the higher the supersonic entry Mach number to a normal shock wave, the lower the subsonic exit Mach number and the stronger the shock (i.e. the greater the loss in stagnation pressure across the shock wave).

2) Conical (3-dimensional) and oblique shock waves (2D) are angled rearwards, like the bow wave on a ship or boat, and radiate from a flow disturbance such as a cone or a ramp. For a given inlet Mach number, they are weaker than the equivalent normal shock wave and, although the flow slows down, it remains supersonic throughout. Conical and oblique shock waves turn the flow, which continues in the new direction, until another flow disturbance is encountered downstream.

Note: Comments made regarding 3 dimensional conical shock waves, generally also apply to 2D oblique shock waves.
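The behaviour described above for normal shock waves, lower exit Mach numbers and larger stagnation-pressure losses as the entry Mach number rises, follows from the standard normal-shock relations. A minimal sketch, assuming an ideal gas with gamma = 1.4:

```python
GAMMA = 1.4  # ideal-gas assumption for air

def shock_exit_mach(m1, g=GAMMA):
    """Subsonic Mach number downstream of a normal shock (m1 > 1)."""
    return ((1.0 + 0.5 * (g - 1.0) * m1 ** 2) /
            (g * m1 ** 2 - 0.5 * (g - 1.0))) ** 0.5

def stagnation_pressure_ratio(m1, g=GAMMA):
    """p02/p01 across a normal shock."""
    a = ((g + 1.0) * m1 ** 2 / ((g - 1.0) * m1 ** 2 + 2.0)) ** (g / (g - 1.0))
    b = ((g + 1.0) / (2.0 * g * m1 ** 2 - (g - 1.0))) ** (1.0 / (g - 1.0))
    return a * b

for m1 in (1.5, 2.0, 3.0):
    print(m1, round(shock_exit_mach(m1), 2),
          round(stagnation_pressure_ratio(m1), 3))
# Exit Mach falls and the stagnation-pressure loss grows rapidly with m1,
# which is why supersonic intakes use oblique shocks to slow the flow first.
```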

A sharp-lipped version of the pitot intake, described above for subsonic applications, performs quite well at moderate supersonic flight speeds. A detached normal shock wave forms just ahead of the intake lip and 'shocks' the flow down to a subsonic velocity. However, as flight speed increases, the shock wave becomes stronger, causing a larger percentage decrease in stagnation pressure (i.e. poorer pressure recovery). An early US supersonic fighter, the F-100 Super Sabre, used such an intake.

More advanced supersonic intakes, excluding pitots:


a) exploit a combination of conical shock wave/s and a normal shock wave to improve pressure recovery at high supersonic flight speeds. Conical shock wave/s are used to reduce the supersonic Mach number at entry to the normal shock wave, thereby reducing the resultant overall shock losses.

b) have a design shock-on-lip flight Mach number, where the conical/oblique shock wave/s intercept the cowl lip, thus enabling the streamtube capture area to equal the intake lip area. However, below the shock-on-lip flight Mach number, the shock wave angle/s are less oblique, causing the streamline approaching the lip to be deflected by the presence of the cone/ramp. Consequently, the intake capture area is less than the intake lip area, which reduces the intake airflow. Depending on the airflow characteristics of the engine, it may be desirable to lower the ramp angle or move the cone rearwards to refocus the shockwaves onto the cowl lip to maximise intake airflow.

c) are designed to have a normal shock in the ducting downstream of intake lip, so that the flow at compressor/fan entry is always subsonic. However, if the engine is throttled back, there is a reduction in the corrected airflow of the LP compressor/fan, but (at supersonic conditions) the corrected airflow at the intake lip remains constant, because it is determined by the flight Mach number and intake incidence/yaw. This discontinuity is overcome by the normal shock moving to a lower cross-sectional area in the ducting, to decrease the Mach number at entry to the shockwave. This weakens the shockwave, improving the overall intake pressure recovery. So, the absolute airflow stays constant, whilst the corrected airflow at compressor entry falls (because of a higher entry pressure). Excess intake airflow may also be dumped overboard or into the exhaust system, to prevent the conical/oblique shock waves being disturbed by the normal shock being forced too far forward by engine throttling.

Many second generation supersonic fighter aircraft featured an inlet cone, which was used to form the conical shock wave. This type of inlet cone is clearly seen at the very front of the English Electric Lightning and MiG-21 aircraft, for example.

The same approach can be used for air intakes mounted at the side of the fuselage, where a half cone serves the same purpose with a semicircular air intake, as seen on the F-104 Starfighter and BAC TSR-2.

Some intakes are biconic; that is they feature two conical surfaces: the first cone is supplemented by a second, less oblique, conical surface, which generates an extra conical shockwave, radiating from the junction between the two cones. A biconic intake is usually more efficient than the equivalent conical intake, because the entry Mach number to the normal shock is reduced by the presence of the second conical shock wave.

A very sophisticated conical intake was featured on the SR-71's Pratt & Whitney J58s, which could move a conical spike fore and aft within the engine nacelle, preventing the shockwave formed on the spike from entering the engine and stalling the engine, while keeping it close enough to give good compression. Movable cones are uncommon.

A more sophisticated design than cones is to angle the intake so that one of its edges forms a ramp. An oblique shockwave will form at the start of the ramp. The Century Series of US jets featured several variants of this approach, usually with the ramp at the outer vertical edge of the intake, which was then angled back inward towards the fuselage. Typical examples include the Republic F-105 Thunderchief and F-4 Phantom.

Later this evolved so that the ramp was at the top horizontal edge rather than the outer vertical edge, with a pronounced angle downwards and rearwards. This design simplified the construction of intakes and allowed use of variable ramps to control airflow into the engine. Most designs since the early 1960s now feature this style of intake, for example the F-14 Tomcat, Panavia Tornado and Concorde.

From another point of view, as in a supersonic nozzle, the corrected (or non-dimensional) flow has to be the same at the intake lip, at the intake throat and at the turbine. Only one of these three can be fixed. For inlets, the throat is made variable and some air is bypassed around the turbine and fed directly into the afterburner. Unlike a nozzle, the inlet is either unstable or inefficient, because a normal shock wave in the throat will suddenly move to the lip, increasing the pressure at the lip, leading to drag and reducing the pressure recovery, leading to turbine surge and, in one case, the loss of an SR-71.

Compressors

Axial compressors rely on spinning blades that have aerofoil sections, similar to aeroplane wings. As with aeroplane wings, in some conditions the blades can stall. If this happens, the airflow around the stalled compressor can reverse direction violently. Each design of a compressor has an associated operating map of airflow versus rotational speed for characteristics peculiar to that type (see compressor map).

At a given throttle condition, the compressor operates somewhere along the steady state running line. Unfortunately, this operating line is displaced during transients. Many compressors are fitted with anti-stall systems in the form of bleed bands or variable geometry stators to decrease the likelihood of surge. Another method is to split the compressor into two or more units, operating on separate concentric shafts.

Another design consideration is the average stage loading. This can be kept at a sensible level either by increasing the number of compression stages (more weight/cost) or the mean blade speed (more blade/disc stress).

Although large flow compressors are usually all-axial, the rear stages on smaller units are too small to be robust. Consequently, these stages are often replaced by a single centrifugal unit. Very small flow compressors often employ two centrifugal compressors, connected in series. Although in isolation centrifugal compressors are capable of running at quite high pressure ratios (e.g. 10:1), impeller stress considerations limit the pressure ratio that can be employed in high overall pressure ratio engine cycles.

Increasing overall pressure ratio implies raising the high pressure compressor exit temperature. This implies a higher high pressure shaft speed, to maintain the datum blade tip Mach number on the rear compressor stage. Stress considerations, however, may limit the shaft speed increase, causing the original compressor to throttle-back aerodynamically to a lower pressure ratio than datum.

Combustors

Great care must be taken to keep the flame burning in a moderately fast moving airstream, at all throttle conditions, as efficiently as possible. Since the turbine cannot withstand stoichiometric temperatures (a mixture ratio of around 15:1), some of the compressor air is used to quench the exit temperature of the combustor to an acceptable level (an overall mixture ratio of between 45:1 and 130:1 is used). Air used for combustion is considered to be primary airflow, while excess air used for cooling is called secondary airflow. Combustor configurations include can, annular, and can-annular.
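The mixture-ratio numbers above fix how the compressor air must be split between combustion and cooling. A small worked sketch, treating the quoted ratios as air-to-fuel mass ratios (an assumption about the text's convention):

```python
# If combustion locally runs near stoichiometric (about 15:1 air/fuel)
# while the overall engine mixture is much leaner, the fraction of
# compressor air used as primary (combustion) air is simply
# stoichiometric_ratio / overall_ratio; the rest is secondary cooling air.

STOICH = 15.0  # assumed stoichiometric air/fuel mass ratio

def primary_air_fraction(overall_ratio, stoich=STOICH):
    return stoich / overall_ratio

for overall in (45.0, 130.0):
    f = primary_air_fraction(overall)
    print(f"overall {overall:>5}:1 -> primary {f:.0%}, secondary {1 - f:.0%}")
# overall  45.0:1 -> primary 33%, secondary 67%
# overall 130.0:1 -> primary 12%, secondary 88%
```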

Turbines

Because a turbine expands from high to low pressure, there is no such thing as turbine surge or stall. The turbine needs fewer stages than the compressor, mainly because the higher inlet temperature reduces the ΔT/T (and thereby the pressure ratio) of the expansion process. The blades have more curvature and the gas stream velocities are higher.

Designers must, however, prevent the turbine blades and vanes from melting in a very high temperature and stress environment. Consequently bleed air extracted from the compression system is often used to cool the turbine blades/vanes internally. Other solutions are improved materials and/or special insulating coatings. The discs must be specially shaped to withstand the huge stresses imposed by the rotating blades. They take the form of impulse, reaction, or combination impulse-reaction shapes. Improved materials help to keep disc weight down.

Turbopumps

Turbopumps are centrifugal pumps which are spun by gas turbines and are used to raise the propellant pressure above the pressure in the combustion chamber so that it can be injected and burnt. Turbopumps are very commonly used with rockets, but ramjets and turbojets also have been known to use them.


Due to temperature limitations of the gas turbines, jet engines do not consume all the oxygen in the air (they do not 'run stoichiometric'). Afterburners burn the remaining oxygen after it exits the turbine, but usually do so inefficiently due to the low pressures typically found at this part of the jet engine; nevertheless this gains significant thrust, which can be useful. Engines intended for extended use with afterburners often have variable nozzles and other details.

Nozzles

The primary objective of a nozzle is to expand the exhaust stream to atmospheric pressure, and form it into a high speed jet to propel the vehicle. For airbreathing engines, if the fully expanded jet has a higher speed than the aircraft's airspeed, then there is a net rearward momentum gain to the air and there will be a forward thrust on the airframe.

Simple convergent nozzles are used on many jet engines. If the nozzle pressure ratio is above the critical value (about 1.8:1) a convergent nozzle will choke, resulting in some of the expansion to atmospheric pressure taking place downstream of the throat (i.e. smallest flow area), in the jet wake. Although much of the gross thrust produced will still be from the jet momentum, additional (pressure) thrust will come from the imbalance between the throat static pressure and atmospheric pressure.
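The "critical value (about 1.8:1)" quoted above can be derived from compressible-flow theory. A minimal sketch, assuming an ideal gas, computes the nozzle pressure ratio at which a convergent nozzle chokes:

```python
# Critical nozzle pressure ratio at which a convergent nozzle chokes:
# NPR_crit = ((gamma + 1) / 2) ** (gamma / (gamma - 1)).
# Ideal-gas assumption; real-engine values differ slightly with gas
# composition and temperature.

def critical_npr(gamma):
    return ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

print(round(critical_npr(1.4), 2))   # cold air: ~1.89
print(round(critical_npr(1.33), 2))  # hot combustion gases: ~1.85
# Both are near the "about 1.8:1" figure quoted in the text; above this
# ratio the throat runs at Mach 1 and further expansion happens outside.
```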

Many military combat engines incorporate an afterburner (or reheat) in the engine exhaust system. When the system is lit, the nozzle throat area must be increased, to accommodate the extra exhaust volume flow, so that the turbomachinery is unaware that the afterburner is lit. A variable throat area is achieved by moving a series of overlapping petals, which approximate the circular nozzle cross-section.

At high nozzle pressure ratios, the exit pressure is often above ambient and much of the expansion will take place downstream of a convergent nozzle, which is inefficient. Consequently, some jet engines (notably rockets) incorporate a convergent-divergent nozzle, to allow most of the expansion to take place against the inside of a nozzle to maximise thrust. However, unlike the fixed con-di nozzle used on a conventional rocket motor, when such a device is used on a turbojet engine it has to be a complex variable geometry device, to cope with the wide variation in nozzle pressure ratio encountered in flight and engine throttling. This further increases the weight and cost of such an installation.

The simpler of the two is the ejector nozzle, which creates an effective nozzle through a secondary airflow and spring-loaded petals. At subsonic speeds, the airflow constricts the exhaust to a convergent shape. As the aircraft speeds up, the two nozzles dilate, which allows the exhaust to form a convergent-divergent shape, speeding the exhaust gasses past Mach 1. More complex engines can actually use a tertiary airflow to reduce exit area at very low speeds. Advantages of the ejector nozzle are relative simplicity and reliability. Disadvantages are average performance (compared to the other nozzle type) and relatively high drag due to the secondary airflow. Notable aircraft to have utilized this type of nozzle include the SR-71, Concorde, F-111, and Saab Viggen.

For higher performance, it is necessary to use an iris nozzle. This type uses overlapping, hydraulically adjustable "petals". Although more complex than the ejector nozzle, it has significantly higher performance and smoother airflow. As such, it is employed primarily on high-performance fighters such as the F-14, F-15 and F-16, though it is also used in high-speed bombers such as the B-1B. Some modern iris nozzles additionally have the ability to change the angle of the thrust (see thrust vectoring).

Rocket motors also employ convergent-divergent nozzles, but these are usually of fixed geometry, to minimize weight. Because of the much higher nozzle pressure ratios experienced, rocket motor con-di nozzles have a much greater area ratio (exit/throat) than those fitted to jet engines. The Convair F-106 Delta Dart used such a nozzle design as part of its overall design specification as an aerospace interceptor for high-altitude bomber interception, where conventional nozzle designs would prove ineffective.

At the other extreme, some high bypass ratio civil turbofans use an extremely low area ratio (less than 1.01) convergent-divergent nozzle on the bypass (or mixed exhaust) stream, to control the fan working line. The nozzle acts as if it has variable geometry. At low flight speeds the nozzle is unchoked (less than a Mach number of unity), so the exhaust gas speeds up as it approaches the throat and then slows down slightly as it reaches the divergent section. Consequently, the nozzle exit area controls the fan match and, being larger than the throat, pulls the fan working line slightly away from surge. At higher flight speeds, the ram rise in the intake increases the nozzle pressure ratio to the point where the throat becomes choked (M = 1.0). Under these circumstances, the throat area dictates the fan match and, being smaller than the exit, pushes the fan working line slightly towards surge. This is not a problem, since fan surge margin is much better at high flight speeds.

Thrust reversers

These either consist of cups that swing across the end of the nozzle and deflect the jet thrust forwards (as in the DC-9), or they are two panels behind the cowling that slide backward and reverse only the fan thrust (the fan produces the majority of the thrust). This is the case on many large aircraft such as the 747, C-17, KC-135, etc.

Cooling systems

All jet engines require high temperature gas for good efficiency, typically achieved by combusting hydrocarbon or hydrogen fuel. Combustion temperatures can be as high as 3500 K (5841 °F) in rockets, far above the melting point of most materials, but normal airbreathing jet engines use rather lower temperatures.

Cooling systems are employed to keep the temperature of the solid parts below the failure temperature.

Air systems

Cooling air, bled from the compressor, passes around the combustor and is injected into the rim of the rotating turbine disc. The cooling air then passes through complex passages within the turbine blades. After removing heat from the blade material, the air (now fairly hot) is vented, via cooling holes, into the main gas stream. Cooling air for the turbine vanes undergoes a similar process.

Cooling the leading edge of the blade can be difficult, because the pressure of the cooling air just inside the cooling hole may not be much different from that of the oncoming gas stream. One solution is to incorporate a cover plate on the disc. This acts as a centrifugal compressor to pressurize the cooling air before it enters the blade. Another solution is to use an ultra-efficient turbine rim seal to pressurize the area where the cooling air passes across to the rotating disc.

Seals are used to prevent oil leakage, control air for cooling and prevent stray air flows into turbine cavities.

A series of (e.g. labyrinth) seals allow a small flow of bleed air to wash the turbine disc to extract heat and, at the same time, pressurize the turbine rim seal, to prevent hot gases entering the inner part of the engine. Other types of seals are hydraulic, brush, carbon etc.

Small quantities of compressor bleed air are also used to cool the shaft, turbine shrouds, etc. Some air is also used to keep the temperature of the combustion chamber walls below critical. This is done using primary and secondary airholes which allow a thin layer of air to cover the inner walls of the chamber preventing excessive heating.

The turbine exit temperature is constrained by the turbine's upper temperature limit, which depends on the material. Keeping temperatures down also prevents thermal fatigue and hence failure. Accessories may also need their own cooling systems using air from the compressor or outside air.

Air from compressor stages is also used for heating of the fan, airframe anti-icing and for cabin heat. Which stage is bled from depends on the atmospheric conditions at that altitude.

Fuel system


Apart from providing fuel to the engine, the fuel system is also used to control propeller speeds, compressor airflow and cool lubrication oil. Fuel is usually introduced by an atomized spray, the amount of which is controlled automatically depending on the rate of airflow.

The sequence of events for increasing thrust is as follows: the throttle opens and fuel spray pressure is increased, increasing the amount of fuel being burned. This means that the exhaust gases are hotter, so they are ejected at higher velocity and exert higher forces, increasing the engine thrust directly. It also increases the energy extracted by the turbine, which drives the compressor even faster, so there is an increase in air flowing into the engine as well.

It is the mass flow rate of the air that matters, since it is the rate of change of momentum (mass × velocity) that produces the force. However, density varies with altitude, and hence the inflow of mass also varies with altitude, temperature, etc., which means that throttle values will vary according to all these parameters without changing them manually.
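
As a rough illustration of that momentum relation, here is a minimal sketch (illustrative numbers, not a model of any particular engine) of net thrust as mass flow times velocity change:

    # Net thrust of an airbreathing engine from rate of change of momentum.
    # Pressure thrust and the (small) fuel mass flow are neglected here.

    def net_thrust(mdot_air: float, v_exhaust: float, v_flight: float) -> float:
        """mdot_air in kg/s, velocities in m/s; returns thrust in newtons."""
        return mdot_air * (v_exhaust - v_flight)

    # 100 kg/s of air accelerated from 250 m/s to 600 m/s:
    print(net_thrust(100.0, 600.0, 250.0))  # 35000 N = 35 kN

Halving the air density at altitude halves mdot_air, and with it the thrust, which is why the fuel flow must be compensated automatically.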

This is why fuel flow is controlled automatically. Usually there are 2 systems, one to control the pressure and the other to control the flow. The inputs are usually from pressure and temperature probes from the intake and at various points through the engine. Also throttle inputs, engine speed etc. are required. These affect the high pressure fuel pump.

Fuel control unit (FCU)

This element is something like a mechanical computer. It determines the output of the fuel pump by a system of valves which can change the pressure used to cause the pump stroke, thereby varying the amount of flow.

Take the possibility of increased altitude where there will be reduced air intake pressure. In this case, the chamber within the FCU will expand which causes the spill valve to bleed more fuel. This causes the pump to deliver less fuel until the opposing chamber pressure is equivalent to the air pressure and the spill valve goes back to its position.
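
The behaviour just described is essentially a pressure-feedback loop. Purely as an abstraction (the real FCU is mechanical, and none of these names or gains come from an actual unit), the logic looks something like:

    # Abstract sketch of the FCU spill-valve feedback described above.
    # All names and the gain are illustrative assumptions, not real FCU data.

    def fcu_step(fuel_flow: float, chamber_p: float, air_p: float,
                 gain: float = 0.05) -> float:
        """While chamber pressure exceeds intake air pressure, the spill
        valve bleeds fuel and pump delivery falls; the reverse raises it."""
        spill = gain * (chamber_p - air_p)
        return max(0.0, fuel_flow - spill)

The loop settles when the opposing chamber pressure equals the air pressure, at which point the spill valve returns to its neutral position.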

When the throttle is opened, it releases (i.e. lessens) the pressure, which lets the throttle valve fall. The pressure change is transmitted (because a back-pressure valve ensures there are no air gaps in the fuel flow) and closes the FCU spill valves (as they are commonly called), which then increases the pressure and causes a higher flow rate.

The engine speed governor is used to prevent the engine from over-speeding. It has the capability of disregarding the FCU control. It does this by use of a diaphragm which senses the engine speed in terms of the centrifugal pressure caused by the rotating rotor of the pump. At a critical value, this diaphragm causes another spill valve to open and bleed away the fuel flow.

There are other ways of controlling fuel flow, for example with the dash-pot throttle lever. The throttle has a gear which meshes with the control valve (like a rack and pinion), causing it to slide along a cylinder which has ports at various positions. Moving the throttle, and hence sliding the valve along the cylinder, opens and closes these ports as designed. There are actually 2 valves, viz. the throttle valve and the control valve. The control valve is used to control pressure on one side of the throttle valve such that it gives the right opposition to the throttle control pressure. It does this by controlling the fuel outlet from within the cylinder.

For example, if the throttle valve is moved up to let more fuel in, it moves into a position which allows more fuel to flow through; on the other side, the required pressure ports are opened to keep the pressures balanced so that the throttle lever stays where it is.

At initial acceleration, more fuel is required and the unit is adapted to allow more fuel to flow by opening other ports at a particular throttle position. Changes in outside air pressure (i.e. with altitude, aircraft speed, etc.) are sensed by an air capsule.

Fuel pump

Fuel pumps are used to raise the fuel pressure above the pressure in the combustion chamber so that the fuel can be injected. Fuel pumps are usually driven by the main shaft, via gearing.

Turbopumps are very commonly used with liquid-fuelled rockets and rely on the expansion of an onboard gas through a turbine.

Ramjet turbopumps use ram air expanding through a turbine.

Engine starting system

The fuel system, as explained above, is one of the 2 systems required for starting the engine. The other is the actual ignition of the air/fuel mixture in the chamber. Usually, an auxiliary power unit is used to start the engines. It has a starter motor which transmits a high torque to the compressor unit. When the optimum speed is reached, i.e. the flow of gas through the turbine is sufficient, the turbines take over. There are a number of different starting methods such as electric, hydraulic, pneumatic, etc.

The electric starter works with gears and a clutch plate linking the motor and the engine. The clutch is used to disengage when optimum speed is achieved. This is usually done automatically. The electric supply is used to start the motor as well as for ignition. The voltage is usually built up slowly as the starter gains speed.

Some military aircraft need to be started quicker than the electric method permits and hence they use other methods such as a turbine starter. This is an impulse turbine impacted by burning gases from a cartridge. It is geared to rotate the engine and also connected to an automatic disconnect system. The cartridge is set alight electrically and used to turn the turbine.

Another turbine starter system is almost exactly like a little engine. Again the turbine is connected to the engine via gears. However, the turbine is turned by burning gases - usually the fuel is isopropyl nitrate stored in a tank and sprayed into a combustion chamber. Again, it is ignited with a spark plug. Everything is electrically controlled, such as speed etc.

Most commercial aircraft and large military transport airplanes usually use what is called an auxiliary power unit (APU). It is normally a small gas turbine. Thus, one could say that using such an APU is using a small gas turbine to start a larger one. High pressure air from the compressor section of the APU is bled off through a system of pipes to the engines, where it is directed into the starting system. This "bleed air" is directed into a mechanism to start the engine turning and begin pulling in air. When the rotating speed of the engine is sufficient to pull in enough air to support combustion, fuel is introduced and ignited. Once the engine ignites and reaches idle speed, the bleed air is shut off.

The APUs on aircraft such as the Boeing 737 and Airbus A320 can be seen at the extreme rear of the aircraft. This is the typical location for an APU on most commercial airliners, although some may be within the wing root (Boeing 727) or the aft fuselage (DC-9/MD-80), and some military transports carry their APUs in one of the main landing gear pods (C-141).

The APUs also provide enough power to keep the cabin lights, pressure and other systems on while the engines are off. The valves used to control the airflow are usually electrically controlled. They automatically close at a pre-determined speed. As part of the starting sequence on some engines, fuel is combined with the supplied air and burned instead of using just air. This usually produces more power per unit weight.

Usually an APU is started by its own electric starter motor which is switched off at the proper speed automatically. When the main engine starts up and reaches the right conditions, this auxiliary unit is then switched off and disengages slowly.

Hydraulic pumps can also be used to start some engines through gears. The pumps are electrically controlled on the ground.


A variation of this is the APU installed in a Boeing F/A-18 Hornet; it is started by a hydraulic motor, which itself receives energy stored in an accumulator. This accumulator is recharged after the right engine is started and develops hydraulic pressure, or by a hand pump in the right hand main landing gear well.

Ignition

Usually there are 2 igniter plugs in different positions in the combustion system. A high voltage spark is used to ignite the gases. The voltage is stored up from a low voltage supply provided by the starter system. It builds up to the right value and is then released as a high energy spark. Depending on various conditions, the igniter continues to provide sparks to prevent combustion from failing if the flame inside goes out. Of course, in the event that the flame does go out, there must be provision to relight. There is a limit of altitude and air speed at which an engine can obtain a satisfactory relight.

For example, the General Electric F404-400 uses one igniter for the combustor and one for the afterburner; the ignition system for the A/B incorporates an ultraviolet flame sensor to activate the igniter.

Most modern ignition systems provide enough energy to be a lethal hazard should a person be in contact with the electrical lead when the system is activated, so team communication is vital when working on these systems.

Lubrication system

A lubrication system serves to ensure lubrication of the bearings and to keep temperatures sufficiently low, mostly by reducing friction.

The lubrication system as a whole should be able to prevent foreign material from entering the engine and reaching the bearings, gears, and other moving parts. The lubricant must be able to flow easily at relatively low temperatures and not disintegrate or break down at very high temperatures.

Usually the lubrication system has subsystems that deal individually with the engine's oil pressure feed, scavenging, and breather venting.

The pressure system components are an oil tank and de-aerator, main oil pump, main oil filter/filter bypass valve, pressure regulating valve (PRV), oil cooler/bypass valve and tubing/jets. Usually the flow is from the tank to the pump inlet and PRV, pumped to the main oil filter or its bypass valve and the oil cooler, then through some more filters to jets in the bearings.


Using the PRV method of control means that the pressure of the feed oil must stay below a critical value (usually controlled by other valves which can leak excess oil back to the tank if it exceeds the critical value). The valve opens at a certain pressure, and oil is kept moving at a constant rate into the bearing chamber.

If the engine speed increases, the pressure within the bearing chamber also increases, which means the pressure difference between the lubricant feed and the chamber reduces, which could slow the flow rate of oil just when it is needed even more. As a result, some PRVs can adjust their spring force in proportion to this pressure change in the bearing chamber, to keep the lubricant flow constant.

Advanced designs

J-58 combined ramjet/turbojet

The SR-71's Pratt & Whitney J58 engines were rather unusual. They could convert in flight from being largely a turbojet to being largely a compressor-assisted ramjet. At high speeds (above Mach 2.4), the engine used variable geometry vanes to direct excess air through 6 bypass pipes from downstream of the fourth compressor stage into the afterburner. 80% of the SR-71's thrust at high speed was generated in this way, giving much higher thrust, improving specific impulse by 10-15%, and permitting continuous operation at Mach 3.2. The name coined for this setup is turbo-ramjet.

Hydrogen fuelled jet engines

Jet engines can be run on almost any fuel. Hydrogen is a highly desirable fuel, as, although the energy per mole is not unusually high, the molecule is very much lighter than other molecules. The energy per kg of hydrogen is twice that of more common fuels, and this gives twice the specific impulse. In addition, jet engines running on hydrogen are quite easy to build: the first ever turbojet was run on hydrogen.

However, in almost every other way, hydrogen is problematic. The main downside of hydrogen is its density: in gaseous form the tanks are impractical for flight, and even in liquid form it has a density one fourteenth that of water. It is also deeply cryogenic and requires very significant insulation, which precludes it being stored in wings. The overall vehicle ends up very large, and difficult for most airports to accommodate. Finally, pure hydrogen is not found in nature, and must be manufactured either via steam reforming or expensive electrolysis; both are relatively inefficient processes.
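
To put rough numbers on those claims (the heating values and densities below are approximate textbook figures, assumed for illustration rather than taken from this document):

    # Rough comparison of liquid hydrogen vs. kerosene-type jet fuel.
    # Approximate lower heating values (MJ/kg) and liquid densities (kg/m^3).
    H2  = {"lhv": 120.0, "density": 71.0}
    JET = {"lhv": 43.0,  "density": 800.0}

    print(round(H2["lhv"] / JET["lhv"], 1))   # ~2.8x the energy per kg (LHV basis)
    print(round(1000.0 / H2["density"], 1))   # ~14x less dense than water

    # Tank volume needed to carry the same total onboard energy:
    vol = (JET["lhv"] * JET["density"]) / (H2["lhv"] * H2["density"])
    print(round(vol, 1))                      # ~4x the tankage volume

The volume penalty, plus the insulation, is what drives the airframe size problem described above.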

Precooled jet engines


An idea originated by Robert P. Carmichael in 1955 is that hydrogen fuelled engines could theoretically have much higher performance than hydrocarbon fuelled engines if a heat exchanger were used to cool the incoming air. The low temperature allows lighter materials to be used, a higher mass-flow through the engines, and permits combustors to inject more fuel without overheating the engine.

This idea leads to plausible designs like SABRE, which might permit single-stage-to-orbit, and ATREX, which might permit jet engines to be used up to hypersonic speeds and high altitudes as boosters for launch vehicles. The idea is also being researched by the EU for a concept to achieve non-stop antipodal supersonic passenger travel at Mach 5 (Reaction Engines A2).

Nuclear-powered ramjet

Project Pluto was a nuclear-powered ramjet, intended for use in a cruise missile. Rather than combusting fuel as in regular jet engines, air was heated using a high-temperature, unshielded nuclear reactor. This dramatically increased the engine burn time, and the ramjet was predicted to be able to cover any required distance at supersonic speeds (Mach 3 at tree-top height).

However, there was no obvious way to stop it once it had taken off, which would be a great disadvantage in any non-disposable application. Also, because the reactor was unshielded, it was dangerous to be in or around the flight path of the vehicle (although the exhaust itself wasn't radioactive). These disadvantages limit its application to a warhead delivery system for all-out nuclear war, which is what it was being designed for.

Scramjets

Scramjets are an evolution of ramjets that are able to operate at much higher speeds than any other kind of airbreathing engine. They share a similar structure with ramjets, being a specially-shaped tube that compresses air with no moving parts through ram-air compression. Scramjets, however, operate with supersonic airflow through the entire engine. Thus, scramjets do not have the diffuser required by ramjets to slow the incoming airflow to subsonic speeds.

Scramjets start working at speeds of at least Mach 4, and have a maximum useful speed of approximately Mach 17. Due to aerodynamic heating at these high speeds, cooling poses a challenge to engineers.

Environmental considerations

Jet engines are usually run on fossil fuel propellant, and in that case, are a net source of carbon to the atmosphere.


Some scientists believe that jet engines are also a source of global dimming due to the water vapour in the exhaust causing cloud formations.

Nitrogen compounds are also formed from the combustion process from atmospheric nitrogen. At low altitudes this is not thought to be especially harmful, but for supersonic aircraft that fly in the stratosphere some destruction of ozone may occur.

Sulphates are also emitted if the fuel contains sulphur.

Safety and reliability

Jet engines are usually very reliable and have a very good safety record. However, failures do sometimes occur.

One class of failure that has caused accidents in particular is the uncontained failure, where rotary parts of the engine break off and exit through the case. These can cut fuel or control lines, and can penetrate the cabin. Although fuel and control lines are usually duplicated for reliability, the crash of United Airlines Flight 232 was caused when all of the aircraft's hydraulic control lines were simultaneously severed.

The most likely failure is compressor blade failure, and modern jet engines are designed with structures that can catch these blades and keep them contained within the engine casing. Verification of a jet engine design involves testing that this system works correctly.

Bird strike

Bird strike is an aviation term for a collision between a bird and an aircraft. It is a common threat to aircraft safety and has caused a number of fatal accidents. In 1988 an Ethiopian Airlines Boeing 737 sucked pigeons into both engines during take-off and then crashed in an attempt to return to the Bahir Dar airport; of the 104 people aboard, 35 died and 21 were injured. In another incident in 1995, a Dassault Falcon 20 crashed at a Paris airport during an emergency landing attempt after sucking lapwings into an engine, which caused an engine failure and a fire in the airplane fuselage; all 10 people on board were killed.

Modern jet engines have the capability of surviving the ingestion of a bird. Small fast planes, such as military jet fighters, are at higher risk than big heavy multi-engine ones. This is due to the fact that the fan of a high-bypass turbofan engine, typical on transport aircraft, acts as a centrifugal separator to force ingested materials (birds, ice, etc.) to the outside of the fan's disc. As a result, such materials go through the relatively unobstructed bypass duct, rather than through the core of the engine, which contains the smaller and more delicate compressor blades. Military aircraft designed for high-speed flight typically have pure turbojet, or low-bypass turbofan engines, increasing the risk that ingested materials will get into the core of the engine to cause damage.

The risk of bird strike is highest during takeoff and landing, at low altitudes in the vicinity of airports.

Rocket

A rocket or rocket vehicle is a missile, aircraft or other vehicle which obtains thrust by the reaction of the rocket to the ejection of fast moving fluid from a rocket engine. Chemical rockets work by the action of hot gas produced by the combustion of the propellant against the inside of combustion chambers and expansion nozzles. This generates forces that accelerate the gas to extremely high speed and exerts a large thrust on the rocket (since every action has an equal and opposite reaction).

The history of rockets goes back to at least the 13th century. In the 20th century, they enabled human spaceflight to the Moon, and in the 21st century they have made commercial space tourism possible.

Rockets are used for fireworks and weaponry, as launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While inefficient for low speed use, they are, compared to other propulsion systems, very lightweight and powerful, capable of attaining extremely high speeds with reasonable efficiency.

Chemical rockets store a large amount of energy in an easily-released form, and can be very dangerous. However, careful design, testing, construction, and use minimizes the risks.

In antiquity

According to the writings of the Roman Aulus Gellius, in c. 400 BC a Greek Pythagorean named Archytas propelled a wooden bird using steam. However, the only knowledge that exists of it is in Aulus's writings, which date from 5 centuries later. No diagrams survive, and whether it was truly propelled by rocket power is unknown.

The availability of black powder (gunpowder) to propel projectiles was a precursor to the development of the first solid rocket. Ninth-century Chinese Taoist alchemists discovered black powder while searching for the Elixir of Life; this accidental discovery led to experiments in the form of weapons like bombs, cannon, incendiary fire arrows and rocket-propelled fire arrows.

Exactly when the first flights of rockets occurred is contested. Some say that the first recorded use of a rocket in battle was by the Chinese in 1232 against the Mongol hordes. There were reports of fire arrows and 'iron pots' that could be heard for 5 leagues (15 miles) when they exploded upon impact, causing devastation for a radius of 2,000 feet, apparently due to shrapnel. The lowering of the iron pots may have been a way for a besieged army to blow up invaders. The fire arrows were either arrows with explosives attached, or arrows propelled by gunpowder, such as the Korean Hwacha.

Less controversially, one of the earliest devices recorded that used internal-combustion rocket propulsion was the 'ground-rat,' a type of firework, recorded in 1264 as having frightened the Empress-Mother Kung Sheng at a feast held in her honor by her son the Emperor Lizong.

Subsequently, one of the earliest texts to mention the use of rockets was the Huolongjing, written by the Chinese artillery officer Jiao Yu in the mid-14th century. This text also mentioned the use of the first known multistage rocket, the 'fire-dragon issuing from the water' (huo long chu shui), used mostly by the Chinese navy. Frank H. Winter proposed in The Proceedings of the Twentieth and Twenty-First History Symposia of the International Academy of Astronautics that southern China and the Laotian community rocket festivals might have been key in the subsequent spread of rocketry in the Orient.

Spread of rocket technology

Rocket technology first became known to Europeans following its use by the Mongols under Genghis Khan and Ögedei Khan, when they conquered parts of Russia and Eastern and Central Europe. The Mongols had acquired the Chinese technology by conquest of the northern part of China and also by the subsequent employment of Chinese rocketry experts as mercenaries for the Mongol military. Reports of the Battle of Sejo in the year 1241 describe the use of rocket-like weapons by the Mongols against the Magyars. Rocket technology also spread to Korea, with the 15th century wheeled hwacha that would launch singijeon rockets. These first Korean rockets had a remarkably long range for the time, and were designed and built by Byun Eee-Joong. They were just like arrows but had small explosives attached to the back, and were fired in swarms.

Additionally, the spread of rockets into Europe was also influenced by the Ottomans at the siege of Constantinople in 1453, although it is very likely that the Ottomans themselves were influenced by the Mongol invasions of the previous few centuries. They appear in literature describing the capture of Baghdad in 1258 by the Mongols.

In their history of rockets published on the Internet, NASA says "the Arabs adopted the rocket into their own arms inventory and, during the Seventh Crusade, used them against the French Army of King Louis IX in 1268."

The name "rocket" comes from the Italian rocchetta (i.e. little fuse), the name of a small firecracker created by the Italian artificer Muratori in 1379.


"Artis Magnae Artilleriae pars prima" ("Great Art of Artillery, the First Part", also known as "The Complete Art of Artillery"), first printed in Amsterdam in 1650, was translated to French in 1651, German in 1676, English and Dutch in 1729 and Polish in 1963. For over two centuries, this work of Polish-Lithuanian Commonwealth nobleman Kazimierz Siemienowicz was used in Europe as a basic artillery manual. The book provided the standard designs for creating rockets, fireballs, and other pyrotechnic devices. It contained a large chapter on caliber, construction, production and properties of rockets (for both military and civil purposes), including multi-stage rockets, batteries of rockets, and rockets with delta wing stabilizers (instead of the common guiding rods).

In 1792, iron-cased rockets were successfully used militarily by Tipu Sultan, Ruler of the Kingdom of Mysore in India against the larger British East India Company forces during the Anglo-Mysore Wars. The British then took an active interest in the technology and developed it further during the 19th century. The major figure in the field at this time was William Congreve. From there, the use of military rockets spread throughout Europe. At the Battle of Baltimore in 1814, the rockets fired on Fort McHenry by the rocket vessel HMS Erebus were the source of the rockets' red glare described by Francis Scott Key in The Star-Spangled Banner. Rockets were also used in the Battle of Waterloo.

Accuracy of early rockets

Early rockets were very inaccurate. Without the use of spinning or any gimballing of the thrust, they had a strong tendency to veer sharply off course. The early British Congreve rockets reduced this somewhat by attaching a long stick to the end of a rocket (similar to modern bottle rockets) to make it harder for the rocket to change course. The largest of the Congreve rockets was the 32-pound (14.5 kg) Carcass, which had a 15-foot (4.6 m) stick. Originally, sticks were mounted on the side, but this was later changed to mounting in the center of the rocket, reducing drag and enabling the rocket to be more accurately fired from a segment of pipe.

The British were greatly impressed by the Mysorean rocket artillery made from iron tubes used by the armies of Tipu Sultan and his father, Haidar Ali. Tipu Sultan championed the use of mass attacks with rocket brigades in the army. The effect of these weapons on the British during the Second, Third and Fourth Mysore Wars was sufficiently impressive to inspire William Congreve to develop his own rocket designs. Several Mysore rockets were sent to England, and after thoroughly examining the Indian specimens, from 1801 William Congreve, son of the Comptroller of the Royal Arsenal, Woolwich, London, embarked on a vigorous research and development programme at the Arsenal's laboratory. Congreve prepared a new propellant mixture and developed a rocket motor with a strong iron tube and conical nose, weighing about 32 pounds (14.5 kilograms). The Royal Arsenal's first demonstration of solid fuel rockets was in 1805. The rockets were used effectively during the Napoleonic Wars and the War of 1812. Congreve published three books on rocketry.


In 1815, Alexander Dmitrievich Zasyadko began his work on creating military gunpowder rockets. He constructed rocket-launching platforms, which allowed firing in salvos (6 rockets at a time), and gun-laying devices. Zasyadko also elaborated tactics for the military use of rocket weaponry. In 1820, Zasyadko was appointed head of the Petersburg Armory, Okhtensky Powder Factory, the pyrotechnic laboratory and the first Highest Artillery School in Russia. He organized rocket production in a special rocket workshop and created the first rocket sub-unit in the Russian army.

The accuracy problem was mostly solved in 1844 when William Hale modified the rocket design so that thrust was slightly vectored, causing the rocket to spin along its axis of travel like a bullet. The Hale rocket removed the need for a rocket stick, travelled further due to reduced air resistance, and was far more accurate.

Early manned rocketry

According to legend, a manned rocket sled with 47 gunpowder-filled rockets was attempted in China by Wan Hu in the 16th century. The alleged flight is said to have been interrupted by an explosion at the start, and the pilot does not seem to have survived (he was never found). There are no known Chinese sources for this event, and the earliest known account is an unsourced reference in a 1945 book by an American, Herbert S. Zim.

In Ottoman Turkey in 1633, Lagari Hasan Çelebi took off with what was described as a cone-shaped rocket, glided with wings across the Bosporus from Topkapı Palace, and made a successful landing, winning him a position in the Ottoman army. The flight was accomplished as part of celebrations for the birth of Ottoman Emperor Murat IV's daughter and was rewarded by the sultan. The device was composed of a large winged cage with a conical top, with 7 rockets filled with 70 kg of gunpowder. The flight was estimated to have lasted about 200 seconds, with a maximum height of around 300 metres.

Theories of interplanetary rocketry

In 1903, high school mathematics teacher Konstantin Tsiolkovsky (1857-1935) published Исследование мировых пространств реактивными приборами (The Exploration of Cosmic Space by Means of Reaction Devices), the first serious scientific work on space travel. The Tsiolkovsky rocket equation—the principle that governs rocket propulsion—is named in his honor (although it had been discovered previously). His work was essentially unknown outside the Soviet Union, where it inspired further research, experimentation and the formation of the Cosmonautics Society.

In 1920, Robert Goddard published A Method of Reaching Extreme Altitudes, the first serious work on using rockets in space travel after Tsiolkovsky. The work attracted worldwide attention and was both praised and ridiculed, particularly because of its suggestion that a rocket theoretically could reach the Moon. A New York Times editorial famously expressed disbelief that it was possible at all, stating that "after the rocket quits our air and really starts on its longer journey it will neither be accelerated nor maintained by the explosion of the charges it then might have left", suggesting that Professor Goddard "does not know of the relation of action to reaction, and the need to have something better than a vacuum against which to react", and talking of "such things as intentional mistakes or oversights."

Goddard, the Times declared, apparently suggesting bad faith, "only seems to lack the knowledge ladled out daily in high schools."

After these and other scathing criticisms, Goddard began working in isolation, and avoided publicity.

Nevertheless, in Russia, Tsiolkovsky's work was republished in the 1920s in response to Russian interest raised by the work of Robert Goddard. Among other ideas, Tsiolkovsky accurately proposed to use liquid oxygen and liquid hydrogen as a nearly optimal propellant pair and determined that building staged and clustered rockets to increase the overall mass efficiency would dramatically increase range.

In 1923, Hermann Oberth (1894-1989) published Die Rakete zu den Planetenräumen ("The Rocket into Planetary Space"), a version of his doctoral thesis, after the University of Munich rejected it.

Modern rocketry

Modern rockets were born when Goddard attached a supersonic (de Laval) nozzle to a liquid fuelled rocket engine's combustion chamber. These nozzles turn the hot gas from the combustion chamber into a cooler, hypersonic, highly directed jet of gas, more than doubling the thrust and raising the engine efficiency from 2% to 64%. Early rockets had been grossly inefficient because of the thermal energy that was wasted in the exhaust gases. In 1926, Robert Goddard launched the world's first liquid-fueled rocket in Auburn, Massachusetts.

During the 1920s, a number of rocket research organizations appeared in America, Austria, Britain, Czechoslovakia, France, Italy, Germany, and Russia. In the mid-1920s, German scientists had begun experimenting with liquid-propellant rockets capable of reaching relatively high altitudes and distances. In 1927, the German car manufacturer Opel began rocket research together with Max Valier and the rocket builder Friedrich Wilhelm Sander. In 1928, Fritz von Opel drove a rocket car, the Opel RAK1, on the Opel raceway in Rüsselsheim, Germany. In 1929, von Opel took off from the Frankfurt-Rebstock airport in the Opel-Sander RAK 1 airplane; this was perhaps the first flight of a manned rocket aircraft. Also in Germany, in 1927 a team of amateur rocket engineers had formed the Verein für Raumschiffahrt (German Rocket Society, or VfR), and in 1931 they launched a liquid propellant rocket (using oxygen and gasoline).

From 1931 to 1937, the most extensive scientific work on rocket engine design occurred in Leningrad, at the Gas Dynamics Laboratory. Well funded and staffed, the laboratory built over 100 experimental engines under the direction of Valentin Glushko. The work included regenerative cooling, hypergolic propellant ignition, and fuel injector designs that included swirling and bi-propellant mixing injectors. However, the work was curtailed by Glushko's arrest during the Stalinist purges in 1938. Similar work was also done by the Austrian professor Eugen Sänger, who worked on rocket-powered spaceplanes such as the Silbervogel (sometimes called the 'antipodal bomber').

On November 12, 1932, at a farm in Stockton, NJ, the American Interplanetary Society's attempt to static fire their first rocket (based on German Rocket Society designs) failed in a fire.

In 1932, the Reichswehr (which in 1935 became the Wehrmacht) began to take an interest in rocketry. Artillery restrictions imposed by the Treaty of Versailles limited Germany's access to long distance weaponry. Seeing the possibility of using rockets as long-range artillery, the Wehrmacht initially funded the VfR team but, finding that its focus was strictly scientific, created its own research team, with Hermann Oberth as a senior member. At the behest of military leaders, Wernher von Braun, at the time a young aspiring rocket scientist, joined the military (followed by two former VfR members) and developed long-range weapons for use in World War II by Nazi Germany, notably the A-series of rockets, which led to the infamous V-2 rocket (initially called A4).

World War II

In 1943, production of the V-2 rocket began. The V-2 had an operational range of 300 km (185 miles) and carried a 1,000 kg (2,204 lb) warhead with an amatol explosive charge. The highest point of its flight trajectory was 90 km. The vehicle differed only in details from most modern rockets, with turbopumps, inertial guidance and many other features. Thousands were fired at various Allied nations, mainly England, as well as Belgium and France. While they could not be intercepted, their guidance system design and single conventional warhead meant that the V-2 was insufficiently accurate against military targets. Later versions, however, were more accurate, sometimes within metres, and could be devastating. 2,754 people in England were killed, and 6,523 were wounded before the launch campaign was terminated. While the V-2 did not significantly affect the course of the war, it provided a lethal demonstration of the potential for guided rockets as weapons.

Under Projekt Amerika, Nazi Germany also tried to develop and use the first submarine-launched ballistic missiles (SLBMs) and the first intercontinental ballistic missiles (ICBMs), the A9/A10 Amerika-Raketen, to bomb New York and other American cities. Tests of SLBM variants of the A4 rocket were achieved with U-boat submarines towing launch platforms. The second stage of the A9/A10 rocket was tested a few times in January, February and March 1945.

In parallel with the guided missile programme in Nazi Germany, rockets were also being used on aircraft, either for rapid horizontal take-off (JATO), for powering the aircraft (Me 163, etc.) or for vertical take-off (Bachem Ba 349 "Natter").

Post World War II

At the end of World War II, competing Russian, British, and U.S. military and scientific crews raced to capture technology and trained personnel from the German rocket program at Peenemünde. Russia and Britain had some success, but the United States benefited the most. The US captured a large number of German rocket scientists (many of whom were members of the Nazi Party, including von Braun) and brought them to the United States as part of Operation Paperclip. In America, the same rockets that were designed to rain down on Britain were used instead by scientists as research vehicles for developing the new technology further. The V-2 evolved into the American Redstone rocket, used in the early space program.

After the war, rockets were used to study high-altitude conditions, by radio telemetry of temperature and pressure of the atmosphere, detection of cosmic rays, and further research; notably for the Bell X-1 to break the sound barrier. This continued in the U.S. under von Braun and the others, who were destined to become part of the U.S. scientific complex.

Independently, research continued in the Soviet Union under the leadership of the chief designer Sergei Korolev. With the help of German technicians, the V-2 was duplicated and improved as the R-1, R-2 and R-5 missiles. German designs were abandoned in the late 1940s, and the foreign workers were sent home. A new series of engines built by Glushko and based on inventions of Aleksei Mihailovich Isaev formed the basis of the first ICBM, the R-7. The R-7 launched the first satellite, the first man into space (Yuri Gagarin) and the first lunar and planetary probes, and is still in use today. These events attracted the attention of top politicians, along with more money for further research.

Rockets became extremely important militarily in the form of modern intercontinental ballistic missiles (ICBMs) when it was realised that nuclear weapons carried on a rocket vehicle were essentially indefensible once launched, and ICBM/launch vehicles such as the R-7, Atlas and Titan became the delivery platform of choice for these weapons.

Fueled partly by the Cold War, the 1960s became the decade of rapid development of rocket technology, particularly in the Soviet Union (Vostok, Soyuz, Proton) and in the United States (e.g. the X-15 and X-20 Dyna-Soar aircraft). There was also significant research in other countries, such as Britain, Japan and Australia, and a growing use of rockets for space exploration, with pictures returned from the far side of the Moon and unmanned flights for Mars exploration.

In America, the manned programmes (Project Mercury, Project Gemini and later the Apollo programme) culminated in 1969 with the first manned landing on the Moon via the Saturn V, causing the New York Times to retract its earlier editorial implying that spaceflight couldn't work:

"Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error."

In the 1970s America made further lunar landings, before abandoning the Apollo launch vehicle. The replacement vehicle, the partially reusable 'Space Shuttle' was intended to be cheaper, but this large reduction in costs was largely not achieved. Meanwhile in 1973, the expendable Ariane programme was begun, a launcher that by the year 2000 would capture much of the geosat market.

Current day

Rockets remain a popular military weapon. The use of large battlefield rockets of the V-2 type has given way to guided missiles. However, rockets are often used by helicopters and light aircraft for ground attack, being more powerful than machine guns, but without the recoil of a heavy cannon. In the 1950s there was a brief vogue for air-to-air rockets, ending with the AIR-2 'Genie' nuclear rocket, but by the early 1960s these had largely been abandoned in favor of air-to-air missiles.

Economically, rocketry is the enabler of all space technologies, particularly satellites, many of which affect people's everyday lives in almost countless ways: satellite navigation, communications satellites, and even things as simple as weather satellites.

Scientifically, rocketry has opened a window on our universe, allowing the launch of space probes to explore our solar system, satellites to view the Earth itself, and space-based telescopes to obtain a clearer view of the rest of the universe.

However, in the minds of much of the public, the most important use of rockets is perhaps manned spaceflight. Vehicles such as the Space Shuttle for scientific research, the Soyuz for orbital tourism and SpaceShipOne for suborbital tourism may show a trend towards greater commercialisation of manned rocketry, away from government funding, and towards more widespread access to space.

Types


There are many different types of rockets, and a comprehensive list of the basic engine types can be found in rocket engine — the vehicles themselves range in size from tiny models such as water rockets or small solid rockets that can be purchased at a hobby store, to the enormous Saturn V used for the Apollo program, and in many different vehicle types such as rocket cars and rocket planes.

Most current rockets are chemically powered rockets (usually internal combustion engines, but some employ a decomposing monopropellant) that emit a hot exhaust gas. A chemical rocket engine can use gas propellant, solid propellant, liquid propellant, or a hybrid mixture of both solid and liquid. With combustive propellants a chemical reaction is initiated between the fuel and the oxidizer in the combustion chamber, and the resultant hot gases accelerate out of a nozzle (or nozzles) at the rearward-facing end of the rocket. The acceleration of these gases through the engine exerts force ("thrust") on the combustion chamber and nozzle, propelling the vehicle (in accordance with Newton's Third Law). See rocket engine for details.

Rockets in which the heat is supplied from a source other than a propellant, such as solar thermal rockets, can be classed as external combustion engines. Other examples of external combustion rocket engines include most designs for nuclear powered rocket engines. Use of hydrogen as the propellant for such engines gives very high exhaust velocities (around 6-10 km/s).

Steam rockets are another example of non-chemical rockets. These rockets release very hot water through a nozzle where, due to the lower pressure there, it instantly flashes to high velocity steam, propelling the rocket. The efficiency of steam as a rocket propellant is relatively low, but it is simple and reasonably safe, and the propellant is cheap and widely available. Most steam rockets have been used for propelling land-based vehicles, but a small steam rocket was tested in 2004 on board the UK-DMC satellite as a higher-performance alternative to cold gas thrusters for attitude jets. There are even proposals to use steam rockets for interplanetary transport, using either nuclear or solar heating as the power source to vaporize water collected from around the solar system, at system costs that are claimed to be greatly lower than hydrogen-based systems.

Uses

Rockets or other similar reaction devices carrying their own propellant must be used when there is no other substance (land, water, or air) or force (gravity, magnetism, light) that a vehicle may usefully employ for propulsion, such as in space. In these circumstances, it is necessary to carry all the propellant to be used.


However, they are also useful in other situations:

Weaponry

In many military weapons, rockets are used to propel payloads to their targets. A rocket and its payload together are generally referred to as a missile, especially when the weapon has a guidance system.

Science

Sounding rockets are commonly used to carry instruments that take readings from 50 kilometers (30 mi) to 1,500 kilometers (930 mi) above the surface of the Earth, the altitudes between those reachable by weather balloons and satellites.

Launch

Due to their high exhaust velocity (Mach ~10+), rockets are particularly useful when very high speeds are required, such as orbital speed (Mach 25+). Indeed, rockets remain the only way to launch spacecraft into orbit. They are also used to rapidly accelerate spacecraft when they change orbits or de-orbit for landing. Also, a rocket may be used to soften a hard parachute landing immediately before touchdown (see Soyuz spacecraft). Spacecraft delivered into orbital trajectories become artificial satellites.

Hobby, sport and entertainment

Hobbyists build and fly model rockets of various types, and rockets are used to launch both commercially available fireworks and professional fireworks displays.

Hydrogen peroxide rockets are used to power jet packs, and have been used to power cars; a rocket car holds the all-time drag racing record.

Components of a rocket

Rockets at minimum have a place to put propellant (such as a propellant tank), one or more rocket engines and nozzle, directional stabilization device(s) (such as fins, attitude jets or engine gimbals) and a structure (typically monocoque) to hold these components together. Rockets intended for high speed atmospheric use also have an aerodynamic fairing such as a nose cone.

As well as these components, rockets can have any number of other components, such as wings (rocketplanes), wheels (rocket cars), even, in a sense, a person (rocket belt).

Noise


For all but the very smallest sizes, rocket exhaust compared to other engines is generally very noisy. As the hypersonic exhaust mixes with the ambient air, shock waves are formed. The sound intensity from these shock waves depends on the size of the rocket. The sound intensity of large rockets could potentially kill at close range.

The Space Shuttle generates over 200 dB(A) of noise around its base. A Saturn V launch was detectable on seismometers a considerable distance from the launch site.

Generally speaking, noise is most intense when a rocket is close to the ground, since the noise from the engines radiates up away from the plume, as well as reflecting off the ground. This noise can be reduced somewhat by flame trenches with roofs, by water injection around the plume and by deflecting the plume at an angle.

For manned rockets, various methods are used to reduce the sound intensity for the passengers as much as possible, and typically the placement of the astronauts far away from the rocket engines helps significantly. For the passengers and crew, when a vehicle goes supersonic the sound cuts off, as the sound waves are no longer able to keep up with the vehicle.

Physics

Operation

In all rockets, the exhaust is formed from propellants carried within the rocket prior to use. Rocket thrust is due to the rocket engine, which propels the rocket forwards by exhausting the propellant rearwards at extremely high speed.

In a closed chamber, the pressures are equal in each direction and no acceleration occurs. If an opening is provided at the bottom of the chamber then the pressure is no longer acting on that side. The remaining pressures give a resultant thrust on the side opposite the opening; as well as permitting exhaust to escape. Using a nozzle increases the forces further, in fact multiplies the thrust as a function of the area ratio of the nozzle, since the pressures also act on the nozzle. As a side effect the pressures act on the exhaust in the opposite direction and accelerate this to very high speeds (in accordance with Newton's Third Law).

If propellant gas is continuously added to the chamber then this disequilibrium of pressures can be maintained for as long as propellant remains.

It turns out (from conservation of momentum) that the speed of the exhaust of a rocket determines how much momentum increase is created for a given amount of propellant, and this is termed a rocket's specific impulse.


As the remaining propellant decreases, the vehicle becomes lighter and its acceleration tends to increase, until eventually it runs out of propellant. This means that much of the speed change occurs towards the end of the burn, when the vehicle is much lighter.

Forces on a rocket in flight

The general study of the forces on a rocket or other spacecraft is called astrodynamics.

Flying rockets are primarily affected by the following:

- Thrust from the engine(s)
- Gravity from celestial bodies
- Drag, if moving in the atmosphere
- Lift, usually a relatively small effect except for rocket-powered aircraft

In addition, the inertia/centrifugal pseudo-force can be significant due to the path of the rocket around the center of a celestial body; when high enough speeds in the right direction and altitude are achieved a stable orbit or escape velocity is obtained.

During a rocket launch, there is a point of maximum aerodynamic drag called Max Q. This determines the minimum aerodynamic strength of the vehicle.

Unless deliberate control efforts are made, these forces, with a stabilizing tail present, will naturally cause the vehicle to follow a trajectory termed a gravity turn, and this trajectory is often used at least during the initial part of a launch. This means that the vehicle can maintain a low or even zero angle of attack, which minimizes transverse stress on the launch vehicle and allows for a weaker, and thus lighter, launch vehicle.

Net thrust

The thrust of a rocket is often deliberately varied over a flight, to provide a way to control the airspeed of the vehicle so as to minimize aerodynamic losses but also so as to limit g-forces that would otherwise occur during the flight as the propellant mass decreases, which could damage the vehicle, crew or payload.

Below is an approximate equation for calculating the gross thrust of a rocket:

F = ṁ·Ve + Ae·(pe − pamb)

where:

ṁ = propellant flow (kg/s or lb/s)
Ve = jet velocity at nozzle exit plane (m/s or ft/s)
Ae = flow area at nozzle exit plane (m² or ft²)
pe = static pressure at nozzle exit plane (Pa or lb/ft²)
pamb = ambient (or atmospheric) pressure (Pa or lb/ft²)

Since, unlike a jet engine, a conventional rocket motor lacks an air intake, there is no 'ram drag' to deduct from the gross thrust. Consequently the net thrust of a rocket motor is equal to the gross thrust.

The ṁ·Ve term represents the momentum thrust, which remains constant at a given throttle setting, whereas the Ae·(pe − pamb) term represents the pressure thrust. At full throttle, the net thrust of a rocket motor improves slightly with increasing altitude, because the reducing atmospheric pressure increases the pressure thrust term.
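
As a minimal numerical sketch of this equation (the figures below are illustrative, not data for any particular engine):

    # Rocket gross thrust = momentum thrust + pressure thrust.

    def gross_thrust(mdot: float, v_e: float, a_e: float,
                     p_e: float, p_amb: float) -> float:
        """mdot in kg/s, v_e in m/s, a_e in m^2, pressures in Pa; newtons."""
        return mdot * v_e + a_e * (p_e - p_amb)

    # 100 kg/s at 3000 m/s through a 0.5 m^2 exit with 40 kPa exit pressure:
    print(gross_thrust(100, 3000, 0.5, 40e3, 101.3e3))  # sea level: ~269 kN
    print(gross_thrust(100, 3000, 0.5, 40e3, 0.0))      # vacuum:    320 kN

The two prints show the altitude effect directly: as ambient pressure falls, the pressure thrust term grows.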

Specific impulse

As can be seen from the thrust equation, the effective speed of the exhaust, Ve, has a large impact on the amount of thrust produced from a particular quantity of fuel burnt per second. The thrust-seconds (impulse) per unit of propellant is called specific impulse (Isp) or effective exhaust velocity, and this is one of the most important figures that describes a rocket's performance.

Vacuum Isp

Due to the specific impulse varying with pressure, a quantity that is easy to compare and calculate with is useful. Because rockets choke at the throat, and because the supersonic exhaust prevents external pressure influences travelling upstream, it turns out that the exit pressure is ideally exactly proportional to the propellant flow ṁ, provided the mixture ratios and combustion efficiencies are maintained. It is thus quite usual to rearrange the above equation slightly:

F = ṁ·Ve,vac − Ae·pamb

and so define the vacuum Isp to be:

Ve,vac = Cf · c*

where:

c* = the speed-of-sound constant at the throat (the characteristic velocity)
Cf = the thrust coefficient constant of the nozzle (typically between 0.8 and 1.9)

And hence: F = ṁ·Cf·c* − Ae·pamb
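
A small sketch of these relations (the c* and Cf values below are typical orders of magnitude, assumed for illustration only):

    # Thrust from the vacuum effective exhaust velocity Ve,vac = Cf * c*.

    def thrust(mdot: float, c_star: float, cf: float,
               a_e: float, p_amb: float) -> float:
        """mdot kg/s, c_star m/s, a_e m^2, p_amb Pa; returns newtons."""
        ve_vac = cf * c_star              # vacuum effective exhaust velocity
        return mdot * ve_vac - a_e * p_amb

    # c* = 1800 m/s, Cf = 1.6, 100 kg/s, 0.5 m^2 exit area:
    print(thrust(100, 1800, 1.6, 0.5, 101.3e3))  # sea level: ~237 kN
    print(thrust(100, 1800, 1.6, 0.5, 0.0))      # vacuum:    288 kN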


Delta-v (rocket equation)

The delta-v capacity of a rocket is the theoretical total change in velocity that a rocket can achieve without any external interference (without air drag or gravity or other forces).

The delta-v that a rocket vehicle can provide can be calculated from the Tsiolkovsky rocket equation:

Δv = Ve · ln(m0 / m1)

where:

m0 is the initial total mass, including propellant, in kg (or lb)
m1 is the final total mass, in kg (or lb)
Ve is the effective exhaust velocity, in m/s (or ft/s)
Δv is the delta-v, in m/s (or ft/s)

Delta-v can also be calculated for a particular manoeuvre; for example the delta-v to launch from the surface of the Earth to Low earth orbit is about 9.7 km/s, which leaves the vehicle with a sideways speed of about 7.8 km/s at an altitude of around 200 km. In this manoeuvre about 1.9 km/s is lost in air drag, gravity drag and gaining altitude.
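
The equation is easy to use in either direction: given the mass ratio, compute the delta-v; or given a mission delta-v, compute the required mass ratio. A minimal sketch with illustrative numbers:

    import math

    def delta_v(ve: float, m0: float, m1: float) -> float:
        """Tsiolkovsky rocket equation; ve in m/s, masses in kg."""
        return ve * math.log(m0 / m1)

    def mass_ratio_for(dv: float, ve: float) -> float:
        """Required m0/m1 for a given delta-v."""
        return math.exp(dv / ve)

    # Assuming ve = 4400 m/s (hydrogen/oxygen class) and 9700 m/s to orbit:
    print(round(mass_ratio_for(9700.0, 4400.0), 1))   # ~9.1
    print(round(delta_v(4400.0, 9.1, 1.0)))           # ~9716 m/s back again

A mass ratio of about 9 means roughly 89% of the liftoff mass must be propellant, which is why the mass ratios and staging discussed below matter so much.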

Mass ratios

Mass ratio is the ratio between the initial fuelled mass and the mass after the 'burn'. Everything else being equal, a high mass ratio is desirable for good performance, since it indicates that the rocket is lightweight and hence performs better, for essentially the same reasons that low weight is desirable in sports cars.

Rockets, as a group, have the highest thrust-to-weight ratio of any type of engine, and this helps vehicles achieve high mass ratios, which improves the performance of flights. The higher this ratio, the less engine mass needs to be carried, which permits the carrying of even more propellant; this enormously improves performance.

Achievable mass ratios are highly dependent on many factors such as propellant type, the design of engine the vehicle uses, structural safety margins and construction techniques.

Vehicle                           | Takeoff Mass            | Final Mass             | Mass ratio | Mass fraction
Ariane 5 (vehicle + payload)      | 746,000 kg              | 2,700 kg + 16,000 kg   | 39.9       | 0.975
Titan 23G first stage             | 258,000 lb (117,020 kg) | 10,500 lb (4,760 kg)   | 24.6       | 0.959
Saturn V                          | 3,038,500 kg            | 13,300 kg + 118,000 kg | 23.1       | 0.957
Space Shuttle (vehicle + payload) | 2,040,000 kg            | 104,000 kg + 28,800 kg | 15.4       | 0.935
Saturn 1B (stage only)            | 448,648 kg              | 41,594 kg              | 10.7       | 0.907
V-2                               | 12.8 ton (13,000 kg)    | n/a                    | 3.85       | 0.74
X-15                              | 34,000 lb (15,420 kg)   | 14,600 lb (6,620 kg)   | 2.3        | 0.57
Concorde                          | 400,000 lb              | n/a                    | 2          | 0.5
747                               | 800,000 lb              | n/a                    | 2          | 0.5
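
The table entries can be checked directly; for example, the Saturn V row:

    # Mass ratio = takeoff mass / final mass; mass fraction = propellant share.
    takeoff = 3_038_500.0            # kg
    final   = 13_300.0 + 118_000.0   # kg

    print(round(takeoff / final, 1))        # 23.1  (mass ratio)
    print(round(1.0 - final / takeoff, 3))  # 0.957 (mass fraction)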

Staging

Often, the required velocity (delta-v) for a mission is unattainable by any single rocket because the propellant, tankage, structure, guidance, valves and engines and so on, take a particular minimum percentage of take-off mass.

The mass ratios that can be achieved with a single set of fixed rocket engines and tankage vary depending on the acceleration required, construction materials, tank layout, engine type and propellants used; for example, the first stage of the Saturn V, carrying the weight of the upper stages, was able to achieve a mass ratio of about 10.

This problem is frequently solved by staging — the rocket sheds excess weight (usually empty tankage and associated engines) during launch to reduce its weight and effectively increase its mass ratio. Staging is either serial where the rockets light after the previous stage has fallen away, or parallel, where rockets are burning together and then detach when they burn out.
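
A short sketch of why staging helps, using made-up masses and a fixed exhaust velocity (both assumptions, chosen only to make the comparison clear):

    import math

    ve = 3500.0  # effective exhaust velocity, m/s (assumed)

    def dv(m0: float, m1: float) -> float:
        return ve * math.log(m0 / m1)

    # Single stage: 100 t at liftoff, 90 t propellant, 9 t dry, 1 t payload.
    print(round(dv(100.0, 10.0)))   # ~8059 m/s

    # Same totals split serially: stage 1 = 70 t propellant + 7 t dry,
    # stage 2 = 20 t propellant + 2 t dry, 1 t payload.
    dv1 = dv(100.0, 30.0)           # burn stage 1, then drop its 7 t dry mass
    dv2 = dv(23.0, 3.0)             # stage 2 plus payload flies on alone
    print(round(dv1 + dv2))         # ~11343 m/s

Shedding the first stage's empty tankage before the second burn raises the effective mass ratio, buying roughly 40% more delta-v in this example.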

Typically, the acceleration of a rocket increases with time (if the thrust stays the same) as the weight of the rocket decreases as propellant is burned. Discontinuities in acceleration will occur when stages burn out, often starting at a lower acceleration with each new stage firing.

Energy efficiency

Rocket launch vehicles take-off with a great deal of flames, noise and drama, and it might seem obvious that they are grievously inefficient. However while they are far from perfect, their energy efficiency is not as bad as might be supposed.


The energy density of rocket propellant is around 1/3 that of conventional hydrocarbon fuels; the bulk of the mass is in the form of (often relatively inexpensive) oxidiser. Nevertheless, at take-off the rocket has a great deal of energy in the form of fuel and oxidiser stored within the vehicle, and it is of course desirable that as much as possible of the energy stored in the propellant ends up as kinetic or potential energy of the body of the rocket.

Energy from the fuel is lost in air drag and gravity drag and is used to gain altitude. However, much of the lost energy ends up in the exhaust.

100% efficiency within the engine (ηc) would mean that all of the heat energy of the combustion products is converted into kinetic energy of the jet. This is not possible, but the high expansion ratio nozzles that can be used with rockets come surprisingly close: when the nozzle expands the gas, the gas is cooled and accelerated, and an energy efficiency of up to 70% can be achieved. Most of the rest is heat energy in the exhaust that is not recovered. This compares very well with other engine designs. The high efficiency is a consequence of the fact that rocket combustion can be performed at very high temperatures and the gas is finally released at much lower temperatures, giving good Carnot efficiency.

However, engine efficiency is not the whole story. In common with many jet-based engines, but particularly in rockets due to their high and typically fixed exhaust speeds, rocket vehicles are extremely inefficient at low speeds irrespective of the engine efficiency. The problem is that at low speeds, the exhaust carries away a huge amount of kinetic energy rearward. This phenomenon is termed propulsive efficiency (ηp).

However, as speeds rise, the resultant exhaust speed goes down, and the overall vehicle energetic efficiency rises, reaching a peak of around 100% of the engine efficiency when the vehicle is travelling exactly at the same speed that the exhaust is emitted. In this case the exhaust would ideally stop dead in space behind the moving vehicle, taking away zero energy, and from conservation of energy, all the energy would end up in the vehicle. The efficiency then drops off again at even higher speeds as the exhaust ends up travelling forwards behind the vehicle.

From these principles it can be shown that the propulsive efficiency ηp for a rocket moving at speed u with an exhaust velocity c is:

ηp = 2 (u/c) / (1 + (u/c)²)

And the overall energy efficiency η is:

η = ηpηc
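A direct transcription of the propulsive-efficiency formula, showing the peak at u = c described above:

```python
def eta_p(u_over_c):
    """Rocket propulsive efficiency as a function of the speed ratio u/c."""
    r = u_over_c
    return 2 * r / (1 + r * r)

for r in (0.1, 0.5, 1.0, 2.0):
    print(r, round(eta_p(r), 3))
# 0.1 -> 0.198, 0.5 -> 0.8, 1.0 -> 1.0 (peak at u = c), 2.0 -> 0.8
```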


Since the energy ultimately comes from fuel, these joint considerations mean that rockets are mainly useful when a very high speed is required, such as for ICBMs or orbital launch, and they are rarely if ever used for general aviation. For example, from the equation, with an ηc of 0.7, a rocket flying at Mach 0.85 (the cruise speed of most aircraft) with an exhaust velocity of Mach 10 would have a predicted overall energy efficiency of 5.9%, whereas a conventional, modern, air-breathing jet engine achieves closer to 30% efficiency or more. Thus a rocket would need about 5x more energy; and allowing for the ~3x lower specific energy of rocket propellant compared with conventional aviation fuel, roughly 15x more mass of propellant would need to be carried for the same journey.

Thus jet engines, which have a better match between speed and jet exhaust speed, such as turbofans (in spite of their worse ηc), dominate for subsonic and supersonic atmospheric use, while rockets work best at hypersonic speeds. On the other hand, rockets also see many short-range, relatively low-speed military applications where their low-speed inefficiency is outweighed by their extremely high thrust and hence high accelerations.

Safety, reliability and accidents

Rockets are not inherently highly dangerous. In military usage quite adequate reliability is obtained.

Because of the enormous chemical energy in all useful rocket propellants (greater energy per weight than explosives, but lower than gasoline), accidents can and have happened. The number of people injured or killed is usually small because of the great care typically taken, but this record is not perfect.

Spacecraft propulsion

Spacecraft propulsion is any method used to change the velocity of spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by exhausting a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine.

All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use it for north-south stationkeeping. Interplanetary vehicles mostly use chemical rockets as well, although a few have experimentally used ion thrusters (a form of electric propulsion) with great success.

The necessity for a propulsion system

Artificial satellites must be launched into orbit, and once there they must be placed in their nominal orbit. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections (orbital stationkeeping). Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. When a satellite has exhausted its ability to adjust its orbit, its useful life is over.

Spacecraft designed to travel further also need propulsion methods. They need to be launched out of the Earth's atmosphere just as satellites do. Once there, they need to leave orbit and move around.

For interplanetary travel, a spacecraft must use its engines to leave Earth orbit. Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term trajectory adjustments. In between these adjustments, the spacecraft simply falls freely along its orbit. The simplest fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking are sometimes used for this final orbital adjustment.
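The delta-v of the two Hohmann burns follows from the vis-viva equation; a sketch for the heliocentric Earth-to-Mars case (mean orbital radii, planetary gravity wells ignored):

```python
from math import sqrt

MU_SUN = 1.32712e20  # Sun's gravitational parameter, m^3/s^2

def hohmann_burns(r1, r2, mu=MU_SUN):
    """Delta-v (m/s) of the departure and arrival burns between circular orbits."""
    dv1 = sqrt(mu / r1) * (sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = sqrt(mu / r2) * (1 - sqrt(2 * r1 / (r1 + r2)))
    return dv1, dv2

dv1, dv2 = hohmann_burns(1.496e11, 2.279e11)  # Earth and Mars orbital radii, m
print(round(dv1), round(dv2))  # ~2,943 m/s to depart, ~2,647 m/s to arrive
```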

Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun or constantly thrusting along its direction of motion to increase its distance from the Sun.

Spacecraft for interstellar travel also need propulsion methods. No such spacecraft has yet been built, but many designs have been discussed. Since interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival will be a formidable challenge for spacecraft designers.

Effectiveness of propulsion systems

When in space, the purpose of a propulsion system is to change the velocity v of a spacecraft. Since this is more difficult for more massive spacecraft, designers generally discuss momentum, mv. The amount of change in momentum is called impulse. So the goal of a propulsion method in space is to create an impulse.

When launching a spacecraft from the Earth, a propulsion method must overcome a higher gravitational pull to provide a net positive acceleration. In orbit, any additional impulse, even very tiny, will result in a change in the orbit path.

The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for maneuvering in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.

The Earth's surface is situated fairly deep in a gravity well and it takes a velocity of 11.2 kilometers/second (escape velocity) or more to escape from it. As human beings evolved in a gravitational field of 1g (9.8 m/s²), an ideal propulsion system would be one that provides a continuous acceleration of 1g (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from all the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones.

The law of conservation of momentum means that in order for a propulsion method to change the momentum of a space craft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass.

In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. In a conventional solid, liquid, or hybrid rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor), while the ions provide the reaction mass.

When discussing the efficiency of a propulsion system, designers often focus on effectively using the reaction mass. Reaction mass must be carried along with the rocket and is irretrievably consumed when used. One way of measuring the amount of impulse that can be obtained from a fixed amount of reaction mass is the specific impulse, the impulse per unit weight-on-Earth (typically designated by Isp). The unit for this value is seconds. Since the weight on Earth of the reaction mass is often unimportant when discussing vehicles in space, specific impulse can also be discussed in terms of impulse per unit mass. This alternate form of specific impulse uses the same units as velocity (e.g. m/s), and in fact it is equal to the effective exhaust velocity of the engine (typically designated ve). Confusingly, both values are sometimes called specific impulse. The two values differ by a factor of gn, the standard acceleration due to gravity 9.80665 m/s² (Ispgn = ve).
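The conversion between the two forms of specific impulse is a single multiplication; for instance:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity (m/s) from specific impulse in seconds."""
    return isp_seconds * G0

print(exhaust_velocity(450))  # ~4,413 m/s, roughly hydrogen/oxygen class
print(exhaust_velocity(300))  # ~2,942 m/s, roughly kerosene or solid class
```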

A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the energy required for that impulse is proportional to the square of the exhaust velocity, so that more mass-efficient engines require much more energy, and are typically less energy efficient. This is a problem if the engine is to provide a large amount of thrust. To generate a large amount of impulse per second, it must use a large amount of energy per second. So highly (mass) efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-efficiency engine designs also provide very low thrust.

Delta-v and propellant use

Burning the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed 'delta-v' (Δv).

If the exhaust velocity is constant then the total Δv of a vehicle can be calculated using the rocket equation, where M is the mass of propellant, P is the mass of the payload (including the rocket structure), and ve is the velocity of the rocket exhaust. This is known as the Tsiolkovsky rocket equation:

Δv = ve ln((M + P) / P)

For historical reasons, as discussed above, ve is sometimes written as

ve = Ispgo

where Isp is the specific impulse of the rocket, measured in seconds, and go is the gravitational acceleration at sea level.

For a high delta-v mission, the majority of the spacecraft's mass needs to be reaction mass. Since a rocket must carry all of its reaction mass, most of the initially-expended reaction mass goes towards accelerating reaction mass rather than payload. If the rocket has a payload of mass P, the spacecraft needs to change its velocity by Δv, and the rocket engine has exhaust velocity ve, then the mass M of reaction mass which is needed can be calculated using the rocket equation and the formula for Isp:

M = P (e^(Δv/ve) − 1)

For Δv much smaller than ve, this equation is roughly linear, and little reaction mass is needed. If Δv is comparable to ve, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass.

For a mission involving, for example, launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome using fuel. It is typical to combine these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3-10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer.

Power use and propulsive efficiency

Although solar power and nuclear power are virtually unlimited sources of energy, the maximum power they can supply is substantially proportional to the mass of the powerplant. For fixed power, with a large ve, which is desirable to save propellant mass, it turns out that the maximum acceleration is inversely proportional to ve. Hence the time to reach a required delta-v is proportional to ve. Thus ve should not be too large. It might be thought that adding power generation would be helpful; however, this takes mass away from payload, and ultimately reaches a limit as the payload fraction tends to zero.

For all reaction engines (such as rockets and ion drives) some energy must go into accelerating the reaction mass. Every engine will waste some energy, but even assuming 100% efficiency, to accelerate a particular mass of exhaust the engine will need energy amounting to

E = ½ M ve²

which is simply the energy needed to accelerate the exhaust. This energy is not necessarily lost; some of it usually ends up as kinetic energy of the vehicle, and the rest is wasted in residual motion of the exhaust.

Comparing the rocket equation (which shows how much energy ends up in the final vehicle) and the above equation (which shows the total energy required) shows that even with 100% engine efficiency, certainly not all the energy supplied ends up in the vehicle; some of it, indeed usually most of it, ends up as kinetic energy of the exhaust.

The exact amount depends on the design of the vehicle, and the mission. However there are some useful fixed points:

for a given mission delta-v, there is a particular Isp that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about ⅔ of the mission delta-v (see the energy computed from the rocket equation; a numeric check appears after this list). Drives with a specific impulse that is both high and fixed, such as ion thrusters, have exhaust velocities that can be enormously higher than this ideal for many missions.

if the exhaust velocity can be made to vary so that at each instant it is equal and opposite to the vehicle velocity, then the absolute minimum energy usage is achieved. When this is achieved, the exhaust stops in space and has no kinetic energy, and the propulsive efficiency is 100%: all the energy ends up in the vehicle (in principle such a drive would be 100% efficient; in practice there would be thermal losses from within the drive system and residual heat in the exhaust). However, in most cases this uses an impractical quantity of propellant, but it is a useful theoretical consideration. Another complication is that unless the vehicle is moving initially, it cannot accelerate, as the exhaust velocity is zero at zero speed.

Some drives (such as VASIMR or the electrodeless plasma thruster) can actually vary their exhaust velocity significantly. This can help reduce propellant usage or improve acceleration at different stages of the flight. However, the best energetic performance and acceleration are still obtained when the exhaust velocity is close to the vehicle speed. Proposed ion and plasma drives usually have exhaust velocities enormously higher than that ideal (in the case of VASIMR the lowest quoted speed is around 15,000 m/s, compared to a mission delta-v from high Earth orbit to Mars of about 4,000 m/s).
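A numeric check of the ⅔ rule from the first bullet above. The sketch counts all the kinetic energy put into the exhaust for a fixed payload and scans over exhaust velocities (vehicle structure is ignored for simplicity):

```python
from math import exp

def exhaust_energy(dv, ve, payload=1.0):
    """Kinetic energy given to the propellant for a mission of delta-v `dv`:
    propellant mass from the rocket equation, times ve^2 / 2."""
    m_prop = payload * (exp(dv / ve) - 1)
    return 0.5 * m_prop * ve ** 2

dv = 10_000.0  # m/s, an illustrative mission delta-v
best_ve = min(range(1_000, 30_000, 50), key=lambda ve: exhaust_energy(dv, ve))
print(best_ve / dv)  # ~0.63: the minimum-energy exhaust velocity is ~2/3 of dv
```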

Example


Suppose we want to send a 10,000 kg space probe to Mars. The required Δv from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. (A manned craft would need to take a faster route and use more fuel). For the sake of argument, let us say that the following thrusters may be used:

Engine | Effective exhaust velocity (km/s) | Specific impulse (s) | Fuel mass (kg) | Energy required (GJ) | Energy per kg of propellant | Minimum power/thrust | Power generator mass/thrust*
Solid rocket | 1 | 100 | 190,000 | 95 | 500 kJ | 0.5 kW/N | N/A
Bipropellant rocket | 5 | 500 | 8,200 | 103 | 12.6 MJ | 2.5 kW/N | N/A
Ion thruster | 50 | 5,000 | 620 | 775 | 1.25 GJ | 25 kW/N | 25 kg/N
Advanced electrically powered drive | 1,000 | 100,000 | 30 | 15,000 | 500 GJ | 500 kW/N | 500 kg/N

* assumes a specific power of 1 kW/kg

Observe that the more fuel-efficient engines can use far less fuel; their fuel mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, note also that these require a large total amount of energy. For Earth launch, engines require a thrust-to-weight ratio of more than unity. To do this, the electric engines would have to be supplied with gigawatts of power, equivalent to a major metropolitan generating station. From the table it can be seen that this is clearly impractical with current power sources.
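The fuel-mass and energy columns of the table can be reproduced from the rocket equation; a sketch (payload and delta-v as in the example):

```python
from math import exp

DV = 3_000.0        # m/s, LEO-to-Mars delta-v from the example
PAYLOAD = 10_000.0  # kg

def fuel_and_energy(ve):
    """Propellant mass (kg) from the rocket equation, and the energy (J)
    needed to accelerate that propellant to ve at 100% engine efficiency."""
    m = PAYLOAD * (exp(DV / ve) - 1)
    return m, 0.5 * m * ve ** 2

for name, ve in [("solid rocket", 1e3), ("bipropellant", 5e3),
                 ("ion thruster", 50e3), ("advanced electric", 1e6)]:
    m, e = fuel_and_energy(ve)
    print(f"{name}: {m:,.0f} kg, {e / 1e9:,.0f} GJ")
# solid rocket: 190,855 kg, 95 GJ ... matching the table rows above
```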

Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, and would be insufficient for launching from the Earth; but in orbit, where there is no friction, over long periods the velocity will eventually be achieved. For example, it took SMART-1 more than a year to reach the Moon, whereas with a chemical rocket it takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost.

Mission planning frequently involves adjusting and choosing the propulsion system according to the mission delta-v needs, so as to minimise the total cost of the project, including trading off greater or lesser use of fuel and launch costs of the complete vehicle.

Space propulsion methods


Propulsion methods can be classified based on their means of accelerating the reaction mass. There are also some special methods for launches, planetary arrivals, and landings.

Reaction engines

Rocket engines

Most rocket engines are internal combustion heat engines (although non-combusting forms exist). Rocket engines generally produce a high temperature reaction mass, as a hot gas. This is achieved by combusting a solid, liquid or gaseous fuel with an oxidiser within a combustion chamber. The extremely hot gas is then allowed to escape through a high-expansion-ratio nozzle. This bell-shaped nozzle is what gives a rocket engine its characteristic shape. The effect of the nozzle is to dramatically accelerate the mass, converting most of the thermal energy into kinetic energy. Exhaust speeds as high as 10 times the speed of sound at sea level are common.

Ion propulsion rockets can heat a plasma or charged gas inside a magnetic bottle and release it via a magnetic nozzle, so that no solid matter need come in contact with the plasma. The machinery to do this is complex, but research into nuclear fusion has developed methods, some of which have been proposed for use in propulsion systems, and some of which have been tested in a lab.

Electromagnetic acceleration of reaction mass

Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine very typically uses electric power, first to ionise atoms, and then uses a voltage gradient to accelerate the ions to high exhaust velocities.

For these drives, at the highest exhaust speeds, energetic efficiency and thrust are both inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and thus, with practical power sources, provide low thrust, but use hardly any fuel.

For some missions, particularly reasonably close to the Sun, solar energy may be sufficient, and has very often been used, but for others further out or at higher power, nuclear energy is necessary; engines drawing their power from a nuclear source are called nuclear electric rockets.

With any current source of electrical power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the amount of thrust that can be produced to a small value. Power generation adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle.

Current nuclear power generators are approximately half the weight of solar panels per watt of power supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft shows some potential. However, the dissipation of waste heat from any power plant may make any propulsion system requiring a separate power source infeasible for interstellar travel.

Some electromagnetic methods:

Ion thrusters (accelerate ions first and later neutralize the ion beam with an electron stream emitted from a cathode called a neutralizer)

o Electrostatic ion thruster
o Field Emission Electric Propulsion
o Hall effect thruster
o Colloid thruster

Plasma thrusters (where both ions and electrons are accelerated simultaneously, no neutralizer is required)

o Magnetoplasmadynamic thruster
o Helicon Double Layer Thruster
o Electrodeless plasma thruster
o Pulsed plasma thruster
o Pulsed inductive thruster
o Variable specific impulse magnetoplasma rocket (VASIMR)

Mass drivers (for propulsion)

Systems without reaction mass carried within the spacecraft

The law of conservation of momentum states that any engine which uses no reaction mass cannot move the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there are gravitational fields, magnetic fields, solar wind and solar radiation. Various propulsion methods try to take advantage of these. However, since these phenomena are diffuse in nature, corresponding propulsion structures need to be proportionately large.

There are several different space drives that need little or no reaction mass to function. A tether propulsion system employs a long cable with a high tensile strength to change a spacecraft's orbit, such as by interaction with a planet's magnetic field or through momentum exchange with another object. Solar sails rely on radiation pressure from electromagnetic energy, but they require a large collection surface to function effectively. The magnetic sail deflects charged particles from the solar wind with a magnetic field, thereby imparting momentum to the spacecraft. A variant is the mini-magnetospheric plasma propulsion system, which uses a small cloud of plasma held in a magnetic field to deflect the Sun's charged particles.

For changing the orientation of a satellite or other space vehicle, conservation of angular momentum does not pose a similar constraint. Thus many satellites use momentum wheels to control their orientations. These cannot be the only system for controlling satellite orientation, as the angular momentum built up due to torques from external forces such as solar, magnetic, or tidal forces eventually needs to be "bled off" using a secondary system.

Gravitational slingshots can also be used to carry a probe onward to other destinations.

Planetary and atmospheric spacecraft propulsion

Launch mechanisms

High thrust is of vital importance for Earth launch; thrust has to be greater than weight. Many of the propulsion methods above give a thrust-to-weight ratio of much less than 1, and so cannot be used for launch.

All current spacecraft use chemical rocket engines (bipropellant or solid-fuel) for launch. Other power sources such as nuclear have been proposed, and tested, but safety, environmental and political considerations have so far curtailed their use.

One advantage that spacecraft have in launch is the availability of infrastructure on the ground to assist them. Proposed non-rocket spacelaunch ground-assisted launch mechanisms include:

Space elevator (a geostationary tether to orbit)
Launch loop (a very fast rotating loop about 80 km tall)
Space fountain (a very tall building held up by a stream of masses fired from its base)
Orbital ring (a ring around the Earth with spokes hanging down off bearings)
Hypersonic skyhook (a fast spinning orbital tether)
Electromagnetic catapult (railgun, coilgun) (an electric gun)
Space gun (Project HARP, ram accelerator) (a chemically powered gun)
Laser propulsion (Lightcraft) (rockets powered from ground-based lasers)

Airbreathing engines for orbital launch

Studies generally show that conventional air-breathing engines, such as ramjets or turbojets are basically too heavy (have too low a thrust/weight ratio) to give any significant performance improvement when installed on a launch vehicle itself.


However, launch vehicles can be air launched from separate lift vehicles (e.g. B-29, Pegasus Rocket and White Knight) which do use such propulsion systems.

On the other hand, very lightweight or very high speed engines have been proposed that take advantage of the air during ascent:

SABRE - a lightweight hydrogen fuelled turbojet with precooler
ATREX - a lightweight hydrogen fuelled turbojet with precooler
Liquid air cycle engine - a hydrogen fuelled jet engine that liquifies the air before burning it in a rocket engine
Scramjet - jet engines that use supersonic combustion

Normal rocket launch vehicles fly almost vertically before rolling over at an altitude of some tens of kilometres and then burning sideways for orbit; this initial vertical climb wastes propellant but is optimal as it greatly reduces air drag. Airbreathing engines burn propellant much more efficiently and this would permit a far flatter launch trajectory; the vehicles would typically fly approximately tangentially to the Earth's surface until leaving the atmosphere, then perform a rocket burn to bridge the final delta-v to orbital velocity.

Planetary arrival and landing

When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using all the methods listed above (provided they can generate a high enough thrust), but there are a few methods that can take advantage of planetary atmospheres and/or surfaces.

Aerobraking allows a spacecraft to reduce the high point of an elliptical orbit by repeated brushes with the atmosphere at the low point of the orbit. This can save a considerable amount of fuel since it takes much less delta-v to enter an elliptical orbit compared to a low circular orbit. Since the braking is done over the course of many orbits, heating is comparatively minor, and a heat shield is not required. This has been done on several Mars missions such as Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter, and at least one Venus mission, Magellan.

Aerocapture is a much more aggressive manoeuvre, converting an incoming hyperbolic orbit to an elliptical orbit in one pass. This requires a heat shield and much trickier navigation, since it must be completed in one pass through the atmosphere, and unlike aerobraking no preview of the atmosphere is possible. If the intent is to remain in orbit, then at least one more propulsive manoeuvre is required after aerocapture; otherwise the low point of the resulting orbit will remain in the atmosphere, resulting in eventual re-entry. Aerocapture has not yet been tried on a planetary mission, but the re-entry skips by Zond 6 and Zond 7 upon lunar return were aerocapture manoeuvres, since they turned a hyperbolic orbit into an elliptical orbit. On these missions, since there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee.

Parachutes can land a probe on a planet with an atmosphere, usually after the atmosphere has scrubbed off most of the velocity, using a heat shield.

Airbags can soften the final landing.
Lithobraking, or stopping by simply smashing into the target, is usually done by accident. However, it may be done deliberately with the probe expected to survive (see, for example, Deep Space 2); very sturdy probes and low approach velocities are required.

Proposed spacecraft methods that may violate the laws of physics

In addition, a variety of hypothetical propulsion techniques have been considered that would require entirely new principles of physics to realize and that may not actually be possible. To date, such methods are highly speculative and include:

Diametric drive
Pitch drive
Bias drive
Disjunction drive
Alcubierre drive (a form of Warp drive)
Differential sail
Wormholes - theoretically possible, but impossible in practice with current technology
Antigravity - requires the concept of antigravity; theoretically impossible
Reactionless drives - breaks the law of conservation of momentum; theoretically impossible
EmDrive - tries to circumvent the law of conservation of momentum; may be theoretically impossible
A "hyperspace" drive based upon Heim theory

Table of methods and their specific impulse

Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods.

Four numbers are shown. The first is the effective exhaust velocity: the equivalent speed at which the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method; thrust, power consumption and other factors can be. However:

if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above)


if it is much more than the delta-v, then proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes a proportionally longer time

The second and third are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period. (This result does not apply when the object is significantly influenced by gravity.)

The fourth is the maximum delta-v this technique can give (without staging). For rocket-like propulsion systems this is a function of mass fraction and exhaust velocity. Mass fraction for rocket-like systems is usually limited by propulsion system weight and tankage weight. For a system to achieve this limit, typically the payload may need to be a negligible percentage of the vehicle, and so the practical limit on some systems can be much lower.

Propulsion methods

Method | Effective exhaust velocity (km/s) | Thrust (N) | Firing duration | Maximum delta-v (km/s)

Propulsion methods in current use

Solid rocket | 1 - 4 | 10^3 - 10^7 | minutes | ~7
Hybrid rocket | 1.5 - 4.2 | <0.1 - 10^7 | minutes | >3
Monopropellant rocket | 1 - 3 | 0.1 - 100 | milliseconds - minutes | ~3
Bipropellant rocket | 1 - 4.7 | 0.1 - 10^7 | minutes | ~9
Tripropellant rocket | 2.5 - 5.3 | | minutes | ~9
Resistojet rocket | 2 - 6 | 10^-2 - 10 | minutes |
Arcjet rocket | 4 - 16 | 10^-2 - 10 | minutes |
Hall effect thruster (HET) | 8 - 50 | 10^-3 - 10 | months/years | >100
Electrostatic ion thruster | 15 - 80 | 10^-3 - 10 | months/years | >100
Field Emission Electric Propulsion (FEEP) | 100 - 130 | 10^-6 - 10^-3 | months/years |
Pulsed plasma thruster (PPT) | ~20 | ~0.1 | ~2,000 - ~10,000 hours |
Pulsed inductive thruster (PIT) | 50 | 20 | months |
Nuclear electric rocket | (as the electric propulsion method used) | | |

Currently feasible propulsion methods

Solar sails | N/A | 9 per km² (at 1 AU) | indefinite | >40
Tether propulsion | N/A | 1 - 10^12 | minutes | ~7
Mass drivers (for propulsion) | 30 - ? | 10^4 - 10^8 | months |
Launch loop | N/A | ~10^4 | minutes | >>11
Orion Project (near-term nuclear pulse propulsion) | 20 - 100 | 10^9 - 10^12 | several days | ~30 - 60
Magnetic field oscillating amplified thruster | 10 - 130 | 0.1 - 1 | days - months | >100
Variable specific impulse magnetoplasma rocket (VASIMR) | 10 - 300 | 40 - 1,200 | days - months | >100
Magnetoplasmadynamic thruster (MPD) | 20 - 100 | 100 | weeks |
Nuclear thermal rocket | 9 | 10^5 | minutes | >~20
Solar thermal rocket | 7 - 12 | 1 - 100 | weeks | >~20
Radioisotope rocket | 7 - 8 | | months |
Air-augmented rocket | 5 - 6 | 0.1 - 10^7 | seconds - minutes | >7?
Liquid air cycle engine | 4.5 | 1000 - 10^7 | seconds - minutes | ?
SABRE | 30/4.5 | 0.1 - 10^7 | minutes | 9.4
Dual mode propulsion rocket | | | |

Technologies requiring further research

Magnetic sails | N/A | | indefinite | indefinite
Mini-magnetospheric plasma propulsion | 200 | ~1 N/kW | months |
Nuclear pulse propulsion (Project Daedalus' drive) | 20 - 1,000 | 10^9 - 10^12 | years | ~15,000
Gas core reactor rocket | 10 - 20 | 10^3 - 10^6 | |
Nuclear salt-water rocket | 100 | 10^3 - 10^7 | half hour |
Beam-powered propulsion | (as the propulsion method powered by the beam) | | |
Fission sail | | | |
Fission-fragment rocket | 1,000 | | |
Nuclear photonic rocket | 300,000 | 10^-5 - 1 | years - decades |
Fusion rocket | 100 - 1,000 | | |
Space elevator | N/A | N/A | indefinite | >12

Significantly beyond current engineering

Antimatter catalyzed nuclear pulse propulsion | 200 - 4,000 | | days - weeks |
Antimatter rocket | 10,000 - 100,000 | | |
Bussard ramjet | 2.2 - 20,000 | | indefinite | ~30,000
Gravitoelectromagnetic toroidal launchers | <300,000 | | |

Testing spacecraft propulsion

Spacecraft propulsion systems are often first statically tested on the Earth's surface, within the atmosphere, but many systems require a vacuum chamber to test fully. Rockets are usually tested at a rocket engine test facility well away from habitation and other buildings for safety reasons. Ion drives are far less dangerous and require much less stringent safety measures; usually only a fairly large vacuum chamber is needed.

Famous static test locations can be found at Rocket Ground Test Facilities.

Some systems cannot be adequately tested on the ground and test launches may be employed at a Rocket Launch Site.

Electric propulsion

Electric propulsion is a form of spacecraft propulsion used in outer space. This type of rocket-like reaction engine utilizes electric energy to obtain thrust from propellant carried with the vehicle. Unlike rocket engines, these kinds of engines do not necessarily have rocket nozzles, and thus many types are not considered true rockets.

While electric thrusters typically offer much higher specific impulse, due to electrical power constraints thrust is weaker compared to chemical thrusters by several orders of magnitude.

The idea of electric propulsion dates back to 1906, when Robert Goddard considered the possibility in his personal notebook. Konstantin Tsiolkovsky published the idea in 1911.

Types of electric propulsion


The various technologies of electric propulsion for spacecraft are usually grouped into three families based on the type of force used to accelerate the ions of the plasma.

Electric propulsion systems can also be characterized in terms of their operation in either steady (continuous firing for a prescribed duration) or unsteady (pulsed firings accumulating to a desired impulse bit) mode.

Electrostatic

If the acceleration is caused mainly by the Coulomb force (i.e. application of a static electric field in the direction of the acceleration), the device is considered electrostatic.

Ion thruster
Hall effect thruster
Field Emission Electric Propulsion
Colloid thruster

Electrothermal

The electrothermal category groups the devices where electromagnetic fields are used to generate a plasma to increase the heat of the bulk propellant. The thermal energy imparted to the propellant gas is then converted into kinetic energy by a nozzle, either of solid material construction or by magnetic means. Low molecular weight gases (e.g. hydrogen, helium, ammonia) are the preferred propellants for this kind of system.

Performance of electrothermal systems in terms of specific impulse (Isp) is somewhat modest (500 to ~1,000 seconds), but exceeds that of cold gas thrusters, monopropellant thrusters, and even most bipropellant thrusters. In the USSR, electrothermal engines have been used since 1971; the Soviet "Meteor-3", "Meteor-Priroda" and "Resurs-O" satellite series and the Russian "Elektro" satellite are equipped with them. Electrothermal systems by Aerojet (MR-510) are currently used on Lockheed-Martin A2100 satellites, using hydrazine as a propellant.

DC arcjet
Microwave arcjet
VASIMR

Electromagnetic

If the ions are accelerated either by the Lorentz force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration, the device is considered electromagnetic.

MPD thruster
Electrodeless plasma thruster
Pulsed inductive thruster

Reliability engineering

Reliability engineering is an engineering field that deals with the study of reliability: the ability of a system or component to perform its required functions under stated conditions for a specified period of time. It is often reported in terms of a probability.

Overview

A Reliability Block Diagram

Reliability may be defined in several ways:

The idea that something is fit for purpose with respect to time;
The capacity of a device or system to perform as designed;
The resistance to failure of a device or system;
The ability of a device or system to perform a required function under stated conditions for a specified period of time;
The probability that a functional unit will perform its required function for a specified interval under stated conditions;
The ability of something to "fail well" (fail without catastrophic consequences).

Reliability engineers rely heavily on statistics, probability theory, and reliability theory. Many engineering techniques are used in reliability engineering, such as reliability prediction, Weibull analysis, thermal management, reliability testing and accelerated life testing. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks that will be performed for that specific system.

The function of reliability engineering is to develop the reliability requirements for the product, establish an adequate reliability program, and perform appropriate analyses and tasks to ensure the product will meet its requirements. These tasks are managed by a reliability engineer, who usually holds an accredited engineering degree and has additional reliability-specific education and training. Reliability engineering is closely associated with maintainability engineering and logistics engineering. Many problems from other fields, such as security engineering, can also be approached using reliability engineering techniques. This article provides an overview of some of the most common reliability engineering tasks. Please see the references for a more comprehensive treatment.

Many types of engineering employ reliability engineers and use the tools and methodology of reliability engineering. For example:

System engineers design complex systems having a specified reliability
Mechanical engineers may have to design a machine or system with a specified reliability
Automotive engineers have reliability requirements for the automobiles (and components) which they design
Electronics engineers must design and test their products for reliability requirements
In software engineering and systems engineering, reliability engineering is the subdiscipline of ensuring that a system (or a device in general) will perform its intended function(s) when operated in a specified manner for a specified length of time

Reliability engineering is performed throughout the entire life cycle of a system, including development, test, production and operation.

Reliability theory

Reliability theory is the foundation of reliability engineering. For engineering purposes, reliability is defined as:

the probability that a device will perform its intended function during a specified period of time under stated conditions.

Mathematically, this may be expressed as

R(t) = 1 − ∫₀ᵗ f(x) dx,

where f(x) is the failure probability density function and t is the length of the period (which is assumed to start from time zero).

Reliability engineering is concerned with four key elements of this definition:

First, reliability is a probability. This means that failure is regarded as a random phenomenon: it is a recurring event, and we do not express any information on individual failures, the causes of failures, or relationships between failures, except that the likelihood for failures to occur varies over time according to the given probability function. Reliability engineering is concerned with meeting the specified probability of success, at a specified statistical confidence level.


Second, reliability is predicated on "intended function:" Generally, this is taken to mean operation without failure. However, even if no individual part of the system fails, but the system as a whole does not do what was intended, then it is still charged against the system reliability. The system requirements specification is the criterion against which reliability is measured.

Third, reliability applies to a specified period of time. In practical terms, this means that a system has a specified chance that it will operate without failure before time t. Reliability engineering ensures that components and materials will meet the requirements during the specified time. Units other than time may sometimes be used: the automotive industry might specify reliability in terms of miles, and the military might specify reliability of a gun for a certain number of rounds fired. A piece of mechanical equipment may have a reliability rating value in terms of cycles of use.

Fourth, reliability is restricted to operation under stated conditions. This constraint is necessary because it is impossible to design a system for unlimited conditions. A Mars Rover will have different specified conditions than the family car. The operating environment must be addressed during design and testing.

Reliability program plan

Many tasks, methods, and tools can be used to achieve reliability. Every system requires a different level of reliability. A commercial airliner must operate under a wide range of conditions. The consequences of failure are grave, but there is a correspondingly higher budget. A pencil sharpener may be more reliable than an airliner, but has a much different set of operational conditions, insignificant consequences of failure, and a much lower budget.

A reliability program plan is used to document exactly what tasks, methods, tools, analyses, and tests are required for a particular system. For complex systems, the reliability program plan is a separate document. For simple systems, it may be combined with the systems engineering management plan. The reliability program plan is essential for a successful reliability program and is developed early during system development. It specifies not only what the reliability engineer does, but also the tasks performed by others. The reliability program plan is approved by top program management.

Reliability requirements


For any system, one of the first tasks of reliability engineering is to adequately specify the reliability requirements. Reliability requirements address the system itself, test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system/subsystem requirements specifications, test plans, and contract statements.

System reliability parameters

Requirements are specified using reliability parameters. The most common reliability parameter is the mean-time-between-failure (MTBF), which can also be specified as the failure rate or the number of failures during a given period. These parameters are very useful for systems that are operated on a regular basis, such as most vehicles, machinery, and electronic equipment. Reliability increases as the MTBF increases. The MTBF is usually specified in hours, but can also be used with other units of measurement such as miles or cycles.
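Under the usual constant-failure-rate assumption, an MTBF translates into a survival probability as follows (the 2,000 h figure is illustrative):

```python
from math import exp

def reliability(t, mtbf):
    """Survival probability under a constant failure rate: R(t) = exp(-t/MTBF)."""
    return exp(-t / mtbf)

print(round(reliability(100, 2000), 3))   # 0.951 for a 100 h mission
print(round(reliability(2000, 2000), 3))  # 0.368: MTBF is not a guaranteed life
```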

In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage.

A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries and missiles. Single-shot reliability is specified as a probability of success, or is subsumed into a related parameter. Single-shot missile reliability may be incorporated into a requirement for the probability of hit.

For such systems, the probability of failure on demand (PFD) is the reliability measure. This PFD is derived from failure rate and mission time for non-repairable systems. For repairable systems, it is obtained from failure rate, mean time to repair (MTTR) and test interval. This measure may not be unique for a given system, as it depends on the kind of demand. In addition to system-level requirements, reliability requirements may be specified for critical subsystems. In all cases, reliability parameters are specified with appropriate statistical confidence intervals.

Reliability modelling

Reliability modelling is the process of predicting or understanding the reliability of a component or system. Two separate fields of investigation are common: the physics of failure approach uses an understanding of the failure mechanisms involved, such as crack propagation or chemical corrosion; the parts stress modelling approach is an empirical method for prediction based on counting the number and type of components of the system, and the stress they undergo during operation.

For systems with a clearly defined failure time (which is sometimes not given for systems with a drifting parameter), the empirical distribution function of these failure times can be determined. This is done in general in an accelerated experiment with increased stress. These experiments can be divided into two main categories:

Early failure rate studies determine the distribution with a decreasing failure rate over the first part of the bathtub curve. Here in general only moderate stress is necessary. The stress is applied for a limited period of time in what is called a censored test. Therefore, only the part of the distribution with early failures can be determined.

In so-called zero defect experiments, only limited information about the failure distribution is acquired. Here the stress, stress time, or the sample size is so low that not a single failure occurs. Due to the insufficient sample size, only an upper limit of the early failure rate can be determined. At any rate, it looks good for the customer if there are no failures.

In a study of the intrinsic failure distribution, which is often a material property, higher stresses are necessary to get failure in a reasonable period of time. Several degrees of stress have to be applied to determine an acceleration model. The empirical failure distribution is often parametrised with a Weibull or a log-normal model.

It is general practice to model the early failure rate with an exponential distribution. This less complex model for the failure distribution has only one parameter: the constant failure rate. In such cases, the chi-square distribution can be used to find the goodness of fit for the estimated failure rate. Compared to a model with a decreasing failure rate, this is quite pessimistic; combined with a zero-defect experiment it becomes even more pessimistic. The effort is greatly reduced in this case: one does not have to determine a second model parameter (e.g. the shape parameter of a Weibull distribution) or its confidence interval (e.g. by a maximum likelihood (MLE) approach), and the sample size is much smaller.

Reliability test requirements

Because reliability is a probability, even highly reliable systems have some chance of failure. However, testing reliability requirements is problematic for several reasons. A single test is insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical. Reliability engineering is used to design a realistic and affordable test program that provides enough evidence that the system meets its requirement. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at 90% confidence level. From this specification, the reliability engineer can design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed.
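One common way to size such a test is the chi-squared relation for a time-terminated test; a sketch using scipy (the exact plan in practice would follow the governing standard):

```python
from scipy.stats import chi2

def mtbf_lower_bound(total_test_hours, failures, confidence=0.90):
    """One-sided lower confidence bound on MTBF: 2T / chi2(conf, 2r + 2)."""
    return 2 * total_test_hours / chi2.ppf(confidence, 2 * failures + 2)

# Demonstrating 1,000 h MTBF at 90% confidence with zero failures
# requires about 2,303 unit-hours of failure-free testing:
print(round(mtbf_lower_bound(2303, 0)))  # ~1000
```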

The combination of reliability parameter value and confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements. Reliability testing may be performed at various levels, such as component, subsystem, and system. Also, many factors must be addressed during testing, such as extreme temperature and humidity, shock, vibration, and heat. Reliability engineering determines an effective test strategy so that all parts are exercised in relevant environments. For systems that must last many years, reliability engineering may be used to design an accelerated life test.

Requirements for reliability tasks

Reliability engineering must also address requirements for various reliability tasks and documentation during system development, test, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332.

Design for reliability

Design for Reliability (DFR) is an emerging discipline that refers to the process of designing reliability into products. This process encompasses several tools and practices, and describes the order of their deployment that an organization needs to have in place in order to drive reliability into its products. Typically, the first step in the DFR process is to set the system's reliability requirements. Reliability must be "designed in" to the system. During system design, the top-level reliability requirements are then allocated to subsystems by design engineers and reliability engineers working together.

Reliability design begins with the development of a model. Reliability models use block diagrams and fault trees to provide a graphical means of evaluating the relationships between different parts of the system. These models incorporate predictions based on parts-count failure rates taken from historical data. While the predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives.

A Fault Tree Diagram

One of the most important design techniques is redundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. An automobile brake light might use two light bulbs. If one bulb fails, the brake light still operates using the other bulb. Redundancy significantly increases system reliability, and is often the only viable means of doing so. However, redundancy is difficult and expensive, and is therefore limited to critical parts of the system. Another design technique, physics of failure, relies on understanding the physical processes of stress, strength and failure at a very detailed level; the material or component can then be re-designed to reduce the probability of failure. Another common design technique is component derating: selecting components whose tolerance significantly exceeds the expected stress, such as using a heavier-gauge wire that exceeds the normal specification for the expected electrical current.
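For independent units in parallel, the system fails only if every unit fails; a sketch with the two brake-light bulbs (the 0.95 per-bulb reliability is an assumed figure):

```python
def parallel_reliability(*unit_reliabilities):
    """Reliability of redundant units that fail independently."""
    p_all_fail = 1.0
    for r in unit_reliabilities:
        p_all_fail *= (1.0 - r)
    return 1.0 - p_all_fail

print(parallel_reliability(0.95, 0.95))  # 0.9975, up from 0.95 for one bulb
```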

Many tasks, techniques and analyses are specific to particular industries and applications. Commonly these include:

Built-in test (BIT)
Failure mode and effects analysis (FMEA)
Reliability simulation modeling
Thermal analysis
Reliability block diagram analysis
Fault tree analysis
Sneak circuit analysis
Accelerated testing
Reliability growth analysis
Weibull analysis
Electromagnetic analysis
Statistical interference

Results are presented during the system design reviews and logistics reviews. Reliability is just one requirement among many system requirements. Engineering trade studies are used to determine the optimum balance between reliability and other requirements and constraints.

Reliability testing

A Reliability Sequential Test Plan

The purpose of reliability testing is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements.

Reliability testing may be performed at several levels. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels. (The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. System reliability is calculated at each test level.

Reliability growth techniques and failure reporting, analysis and corrective action systems (FRACAS) are often employed to improve reliability as testing progresses. The drawbacks to such extensive testing are time and expense. Customers may choose to accept more risk by eliminating some or all lower levels of testing.

It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as accelerated life testing, design of experiments, and simulations.

The desired level of statistical confidence also plays an important role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. Good test requirements ensure that the customer and developer agree in advance on how reliability requirements will be tested.
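The sketch below illustrates the confidence/test-time trade under a common simplifying assumption (exponentially distributed failures): to demonstrate a target MTBF with confidence C and zero allowed failures, the required total test time is T = -MTBF * ln(1 - C). The target MTBF is an assumed figure, not a requirement from any real program.

```python
import math

def zero_failure_test_time(mtbf_hours, confidence):
    """Total failure-free unit-hours needed to demonstrate mtbf_hours
    at the given confidence, assuming exponential failure times."""
    return -mtbf_hours * math.log(1.0 - confidence)

target_mtbf = 2000.0  # assumed requirement, hours
for c in (0.60, 0.80, 0.90, 0.95):
    t = zero_failure_test_time(target_mtbf, c)
    print(f"{c:.0%} confidence -> {t:,.0f} unit-hours of failure-free test")
```

As the output shows, each step up in statistical confidence demands substantially more test time for the same demonstrated reliability, which is exactly the trade-off test plans must balance.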

A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather, and unexpected situations create disagreements between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.

As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule, and available resources. Test plans and procedures are developed for each reliability test, and results are documented in official reports.

Accelerated testing

The purpose of accelerated life testing is to induce field failures in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test the product is expected to fail in the lab just as it would have failed in the field, but in much less time. The main objective of an accelerated test is either of the following:

To discover failure modes
To predict the normal field life from the high-stress lab life

Accelerated testing requires planning, typically along the following steps:

Define the objective and scope of the test
Collect required information about the product
Identify the stress(es)
Determine the level of the stress(es)
Conduct the accelerated test and analyze the accelerated data

Common ways to determine a life-stress relationship are listed below (a sketch of the first model follows the list):

Arrhenius model
Eyring model
Inverse power law model
Temperature-humidity model
Temperature non-thermal model
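As a sketch of the first of these, the Arrhenius model relates life at a use temperature to life at an elevated stress temperature through an acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_stress)]. The activation energy and temperatures below are assumed, illustrative values, not data from any particular product.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Assumed values: 0.7 eV activation energy, 40 C use, 110 C stress.
af = arrhenius_af(ea_ev=0.7, t_use_c=40.0, t_stress_c=110.0)
print(f"Acceleration factor: {af:.0f}")
# 1000 hours at 110 C would then represent roughly af * 1000 field hours.
```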

Software reliability

Software reliability is a special aspect of reliability engineering. System reliability, by definition, includes all parts of the system, including hardware, software, operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems. There are significant differences, however, in how software and hardware behave. Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original unfailed state. However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.

Despite this difference in the source of failure between software and hardware — software doesn’t wear out — some in the software reliability engineering community believe statistical models used in hardware reliability are nevertheless useful as a measure of software reliability, describing what we experience with software: the longer you run software, the higher the probability you’ll eventually use it in an untested manner and find a latent defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
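A minimal sketch in the spirit of such models, using one simple form associated with Musa's basic execution-time model: failure intensity decays as faults are found and removed, so reliability grows with accumulated execution time. The parameters are hypothetical, not a fit to real data.

```python
import math

def failure_intensity(tau, lambda0, nu0):
    """Failure intensity after tau hours of execution, basic model:
    lambda(tau) = lambda0 * exp(-(lambda0 / nu0) * tau)."""
    return lambda0 * math.exp(-(lambda0 / nu0) * tau)

# Assumed values: initial intensity 0.05 failures/hour, 100 latent faults.
lambda0, nu0 = 0.05, 100.0
for tau in (0.0, 500.0, 2000.0):
    print(f"after {tau:>6.0f} h: {failure_intensity(tau, lambda0, nu0):.4f} failures/h")
```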

As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.

A common reliability metric is the number of software faults, usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) goes down. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used.
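A small sketch of the fault-density bookkeeping described above; the module names and counts are hypothetical illustration data.

```python
# Hypothetical fault counts and code sizes per software module.
modules = [
    {"name": "guidance",  "faults": 12, "lines_of_code": 8500},
    {"name": "telemetry", "faults": 3,  "lines_of_code": 2100},
]

# Fault density in faults per thousand lines of code (KLOC).
for m in modules:
    density = 1000.0 * m["faults"] / m["lines_of_code"]
    print(f'{m["name"]}: {density:.2f} faults per KLOC')

total_faults = sum(m["faults"] for m in modules)
total_kloc = sum(m["lines_of_code"] for m in modules) / 1000.0
print(f"overall: {total_faults / total_kloc:.2f} faults per KLOC")
```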

Testing is even more important for software than hardware. Even the best software development process results in some software faults that are nearly undetectable until tested. As with hardware, software is tested at several levels, starting with individual units, through integration and full-up system testing. Unlike hardware, it is inadvisable to skip levels of software testing. During all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At system level, mean-time-between-failure data is collected and used to estimate reliability. Unlike hardware, performing the exact same test on the exact same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics such as test coverage.

Eventually, the software is integrated with the hardware in the top-level system, and software reliability is subsumed by system reliability. The Software Engineering Institute's Capability Maturity Model is a common means of assessing the overall software development process for reliability and quality purposes.

Reliability operational assessment

After a system is produced, reliability engineering during the system operation phase monitors, assesses, and corrects deficiencies. Data collection and analysis are the primary tools used. When possible, system failures and corrective actions are reported to the reliability engineering organization. The data is constantly analyzed using statistical techniques, such as Weibull analysis and linear regression, to ensure the system reliability meets the specification. Reliability data and estimates are also key inputs for system logistics. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment, and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification.
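As an example of the statistical techniques mentioned, the sketch below evaluates a two-parameter Weibull reliability function, R(t) = exp(-(t/eta)^beta), of the kind fitted to fielded failure data; the parameter values are assumed, illustrative numbers.

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability a unit survives to time t under a Weibull model."""
    return math.exp(-((t / eta) ** beta))

# Assumed parameters: beta > 1 suggests wear-out behavior.
beta, eta = 1.8, 12000.0
for t in (1000.0, 5000.0, 10000.0):
    print(f"R({t:.0f} h) = {weibull_reliability(t, beta, eta):.3f}")
```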

Reliability organizations

Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.

There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such cases, the reliability engineer reports to the product assurance manager or specialty engineering manager.

In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that system reliability work, which is often expensive and time consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project on a day-to-day basis, but is actually employed and paid by a separate organization within the company.

Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.

Certification

The American Society for Quality has a program to become a Certified Reliability Engineer (CRE). Certification is based on education, experience, and a certification test; periodic recertification is required. The body of knowledge for the test includes: reliability management, design evaluation, product safety, statistical tools, design and development, modeling, reliability testing, collecting and using data, etc.

Reliability engineering education

Some universities offer graduate degrees in reliability engineering (e.g., the University of Maryland). Other reliability engineers typically have an engineering degree, which can be in any field of engineering, from an accredited university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a Professional Engineer by the state, but this is not required by most employers. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the IEEE Reliability Society, the American Society for Quality (ASQ), and the Society of Reliability Engineers (SRE).

****************************************************************

Solid mechanics

Solid mechanics is the branch of mechanics, physics, and mathematics that concerns the behavior of solid matter under external actions (e.g., external forces, temperature changes, applied displacements). It is part of a broader study known as continuum mechanics.

A material has a rest shape, and its shape departs from the rest shape under stress. The amount of departure from the rest shape is called deformation; the proportion of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of proportionality is called the modulus of elasticity, or Young's modulus. This region of deformation is known as the linearly elastic region.
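A minimal numeric sketch of the linear (Hooke's law) region for an axially loaded rod, stress = E * strain; all values are assumed, illustrative numbers (E for steel is roughly 200 GPa).

```python
E_STEEL = 200e9          # Young's modulus, Pa (typical value for steel)

force = 10_000.0         # applied axial load, N
area = 1e-4              # cross-sectional area, m^2 (10 mm x 10 mm)
length = 2.0             # original length, m

stress = force / area            # Pa
strain = stress / E_STEEL        # dimensionless, valid in the linear region
elongation = strain * length     # m

print(f"stress = {stress/1e6:.1f} MPa, strain = {strain:.2e}, "
      f"elongation = {elongation*1000:.3f} mm")
```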

Major topics

There are several standard models for how solid materials respond to stress:

1. Elastic – Linearly elastic materials can be described by the linear elasticity equations. A spring obeying Hooke's law is a one-dimensional linear version of a general elastic body. By definition, when the stress is removed, elastic deformation is fully recovered.

2. Viscoelastic – a material that is elastic, but also has damping: on loading, as well as on unloading, some work has to be done against the damping effects. This work is converted into heat within the material. This results in a hysteresis loop in the stress–strain curve.

3. Plastic – a material that, when the stress exceeds a threshold (yield stress), permanently changes its rest shape in response. The material commonly known as "plastic" is named after this property. Plastic deformation is not recovered on unloading, although generally the elastic deformation up to yield is.

One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation.
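For reference, in standard notation (not specific to this document), the equation for a uniform beam reads

\[ E I \, \frac{d^{4} w}{d x^{4}} = q(x), \]

where \(w(x)\) is the transverse deflection, \(q(x)\) the distributed load, \(E\) Young's modulus, and \(I\) the second moment of area of the cross-section. For a cantilever of length \(L\) carrying a tip load \(P\), it yields the familiar tip deflection \( \delta = P L^{3} / (3 E I) \).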

Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them.

Typically, solid mechanics uses linear models to relate stresses and strains (see linear elasticity). However, real materials often exhibit non-linear behavior.

For more specific definitions of stress, strain, and the relationship between them, see strength of materials.

******************************************************************

Statics

[Figure: example of a beam in static equilibrium; the sum of forces and moments is zero.]

Statics is the branch of physics concerned with the analysis of loads (force, torque/moment) on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at rest under the action of external forces in equilibrium. In other words, statics describes how forces are transmitted through the members of an object, such as a crane, from the point where a load is applied (the hanging end) to the point where the object is supported (the base of the crane). When in static equilibrium, the system is either at rest or its center of mass moves at constant velocity.

By Newton's second law, this situation implies that the net force and net torque (also known as moment) on every body in the system are zero, meaning that for every force bearing upon a member, there must be an equal and opposite force. From this constraint, such quantities as stress or pressure can be derived. The net forces equalling zero is known as the first condition for equilibrium, and the net torque equalling zero is known as the second condition for equilibrium. See statically determinate.
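A minimal sketch of the two equilibrium conditions in use: solving for the support reactions of a simply supported beam with one point load, by taking moments about the left support. The span, load, and load position are assumed, illustrative values.

```python
def beam_reactions(span, load, load_pos):
    """Support reactions from sum-of-forces = 0 and sum-of-moments = 0."""
    # Second condition, moments about the left support:
    #   R_right * span - load * load_pos = 0
    r_right = load * load_pos / span
    # First condition, vertical forces:
    #   R_left + R_right - load = 0
    r_left = load - r_right
    return r_left, r_right

r_a, r_b = beam_reactions(span=6.0, load=12.0, load_pos=2.0)  # m, kN, m
print(f"R_A = {r_a:.1f} kN, R_B = {r_b:.1f} kN")  # 8.0 kN and 4.0 kN
```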

Statics is used extensively in the analysis of structures, for instance in architectural and structural engineering. Strength of materials is a related field of mechanics that relies heavily on the application of static equilibrium.

Hydrostatics, also known as fluid statics, is the study of fluids at rest. It analyzes systems in static equilibrium that involve forces exerted by fluids. The characteristic of any fluid at rest is that the force exerted on any particle of the fluid is the same in every direction. If the forces are unequal, the fluid will move in the direction of the resultant force. This concept was first formulated in a slightly extended form by the French mathematician and philosopher Blaise Pascal in 1647, and later became known as Pascal's Law. This law has many important applications in hydraulics. Galileo was also a major figure in the development of hydrostatics.
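A small sketch of the hydrostatic pressure relation p = p0 + rho * g * h, which underlies applications of Pascal's law in hydraulics; the fluid properties and depth are assumed, illustrative values for water.

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2
P_ATM = 101_325.0    # standard atmospheric pressure, Pa

def pressure_at_depth(depth_m, p_surface=P_ATM, rho=RHO_WATER):
    """Absolute pressure at a given depth in a fluid at rest."""
    return p_surface + rho * G * depth_m

print(f"{pressure_at_depth(10.0)/1000:.1f} kPa at 10 m depth")  # ~199.4 kPa
```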

In economics, "static" analysis has substantially the same meaning as in physics. Since the time of Paul Samuelson's Foundations of Economic Analysis (1947), the focus has been on "comparative statics", i.e., the comparison of one static equilibrium to another, with little or no discussion of the process of going between them – except to note the exogenous changes that caused the movement.

In exploration geophysics, "statics" is used as a short form for "static correction", referring to bulk time shifts of a reflection seismogram to correct for the variations in elevation and velocity of the seismic pulse through the weathered and unconsolidated upper layers.

*****************************************************************