MODERN TECHNIQUES OF MATERIALS CHARACTERISATION

By :

B. Ramesh, (Ph.D.), Associate Professor of Mechanical Engineering, St. Joseph’s College of Engineering, Jeppiaar Trust, Chennai-119; Ph.D. Research Scholar, College of Engineering, Guindy Campus, Anna University, Chennai-25.

Compiled by: Mr. B. Ramesh, Associate Professor of Mechanical Engineering, St. Joseph’s College of Engineering, Jeppiaar Trust, Chennai-119


MS 9157 MODERN TECHNIQUES OF MATERIALS CHARACTERISATION    L T P C: 3 0 0 3

AIM:

OBJECTIVE: Characterisation of materials is very important for studying the structure of materials and for interpreting their properties.

UNIT-I METALLOGRAPHIC TECHNIQUES (8)
Specimen preparation techniques, components of microscope, resolution, depth of focus, polarized light, phase contrast, differential interference microscopy, hot stage and quantitative metallographic techniques.

UNIT-II X-RAY DIFFRACTION TECHNIQUES (12)
Crystallography basics, characteristic spectrum, Bragg’s law, diffraction methods – Laue, rotating crystal and powder methods. Intensity of diffracted beams – structure factor calculations and other factors. Cameras – Laue, Debye-Scherrer cameras, Seemann-Bohlin focusing cameras. Diffractometer – general features and optics, proportional, scintillation and Geiger counters.

UNIT-III APPLICATION OF X-RAY DIFFRACTION (9)
Determination of crystal structure, lattice parameter, phase diagram and residual stress – quantitative phase estimation, ASTM catalogue of materials identification.

UNIT-IV ELECTRON MICROSCOPY (8)
Construction and operation of Transmission Electron Microscope – Selected Area Electron Diffraction and image formation, specimen preparation techniques. Construction, modes of operation and application of Scanning Electron Microscope, Energy Dispersive Spectroscopy, Electron Probe Micro Analysis (EPMA), Scanning Tunnelling Microscope (STM) and Atomic Force Microscope (AFM).

UNIT-V CHEMICAL AND THERMAL ANALYSIS (8)
Basic principles, practice and applications of X-ray spectrometry, wavelength dispersive X-ray spectrometry, Auger spectroscopy, secondary ion mass spectroscopy – proton induced X-ray emission spectroscopy, differential thermal analysis, differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA).

Total: 45

TEXTBOOKS:
1. Cullity, B. D., “Elements of X-ray Diffraction”, Addison-Wesley Company Inc., New York, 3rd Edition, 2000.
2. Cherepin and Malik, “Experimental Techniques in Physical Metallurgy”, Asia Publishing Co., Bombay, 1968.

REFERENCE BOOKS:
1. Brandon, D. G., “Modern Techniques in Metallography”, Von Nostrand Inc., NJ, USA, 1986.
2. Thomas, G., “Transmission Electron Microscopy of Metals”, John Wiley, 1996.
3. Weinberg, F., “Tools and Techniques in Physical Metallurgy”, Volumes I & II, Marcel Dekker, 1970.


Unit I

Laws of reflection

Fig. : Diagram of specular reflection

If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:

1. The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.

2. The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal.

Law of refraction

In optics and physics, Snell's law (also known as Descartes' law, the Snell–Descartes law, and the law of refraction) is a formula used to describe the relationship between the angles of incidence and refraction, when referring to light or other waves passing through a boundary between two different isotropic media, such as water and glass. The law says that the ratio of the sines of the angles of incidence and of refraction is a constant that depends on the media. The refractive index can be calculated by rearranging the formula accordingly.

Named after the Dutch mathematician Willebrord Snellius, one of its discoverers, Snell's law states that the ratio of the sines of the angles of incidence and refraction is equal to the ratio of the velocities in the two media, or equivalently to the inverse ratio of the indices of refraction:

sin θ1 / sin θ2 = v1 / v2 = n2 / n1

with each θ as the angle measured from the normal, v as the velocity of light in the respective medium (SI units are meters per second, or m/s) and n as the refractive index (which is unitless) of the respective medium.
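As a quick numerical check of the law, the short sketch below (Python; the air and water indices and the 30° incidence angle are illustrative values, not taken from the text) solves Snell's law for the refraction angle:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Solve Snell's law n1*sin(theta1) = n2*sin(theta2) for theta2 (degrees).
    Returns None when no refracted ray exists (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Light entering water (n2 = 1.33) from air (n1 = 1.00) at 30 degrees
# is bent towards the normal, as in the figure below:
print(round(refraction_angle(1.00, 1.33, 30.0), 1))  # 22.1
```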

Fig. : Refraction of light at the interface between two media of different refractive indices, with n2 > n1. Since the velocity is lower in the second medium (v2 < v1), the angle of refraction θ2 is less than the angle of incidence θ1; that is, the ray in the higher-index medium is closer to the normal.

Numerical aperture

In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. The exact definition of the term varies slightly between different areas of optics.

In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by

NA = n sin θ

where n is the index of refraction of the medium in which the lens is working (1.0 for air, 1.33 for pure water, and up to 1.56 for oils), and θ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. The angular aperture of the lens is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, NA generally refers to object-space NA unless otherwise noted.

In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved is proportional to λ/NA, where λ is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Assuming quality (diffraction limited) optics, lenses with larger numerical apertures collect more light and will generally provide a brighter image, but will provide shallower depth of field.
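A minimal sketch of these two relations (Python; the 67.5° half-angle, the oil index of 1.52 and the 550 nm wavelength are assumed values, and the Abbe limit d = λ/2NA is used here as one common concrete form of the λ/NA proportionality):

```python
import math

def numerical_aperture(n, half_angle_deg):
    # NA = n * sin(theta), with theta the half-angle of the light cone
    return n * math.sin(math.radians(half_angle_deg))

wavelength_nm = 550  # green light (assumed)
for name, n in (("dry", 1.00), ("oil", 1.52)):
    na = numerical_aperture(n, 67.5)
    d = wavelength_nm / (2 * na)  # Abbe resolution limit
    print(f"{name}: NA = {na:.2f}, finest detail ~ {d:.0f} nm")
# dry: NA = 0.92, ~298 nm; oil: NA = 1.40, ~196 nm
```

The higher-index immersion medium raises NA above 1, which is why oil immersion objectives resolve finer detail than dry objectives of the same geometry.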


Fig. : The numerical aperture with respect to a point P depends on the half-angle θ of the maximum cone of light that can enter or exit the lens.

Depth of focus

Depth of focus is a lens optics concept that measures the tolerance of placement of the image plane (the film plane in a camera) in relation to the lens. In a camera, depth of focus indicates the tolerance of the film's displacement within the camera, and is therefore sometimes referred to as "lens-to-film tolerance."

Depth of focus vs depth of field

While the phrase depth of focus was historically used, and is sometimes still used, to mean depth of field, in modern times it is more often reserved for the image-side depth.

Depth of field is the range of distances in object space for which object points are imaged with acceptable sharpness with a fixed position of the image plane (the plane of the film or electronic sensor). Depth of focus can have two slightly different meanings. The first is the distance over which the image plane can be displaced while a single object plane remains in acceptably sharp focus;[1][2] the second is the image-side conjugate of depth of field.[2] With the first meaning, the depth of focus is symmetrical about the image plane; with the second, the depth of focus is greater on the far side of the image plane, though in most cases the distances are approximately equal.

Where depth of field often can be measured in macroscopic units such as meters and feet, depth of focus is typically measured in microscopic units such as fractions of a millimeter or thousandths of an inch.

The same factors that determine depth of field also determine depth of focus, but these factors can have different effects than they have in depth of field. Both depth of field and depth of focus increase with smaller apertures. For distant subjects (beyond macro range), depth of focus is relatively insensitive to focal length and subject distance, for a fixed f-number. In the macro region, depth of focus increases with longer focal length or closer subject distance, while depth of field decreases.
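To make the aperture dependence concrete, here is a small sketch (Python) using the common symmetric approximation that the total depth of focus is about 2Nc, where N is the f-number and c the acceptable circle of confusion; the particular f-numbers and the 0.03 mm value for c are assumptions, not figures from the text:

```python
def depth_of_focus_mm(f_number, coc_mm):
    # Symmetric approximation: total depth of focus ~ 2 * N * c
    return 2 * f_number * coc_mm

for n in (2.8, 8.0, 16.0):
    print(f"f/{n}: depth of focus ~ {depth_of_focus_mm(n, 0.03):.2f} mm")
# f/2.8: ~0.17 mm, f/8: ~0.48 mm, f/16: ~0.96 mm -- larger at smaller apertures
```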

Refractive index

The refractive index or index of refraction of a substance is a measure of the speed of light in that substance. It is expressed as a ratio of the speed of light in vacuum relative to that in the considered medium.[note 1] The velocity at which light travels in vacuum is a physical constant, and the fastest speed at which energy or information can be transferred. However, light travels slower through any given material, or medium, that is not vacuum. (See: light in a medium).[1][2][3][4]

A simple, mathematical description of refractive index is as follows:

n = velocity of light in a vacuum / velocity of light in medium

Hence, the refractive index of water is 1.33, meaning that light travels 1.33 times as fast in a vacuum as it does in water.

The refractive index, n, of a medium is defined as the ratio of the speed, c, of a wave phenomenon such as light or sound in a reference medium to the phase speed, vp, of the wave in the medium in question:

n = c / vp

Fig. : Refraction of light at the interface between two media.


Contrast

Contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background. In visual perception of the real world, contrast is determined by the difference in the color and brightness of the object and other objects within the same field of view.

Fig. : Changes in the amount of contrast in a photo

Birefringence

Birefringence, or double refraction, is the decomposition of a ray of light (and other electromagnetic radiation) into two rays (the ordinary ray and the extraordinary ray) when it passes through certain types of material, such as calcite crystals or boron nitride, depending on the polarization of the light. This effect can occur only if the structure of the material is anisotropic (directionally dependent). If the material has a single axis of anisotropy or optical axis (i.e. it is uniaxial), birefringence can be formalized by assigning two different refractive indices to the material for different polarizations. The birefringence magnitude is then defined by

Δn = ne − no

where ne and no are the refractive indices for polarizations parallel (extraordinary) and perpendicular (ordinary) to the axis of anisotropy respectively.[1]
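As a worked example (Python; the indices below are commonly quoted values for calcite at about 589 nm, stated here as an assumption rather than taken from the text):

```python
# Birefringence magnitude for a uniaxial crystal: delta_n = n_e - n_o
n_o = 1.658  # ordinary index (polarisation perpendicular to the optic axis)
n_e = 1.486  # extraordinary index (polarisation parallel to the optic axis)

delta_n = n_e - n_o
print(round(delta_n, 3))  # -0.172: calcite is a negative uniaxial crystal
```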

The reason for birefringence is the fact that in anisotropic media the electric field vector and the dielectric displacement can be nonparallel (namely for the extraordinary polarisation), although being linearly related.

Birefringence can also arise in magnetic, not dielectric, materials, but substantial variations in magnetic permeability of materials are rare at optical frequencies. Liquid crystal materials as used in Liquid Crystal Displays (LCDs) are also birefringent.[2]

Fig.: Rays passing through a positively birefringent material. The optical axis is perpendicular to the direction of the rays, so the ray polarized perpendicularly to the optic axis has a greater refractive index than the ray polarized parallel to it.


Polarization

Polarization (also polarisation) is a property of certain types of waves that describes the orientation of their oscillations. Electromagnetic waves, such as light, and gravitational waves exhibit polarization; acoustic waves (sound waves) in a gas or liquid do not have polarization because the direction of vibration and direction of propagation are the same.

By convention, the polarization of light is described by specifying the orientation of the wave's electric field at a point in space over one period of the oscillation. When light travels in free space, in most cases it propagates as a transverse wave—the polarization is perpendicular to the wave's direction of travel. In this case, the electric field may be oriented in a single direction (linear polarization), or it may rotate as the wave travels (circular or elliptical polarization). In the latter cases, the oscillations can rotate either towards the right or towards the left in the direction of travel.

Polarization state

The shape traced out in a fixed plane by the electric vector as such a plane wave passes over it (a Lissajous figure) is a description of the polarization state. The following figures show some examples of the evolution of the electric field vector (black) with time (the vertical axes) at a particular point in space, along with its x and y components (red/left and blue/right), and the path traced by the tip of the vector in the plane (purple). The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation.

In the leftmost figure above, the two orthogonal (perpendicular) components are in phase. In this case the ratio of the strengths of the two components is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarization. The direction of this line depends on the relative amplitudes of the two components.

In the middle figure, the two orthogonal components have exactly the same amplitude and are exactly ninety degrees out of phase. In this case one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be ninety degrees ahead of the y component or it can be ninety degrees behind the y component. In this special case the electric vector traces out a circle in the plane, so this special case is called circular polarization. The direction the field rotates in depends on which of the two phase relationships exists. These cases are called right-hand circular polarization and left-hand circular polarization, depending on which way the electric vector rotates and the chosen convention.


Fig. : Linear, circular and elliptical polarization states.

Another case is when the two components are not in phase and either do not have the same amplitude or are not ninety degrees out of phase, though their phase offset and their amplitude ratio are constant.[2] This kind of polarization is called elliptical polarization because the electric vector traces out an ellipse in the plane (the polarization ellipse). This is shown in the above figure on the right.
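The three cases can be reproduced numerically. The sketch below (Python with numpy; the amplitude and phase values are illustrative choices) samples the tip of the electric vector over one period for each phase relationship described above:

```python
import numpy as np

def field_tip(ex0, ey0, phase_deg, samples=8):
    """Trace the (Ex, Ey) tip of the electric vector over one period for a
    plane wave with x/y amplitudes ex0, ey0 and a constant phase offset."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return ex0 * np.cos(t), ey0 * np.cos(t + np.radians(phase_deg))

cases = [("linear",     (1.0, 0.5, 0.0)),   # in phase -> straight line
         ("circular",   (1.0, 1.0, 90.0)),  # equal amplitude, 90 deg -> circle
         ("elliptical", (1.0, 0.5, 45.0))]  # anything else -> ellipse
for label, args in cases:
    ex, ey = field_tip(*args)
    print(label, np.round(ex, 2), np.round(ey, 2))
```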

The "Cartesian" decomposition of the electric field into x and y components is, of course, arbitrary. Plane waves of any polarization can be described instead by combining any two orthogonally polarized waves, for instance waves of opposite circular polarization. The Cartesian polarization decomposition is natural when dealing with reflection from surfaces, birefringent materials, or synchrotron radiation. The circularly polarized modes are a more useful basis for the study of light propagation in stereoisomers.

Though this section discusses polarization for idealized plane waves, all the above is a very accurate description for most practical optical experiments which use TEM modes, including Gaussian optics.


Unpolarized light

Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarized. If there is partial correlation between the emitters, the light is partially polarized. If the polarization is consistent across the spectrum of the source, partially polarized light can be described as a superposition of a completely unpolarized component, and a completely polarized one. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse.

The biological microscope usually is a transmission microscope with light coming from below.

The metallurgical microscope usually is a reflection microscope with light coming from above, either from an external light source above the sample or through the objective lens using beam splitters.


Aberrations:

Summary of Aberrations

In an ideal optical system, all rays of light from a point in the object plane would converge to the same point in the image plane, forming a clear image. The influences which cause different rays to converge to different points are called aberrations.


Etchants:

In industry, etching, also known as chemical milling, is the process of using acids, bases or other chemicals to dissolve unwanted materials such as metals, semiconductor materials or glass. This process has been used on a wide variety of metals with depths of metal removal as large as 12 mm (0.5 in).

Common etchants

For aluminium: sodium hydroxide.

For steels: hydrochloric and nitric acids; ferric chloride for stainless steels; Nital (a mixture of nitric acid and ethanol, methanol, or methylated spirits) for mild steels. 2% Nital is a common etchant for plain carbon steels.

For copper: cupric chloride, ferric chloride, ammonium persulfate, ammonia, 25-50% nitric acid, or hydrochloric acid and hydrogen peroxide.

For silica: hydrofluoric acid (HF) is a very efficient etchant for silicon dioxide. It is, however, very dangerous if it comes into contact with the body.

Peroxymonosulfuric acid, also known as persulfuric acid, peroxysulfuric acid, or as Caro's acid, is H2SO5, a liquid at room temperature.

Ammonium, sodium, and potassium salts of H2SO5 are used in the plastics industry as polymerization initiators, etchants, desizing agents, soil conditioner, and for decolorizing and deodorizing oils.

Optical microscope

The optical microscope, often referred to as the "light microscope", is a type of microscope which uses visible light and a system of lenses to magnify images of small samples. Optical microscopes are the oldest design of microscope and were designed around 1600. Basic optical microscopes can be very simple, although there are many complex designs which aim to improve resolution and sample contrast. Historically optical microscopes were easy to develop and are popular because they use visible light, so the sample can be directly observed by eye.

The image from an optical microscope can be captured by normal light-sensitive cameras to generate a micrograph. Originally images were captured by photographic film, but modern developments in CMOS and, later, charge-coupled device (CCD) cameras allow the capture of digital images. Purely digital microscopes are now available, which use just a CCD camera to examine a sample; the image is shown directly on a computer screen without the need for eyepieces.

Alternatives to optical microscopy which do not use visible light include scanning electron microscopy and transmission electron microscopy.

Components

Fig. : Basic optical transmission microscope elements (1990s)

All modern optical microscopes designed for viewing samples by transmitted light share the same basic components of the light path, listed here in the order the light travels through them:


Light source, a light or a mirror (7)
Diaphragm and condenser lens (8)
Objective (3)
Ocular lens (eyepiece) (1)

In addition the vast majority of microscopes have the same 'structural' components:

Objective turret (to hold multiple objective lenses) (2)
Stage (to hold the sample) (9)
Focus wheel to move the stage (4 – coarse adjustment, 5 – fine adjustment)

These entries are numbered according to the figure above.

Ocular (eyepiece)

The ocular, or eyepiece, is a cylinder containing two or more lenses; its function is to bring the image into focus for the eye. The eyepiece is inserted into the top end of the body tube. Eyepieces are interchangeable and many different eyepieces can be inserted with different degrees of magnification. Typical magnification values for eyepieces include 2×, 5× and 10×. In some high performance microscopes, the optical configuration of the objective lens and eyepiece are matched to give the best possible optical performance. This occurs most commonly with apochromatic objectives.

Objective

The objective is a cylinder containing one or more lenses that are typically made of glass; its function is to collect light from the sample. At the lower end of the microscope tube one or more objective lenses are screwed into a circular nose piece which may be rotated to select the required objective lens. Typical magnification values of objective lenses are 4×, 5×, 10×, 20×, 40×, 50×, 60× and 100×. Some high performance objective lenses may require matched eyepieces to deliver the best optical performance.

Stage

The stage is a platform below the objective which supports the specimen being viewed. In the center of the stage is a hole through which light passes to illuminate the specimen. The stage usually has arms to hold slides (rectangular glass plates with typical dimensions of 25 mm by 75 mm, on which the specimen is mounted).

Light source

Many sources of light can be used. At its simplest, daylight is directed via a mirror. Most microscopes, however, have their own controllable light source - normally a halogen lamp.


Condenser

The condenser is a lens designed to focus light from the illumination source onto the sample. The condenser may also include other features, such as a diaphragm and/or filters, to manage the quality and intensity of the illumination. For illumination techniques like dark field, phase contrast and differential interference contrast microscopy additional optical components must be precisely aligned in the light path.

Frame

The whole of the optical assembly is traditionally attached to a rigid arm, which in turn is attached to a robust U-shaped foot to provide the necessary rigidity. The arm angle may be adjustable to allow the viewing angle to be adjusted.

The frame provides a mounting point for various microscope controls. Normally this will include controls for focusing, typically a large knurled wheel to adjust coarse focus, together with a smaller knurled wheel to control fine focus. Other features may be lamp controls and/or controls for adjusting the condenser.

Objective lenses

On a typical compound optical microscope there are a selection of lenses available for different applications. Many different objective lenses with different properties and magnification are available.

Typically there will be around three objective lenses: a low power lens for scanning the sample, a medium power lens for normal observation and a high power lens for detailed observation. The typical magnification of objective lenses depends on the intended application; normal groups of lens magnifications may be [4×, 10×, 20×] for low magnification work and [10×, 40×, 100×] for high magnification work.

Objective lenses with higher magnifications normally have a higher numerical aperture and a shorter depth of field in the resulting image.

Oil immersion objective

Some microscopes make use of oil immersion lenses. These objectives must be used with oil (immersion oil) between the objective lens and the sample. The refractive index of the immersion oil is higher than that of air, and this allows the objective lens to have a larger numerical aperture. The larger numerical aperture allows collection of more light, making detailed observation of faint details possible.

Immersion lenses are designed so that the refractive index of the oil and of the cover slip are closely matched so that the light is transmitted from the specimen to the outer face of the objective lens with minimal refraction. An oil immersion lens usually has a magnification of 40 to 100×.


Magnification

The actual power or magnification of a compound optical microscope is the product of the powers of the ocular (eyepiece) and the objective lens. The maximum normal magnifications of the ocular and objective are 10× and 100× respectively, giving a final magnification of 1000×.

Magnification and micrographs

When using a camera to capture a micrograph the effective magnification of the image must take into account the size of the image. This is independent of whether it is on a print from a film negative or displayed digitally on a computer screen.

In the case of photographic film cameras the calculation is simple; the final magnification is the product of: the objective lens magnification, the camera optics magnification and the enlargement factor of the film print relative to the negative. A typical value of the enlargement factor is around 5× (for the case of 35mm film and a 6×4 inch print).

In the case of digital cameras the size of the pixels in the CMOS or CCD detector and the size of the pixels on the screen have to be known. The enlargement factor from the detector to the pixels on screen can then be calculated. As with a film camera the final magnification is the product of: the objective lens magnification, the camera optics magnification and the enlargement factor.
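A short sketch of both calculations (Python; the 2.5× camera optics, the pixel pitches and the 40× objective are illustrative assumptions, while the 5× print enlargement matches the 35 mm film example above):

```python
def film_magnification(m_objective, m_camera_optics, m_enlargement):
    # Final magnification = objective x camera optics x print enlargement
    return m_objective * m_camera_optics * m_enlargement

def digital_magnification(m_objective, m_camera_optics,
                          detector_pixel_um, screen_pixel_um):
    # For digital capture the enlargement factor is the ratio of the
    # screen pixel pitch to the detector pixel pitch
    return m_objective * m_camera_optics * (screen_pixel_um / detector_pixel_um)

print(film_magnification(40, 2.5, 5))           # 500x on the print
print(digital_magnification(40, 2.5, 5, 250))   # 5000x on the screen
```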

Operation:


Fig. : Optical path in a typical microscope

The optical components of a modern microscope are very complex and for a microscope to work well, the whole optical path has to be very accurately set up and controlled. Despite this, the basic operating principles of a microscope are quite simple.

The objective lens is, at its simplest, a very high powered magnifying glass, i.e. a lens with a very short focal length. This is brought very close to the specimen being examined so that the light from the specimen comes to a focus about 160 mm inside the microscope tube. This creates an enlarged image of the subject. This image is inverted and can be seen by removing the eyepiece and placing a piece of tracing paper over the end of the tube. By carefully focusing a brightly lit specimen, a highly enlarged image can be seen. It is this real image that is viewed by the eyepiece lens, which provides further enlargement.

In most microscopes, the eyepiece is a compound lens, with one component lens near the front and one near the back of the eyepiece tube. This forms an air-separated couplet. In many designs, the virtual image comes to a focus between the two lenses of the eyepiece, the first lens bringing the real image to a focus and the second lens enabling the eye to focus on the virtual image.

In all microscopes the image is intended to be viewed with the eyes focused at infinity (mind that the position of the eye in the above figure is determined by the eye's focus). Headaches and tired eyes after using a microscope are usually signs that the eye is being forced to focus at a close distance rather than at infinity.

The essential principle of the microscope is that an objective lens with very short focal length (often a few mm) is used to form a highly magnified real image of the object. Here, the quantity of interest is linear magnification, and this number is generally inscribed on the objective lens casing. In practice, today, this magnification is carried out by means of two lenses: the objective lens which creates an image at infinity, and a second weak tube lens which then forms a real image in its focal plane.[3]
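For a fixed-tube-length instrument, the objective's linear magnification is approximately the tube length divided by the objective focal length. A sketch of that relation (Python; the 160 mm tube length is the figure quoted above, while the focal lengths are assumed round values):

```python
TUBE_LENGTH_MM = 160.0  # classical fixed tube length quoted above

for f_mm in (16.0, 4.0, 1.6):  # assumed objective focal lengths
    m = TUBE_LENGTH_MM / f_mm  # magnification ~ tube length / focal length
    print(f"f = {f_mm} mm objective -> ~{m:.0f}x")
# 16 mm -> ~10x, 4 mm -> ~40x, 1.6 mm -> ~100x
```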


Aberrations of lenses

Lenses do not form perfect images, and there is always some degree of distortion or aberration introduced by the lens which causes the image to be an imperfect replica of the object. Careful design of the lens system for a particular application ensures that the aberration is minimized. There are several different types of aberration which can affect image quality.

Spherical aberration

Spherical aberration occurs because spherical surfaces are not the ideal shape with which to make a lens, but they are by far the simplest shape to which glass can be ground and polished and so are often used. Spherical aberration causes beams parallel to, but distant from, the lens axis to be focused in a slightly different place than beams close to the axis. This manifests itself as a blurring of the image. Lenses in which closer-to-ideal, non-spherical surfaces are used are called aspheric lenses. These were formerly complex to make and often extremely expensive, but advances in technology have greatly reduced the manufacturing cost for such lenses. Spherical aberration can be minimised by careful choice of the curvature of the surfaces for a particular application: for instance, a plano-convex lens which is used to focus a collimated beam produces a sharper focal spot when used with the convex side towards the beam source.

Coma

Another type of aberration is coma, which derives its name from the comet-like appearance of the aberrated image. Coma occurs when an object off the optical axis of the lens is imaged, where rays pass through the lens at an angle to the axis θ. Rays which pass through the centre of the lens of focal length f are focused at a point with distance f tan θ from the axis. Rays passing through the outer margins of the lens are focused at different points, either further from the axis (positive coma) or closer to the axis (negative coma). In general, a bundle of parallel rays passing through the lens at a fixed distance from the centre of the lens are focused to a ring-shaped image in the focal plane, known as a comatic circle. The sum of all these circles results in a V-shaped or comet-like flare. As with spherical aberration, coma can be minimised (and in some cases eliminated) by choosing the curvature of the two lens surfaces to match the application. Lenses in which both spherical aberration and coma are minimised are called bestform lenses.

Chromatic aberration

Chromatic aberration is caused by the dispersion of the lens material—the variation of its refractive index, n, with the wavelength of light. Since, from the formulae above, f is dependent upon n, it follows that different wavelengths of light will be focused to different positions. Chromatic aberration of a lens is seen as fringes of colour around the image. It can be minimised by using an achromatic doublet (or achromat) in which two materials with differing dispersion are bonded together to form a single lens. This reduces the amount of chromatic aberration over a certain range of wavelengths, though it does not produce perfect correction. The use of achromats was an important step in the development of the optical microscope. An apochromat is a lens or lens system which has even better correction of chromatic aberration, combined with improved correction of spherical aberration. Apochromats are much more expensive than achromats.

Different lens materials may also be used to minimise chromatic aberration, such as specialised coatings or lenses made from the crystal fluorite. This naturally occurring substance has the highest known Abbe number, indicating that the material has low dispersion.

Other types of aberration

Other kinds of aberration include field curvature, barrel and pincushion distortion, and astigmatism.


Petzval field curvature, named for Joseph Petzval, describes the optical aberration in which a flat object normal to the optical axis (or a non-flat object past the hyperfocal distance) cannot be brought into focus on a flat image plane. Consider an "ideal" single-element lens system for which all planar wave fronts are focused to a point a distance f from the lens. Placing this lens the distance f from a flat image sensor, image points near the optical axis will be in perfect focus, but rays off axis will come into focus before the image sensor, dropping off by the cosine of the angle they make with the optical axis. This is less of a problem when the imaging surface is spherical, as in the human eye. Most photographic lenses are designed to minimize field curvature, and so effectively have a focal length that increases with ray angle.

Fig. : Field curvature: the image plane is not flat.

Barrel distortion

In "barrel distortion", image magnification decreases with distance from the optical axis. The apparent effect is that of an image which has been mapped around a sphere (or barrel). Fisheye lenses, which take hemispherical views, utilize this type of distortion as a way to map an infinitely wide object plane into a finite image area.


Fig. : Barrel distortion simulation

Pincushion distortion

In "pincushion distortion", image magnification increases with the distance from the optical axis. The visible effect is that lines that do not go through the centre of the image are bowed inwards, towards the centre of the image, like a pincushion. A certain amount of pincushion distortion is often found with visual optical instruments, e.g. binoculars, where it serves to eliminate the globe effect.

Fig. : Pincushion distortion simulation


Phase contrast microscopy

Phase contrast microscopy is an optical microscopy illumination technique in which small phase shifts in the light passing through a transparent specimen are converted into amplitude or contrast changes in the image.

A phase contrast microscope does not require staining to view the slide. This type of microscope made it possible to study the cell cycle.

As light travels through a medium other than vacuum, interaction with this medium causes its amplitude and phase to change in a way which depends on properties of the medium. Changes in amplitude give rise to familiar absorption of light, which is wavelength dependent and gives rise to colours. The human eye measures only the energy of light arriving on the retina, so changes in phase are not easily observed, yet often these changes in phase carry a large amount of information.

The same holds in a typical microscope, i.e., although the phase variations introduced by the sample are preserved by the instrument (at least in the limit of the perfect imaging instrument) this information is lost in the process which measures the light. In order to make phase variations observable, it is necessary to combine the light passing through the sample with a reference so that the resulting interference reveals the phase structure of the sample.

This was first realized by Frits Zernike during his study of diffraction gratings. During these studies he appreciated both that it is necessary to interfere with a reference beam, and that to maximize the contrast achieved with the technique, it is necessary to introduce a phase shift to this reference so that the no-phase-change condition gives rise to completely destructive interference.

He later realised that the same technique can be applied to optical microscopy. The necessary phase shift is introduced by rings etched accurately onto glass plates so that they introduce the required phase shift when inserted into the optical path of the microscope. When in use, this technique allows phase of the light passing through the object under study to be inferred from the intensity of the image produced by the microscope. This is the phase-contrast technique.

In optical microscopy many objects such as cell parts in protozoans, bacteria and sperm tails are essentially fully transparent unless stained. (Staining is a difficult and time consuming procedure which sometimes, but not always, destroys or alters the specimen.) The difference in densities and composition within the imaged objects however often give rise to changes in the phase of light passing through them, hence they are sometimes called "phase objects". Using the phase-contrast technique makes these structures visible and allows their study with the specimen still alive.

This phase contrast technique proved to be such an advancement in microscopy that Zernike was awarded the Nobel prize (physics) in 1953.


Background

The technique was invented by Frits Zernike in the 1930s for which he received the Nobel prize in physics in 1953. Phase-contrast microscopy is a mode available on most advanced light microscopes and is most commonly used to provide contrast of transparent specimens such as living cells or small organisms.

Description

1. Condenser annulus
2. Object plane
3. Phase plate
4. Primary image plane


A practical implementation of phase-contrast illumination consists of a phase ring (located in a conjugated aperture plane somewhere behind the front lens element of the objective) and a matching annular ring, which is located in the primary aperture plane (location of the condenser's aperture).

Two selected light rays, which are emitted from one point inside the lamp's filament, get focused by the field lens exactly inside the opening of the condenser annular ring. Since this location is precisely in the front focal plane of the condenser, the two light rays are then refracted in such way that they exit the condenser as parallel rays. Assuming that the two rays in question are neither refracted nor diffracted in the specimen plane (location of microscope slide), they enter the objective as parallel rays. Since all parallel rays are focused in the back focal plane of the objective, the back focal plane is a conjugated aperture plane to the condenser's front focal plane (also location of the condenser annulus). To complete the phase setup, a phase plate is positioned inside the back focal plane in such a way that it lines up nicely with the condenser annulus.

Only through correctly centering the two elements can phase contrast illumination be established. A phase centering telescope that temporarily replaces one of the oculars is used, first to focus the phase element plane and then center the annular illumination ring with the corresponding ring of the phase plate.

An interesting variant in phase contrast design was once implemented (by the microscope maker C. Baker, London) in which the conventional annular form of the two elements was replaced by a cross-shaped transmission slit in the substage and corresponding cross-shaped phase plates in the conjugate plane in the objectives. The advantage claimed here was that only a single slit aperture was needed for all phase objective magnifications. Recentring and rotational alignment of the cross by means of the telescope was nevertheless needed for each change in magnification.

Differential interference contrast microscopy

Differential interference contrast microscopy (DIC), also known as Nomarski Interference Contrast (NIC) or Nomarski microscopy, is an optical microscopy illumination technique used to enhance the contrast in unstained, transparent samples. DIC works on the principle of interferometry to gain information about the optical density of the sample, to see otherwise invisible features. A relatively complex lighting scheme produces an image with the object appearing black to white on a grey background. This image is similar to that obtained by phase contrast microscopy but without the bright diffraction halo.

DIC works by separating a polarised light source into two orthogonally polarized, mutually coherent parts which are spatially displaced (sheared) at the sample plane, and recombined before observation. The interference of the two parts at recombination is sensitive to their optical path difference (i.e. the product of refractive index and geometric path length). Adding an adjustable offset phase determining the interference at zero optical path difference in the sample, the contrast is proportional to the path length gradient along the shear direction, giving the appearance of a three-dimensional physical relief corresponding to the variation of optical density of the sample, emphasising lines and edges, though not providing a topographically accurate image.

The light path

Fig. : The components of the basic differential interference contrast microscope setup.


1. Unpolarised light enters the microscope and is polarised at 45°.

Polarised light is required for the technique to work.

2. The polarised light enters the first Nomarski-modified Wollaston prism and is separated into two rays polarised at 90° to each other, the sampling and reference rays.

Wollaston prisms are a type of prism made of two layers of a crystalline substance, such as quartz, which, due to the variation of refractive index depending on the polarisation of the light, splits the light according to its polarisation. The Nomarski prism causes the two rays to come to a focal point outside the body of the prism, and so allows greater flexibility when setting up the microscope, as the prism can be actively focused.

3. The two rays are focused by the condenser for passage through the sample. These two rays are focused so they will pass through two adjacent points in the sample, around 0.2 μm apart.

The sample is effectively illuminated by two coherent light sources, one with 0° polarisation and the other with 90° polarisation. These two illuminations are, however, not quite aligned, with one lying slightly offset with respect to the other.

Fig. : The route of light through a DIC microscope. The two light beams should be parallel between condenser and objective.

4. The rays travel through adjacent areas of the sample, separated by the shear. The separation is normally similar to the resolution of the microscope. They will experience different optical path lengths where the areas differ in refractive index or thickness. This causes a change in phase of one ray relative to the other, due to the delay experienced by the wave in the more optically dense material.

The passage of many pairs of rays through pairs of adjacent points in the sample (and their absorbance, refraction and scattering by the sample) means an image of the sample will now be carried by both the 0° and 90° polarised light. These, if looked at individually, would be bright field images of the sample, slightly offset from each other. The light also carries information about the image invisible to the human eye, the phase of the light. This is vital later. The different polarisations prevent interference between these two images at this point.

5. The rays travel through the objective lens and are focused for the second Nomarski-modified Wollaston prism.

6. The second prism recombines the two rays into one polarised at 135°. The combination of the rays leads to interference, brightening or darkening the image at that point according to the optical path difference.

This prism overlays the two bright field images and aligns their polarisations so they can interfere. However, the images do not quite line up because of the offset in illumination - this means that instead of interference occurring between 2 rays of light that passed through the same point in the specimen, interference occurs between rays of light that went through adjacent points which therefore have a slightly different phase. Because the difference in phase is due to the difference in optical path length, this recombination of light causes "optical differentiation" of the optical path length, generating the image seen.

Advantages and disadvantages

DIC has strong advantages in uses involving live and unstained biological samples, such as a smear from a tissue culture or individual water-borne single-celled organisms. Its resolution and clarity in conditions such as this are unrivaled among standard optical microscopy techniques.

The main limitation of DIC is its requirement for a transparent sample of fairly similar refractive index to its surroundings. DIC is unsuitable (in biology) for thick samples, such as tissue slices, and highly pigmented cells. DIC is also unsuitable for most non biological uses because of its dependence on polarisation, which many physical samples would affect.

One non-biological area where DIC is useful is in the analysis of planar silicon semiconductor processing. The thin (typically 100-1000 nm) films in silicon processing are often mostly transparent to visible light (e.g., silicon dioxide, silicon nitride and polycrystalline silicon), and defects in them or contamination lying on top of them become more visible. This also enables the determination of whether a feature is a pit in the substrate material or a blob of foreign material on top. Etched crystalline features gain a particularly striking appearance under DIC.

Image quality, when used under suitable conditions, is outstanding in resolution and almost entirely free of artifacts unlike phase contrast. However analysis of DIC images must always take into account the orientation of the Wollaston prisms and the apparent lighting direction, as features parallel to this will not be visible. This is, however, easily overcome by simply rotating the sample and observing changes in the image.


Unit IV

Primary electron:

A primary electron is usually a high energy electron which starts outside the crystal (e.g. in the beam of an electron microscope). It may be elastically scattered or may excite various processes in the crystal by being inelastically scattered.

Secondary electrons of various types can be emitted from a solid following its bombardment with primary electrons.

Secondary electron:

A secondary electron arises as a result of the interaction of a primary electron with a specimen. In principle the term refers to all electrons emitted from a specimen after it has been bombarded with primary electrons, X-rays or other radiation. In practice the phrase most commonly refers to low-energy electrons (kinetic energy less than 50 eV) emitted from the specimen in a scanning electron microscope (SEM).

Backscattered electron:

A backscattered electron is a high energy primary electron which suffers large angle (> 90°) scattering and re-emerges from the entry surface of a specimen. Backscattered electrons usually have energies close to that of the primary electron beam. They are of greatest interest to SEM users, giving surface sensitive information.

Auger electron:

An Auger electron has characteristic energy related to the electronic transitions within the atom which have caused it to be emitted. Emission of an Auger electron is an alternative to the emission of a characteristic X-ray. The energy of an Auger electron, EA, is given by

EA = E1 - E2 - E3, where

E1 = energy of atom with inner-shell vacancy, E2 = energy of atom with outer-shell vacancy, and E3 = binding energy of emitted (Auger) electron.

Binding energy:

The binding energy of a particular electron is the energy which would be required to remove it from the atom to an infinite distance.


e.g. K (1s) electrons in aluminium = 1559 eV

Auger emission example:

If a K-shell electron is knocked out of an atom, the resulting inner-shell (K) vacancy can be filled by an outer-shell (e.g. L2) electron. If the resulting energy difference is lost by the emission of an L3 electron, this will be known as a K-L2,L3 Auger electron.

It will have characteristic energy

EK-L2,L3 = EK - EL2 - Ebinding, L3
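A worked sketch of this relation (Python; the K-shell value is the aluminium figure given above, while the L2 and L3 binding energies are rough assumed values, and atomic relaxation effects are ignored):

```python
# Kinetic energy of a K-L2,L3 Auger electron: E = E_K - E_L2 - E_binding(L3)
E_K  = 1559.0  # Al K (1s) binding energy in eV (from the text)
E_L2 = 73.0    # Al L2 binding energy in eV (assumed)
E_L3 = 73.0    # Al L3 binding energy in eV (assumed)

E_auger = E_K - E_L2 - E_L3
print(E_auger)  # ~1413 eV kinetic energy, characteristic of aluminium
```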

Characteristic X-ray:

A characteristic X-ray can be emitted from an excited atom when an outer-shell (e.g. L) electron jumps in to fill an inner-shell (e.g. K) vacancy. It has an energy characteristic of the atom and can therefore be used for analytical purposes. Its energy is the difference between the energies of the atom with an inner-shell vacancy and the same atom with an outer-shell vacancy.

The emission of the excess energy when an atom de-excites (decays or relaxes) can alternatively be achieved by the production of an Auger electron.

Cathodoluminescence:

Cathodoluminescence is the emission of light in response to irradiation by electrons.


Figure : Generalized illustration of interaction volumes for various electron-specimen interactions. Auger electrons (not shown) emerge from a much thinner region of the sample surface (maximum depth about 50 Å) than do secondary electrons (50-500 Å).

Differences between TEM and SEM:

TEM: Electron beam passes through a thin sample.
SEM: Electron beam scans over the surface of the sample.

TEM: Specially prepared thin samples or particulate material are supported on TEM grids.
SEM: Sample can be of any thickness and is mounted on an aluminum stub.

TEM: Specimen stage is halfway down the column.
SEM: Specimen stage is in the chamber at the bottom of the column.

TEM: Image is shown on a fluorescent screen.
SEM: Image is shown on a TV monitor.

TEM: Image is a two-dimensional projection of the sample.
SEM: Image is of the surface of the sample.

TEM: Resolution about 0.2 nm.
SEM: Resolution about 2 nm.

TEM: Magnification up to about 500,000×.
SEM: Magnification up to about 200,000×.


A TEM (transmission electron microscope) images using the electrons that pass through the specimen. A TEM image takes a bit more interpretation, as we are not used to seeing images formed by light that has passed through things.

A SEM (scanning electron microscope) images using the electrons reflected from a specimen. The image from an SEM thus looks somewhat like a normal photo (we’re used to imaging using the light reflected from objects).

A TEM is a 'Transmission Electron Microscope'. A very thin specimen, coated in gold, is inserted in the specimen chamber of the microscope. An electron beam is then directed through the specimen and produces a negative image on a plate with a phosphor coating. Photographs are taken of the image by opening a trap door in the plate and exposing negative film or electronic sensors for a digital image.

A SEM is a 'Scanning Electron Microscope'. This is the type where you insert a specimen into the scanning chamber and an electron beam scans the surface of the specimen. The electron beam knocks electrons away from the specimen and a sensor captures them. The captured electrons are then converted electronically to an image displayed on a monitor. Pictures of this electronic image can also be printed.

Difference Between AFM and STM

AFM vs STM

AFM refers to Atomic Force Microscope and STM refers to Scanning Tunneling Microscope. The development of these two microscopes is considered a revolution in the atomic and molecular fields.

The AFM captures precise images by moving a nanometer-sized tip across the surface of the sample, while the STM captures images using quantum tunneling.

Of the two microscopes, the Scanning Tunneling Microscope was the first to be developed.

In AFM the probe makes direct contact with the surface or senses incipient chemical bonding, whereas the STM images indirectly, by measuring the quantum tunneling current between the probe and the sample.

Another difference is that the tip in AFM gently touches the surface, whereas in STM the tip is kept at a short distance from the surface.


Unlike the STM, the AFM does not measure the tunneling current but only measures the small force between the surface and the tip.

It has also been seen that AFM resolution is better than that of the STM; this is why AFM is widely used in nanotechnology. As regards the dependence between force and distance, the AFM case is more complex than that of the STM.

While the Scanning Tunneling Microscope is normally applicable only to conductors, the Atomic Force Microscope is applicable to both conductors and insulators. The AFM works well in liquid and gas environments, whereas the STM operates only in high vacuum.

When compared to the STM, the AFM gives better topographic contrast, direct height measurement, and better resolution of surface features.

Summary

1. AFM captures precise images by moving a nanometer-sized tip across the surface of the sample. The STM captures images using quantum tunneling.

2. In AFM the probe makes direct contact with the surface or senses incipient chemical bonding. The STM images indirectly, by measuring the quantum tunneling current between the probe and the sample.

3. The tip in AFM gently touches the surface, whereas in STM the tip is kept at a short distance from the surface.

4. AFM resolution is better than that of the STM. This is why AFM is widely used in nanotechnology.

5. While the Scanning Tunneling Microscope is normally applicable only to conductors, the Atomic Force Microscope is applicable to both conductors and insulators.

6. The AFM works well in liquid and gas environments, whereas the STM operates only in high vacuum.

7. Of the two microscopes, the Scanning Tunneling Microscope was the first to be developed.


STM Vs AFM

Scanning Tunneling Microscopy (STM) and Atomic Force Microscopy (AFM), or scanning force microscopy (SFM), are forms of scanning probe microscopy, a technique that forms images of surfaces using a physical probe that scans the specimen. An image of the surface is obtained by mechanically moving the probe in a raster scan of the specimen, line by line, and recording the probe-surface interaction as a function of position. STM is a powerful instrument for imaging surfaces at the atomic level, while AFM is one of the primary tools for imaging, measuring, and manipulating matter at the nanoscale.

INVENTED: Scanning Tunneling Microscopy (STM) was invented in 1981 by Gerd Binnig and Heinrich Rohrer. Atomic Force Microscopy (AFM) followed in 1985, developed by Gerd Binnig together with Calvin Quate and Christoph Gerber.

IMAGE: STM gives a two-dimensional image of the atoms. AFM gives a three-dimensional surface profile of nano-objects.

RESOLUTION: STM gives better resolution than AFM because of the exponential dependence of the tunneling current on distance. The force-distance dependence in AFM is much more complex when characteristics such as tip shape and contact force are considered.
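The exponential dependence mentioned above can be made concrete with a minimal sketch (not from the source). It uses the standard one-dimensional tunneling estimate I ∝ V·exp(-1.025·√φ·d), with the work function φ in eV and the gap d in angstroms; all numbers are illustrative assumptions.

```python
import math

def tunneling_current(d_angstrom, phi_ev=4.5, prefactor=1.0):
    """Relative tunneling current for a tip-sample gap of d_angstrom (Å)."""
    # I ∝ exp(-1.025 * sqrt(phi) * d): standard 1-D barrier estimate
    return prefactor * math.exp(-1.025 * math.sqrt(phi_ev) * d_angstrom)

I_5 = tunneling_current(5.0)   # assumed 5 Å gap
I_6 = tunneling_current(6.0)   # gap increased by 1 Å
# A 1 Å increase in gap cuts the current by roughly an order of magnitude,
# which is the origin of STM's extreme height sensitivity.
print(f"I(6 Å)/I(5 Å) ≈ {I_6 / I_5:.3f}")   # ≈ 0.114
```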

CONSISTS OF: STM uses a sharpened conducting tip. AFM uses a cantilever (typically silicon or silicon nitride, with a tip radius of curvature on the order of nanometers) with a sharp tip (probe) at its end that is used to scan the specimen surface.

DEPENDS ON: STM relies on the electrical tunneling current between the tip and the surface. AFM relies on the interatomic forces (such as van der Waals forces) between the tip and the sample.

TUNNELING CURRENT: STM records the tunneling current. AFM does not record a tunneling current but measures the small force between the tip and the surface.

TIP USED: STM uses a sharpened conducting (metallic) tip. AFM uses a cantilever with a sharp tip at its end.


INTERACTION: In STM the monitored probe-surface interaction is the tunneling current, while in AFM it is the van der Waals force.

PHYSICAL CONTACT: In STM the tip and substrate are in very close proximity but not actually in physical contact, while in AFM the tip and substrate are actually in physical contact.

ATTACHMENT OF TIP: In scanning tunneling microscopy the tip is not attached to a leaf spring. In the atomic force microscope the tip is attached to a tiny leaf spring, the cantilever, which has a low spring constant; bending of this cantilever is detected, often with a laser beam reflected from the cantilever.
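Since the cantilever behaves as a Hookean leaf spring, the measured bending converts directly to force. A minimal sketch with assumed, illustrative numbers (not from the source):

```python
# Cantilever as a Hookean spring: F = k * deflection.
k = 0.1            # N/m, assumed spring constant of a soft contact-mode lever
deflection = 1e-9  # m, assumed 1 nm bending measured via the reflected laser
force = k * deflection
print(f"Tip-sample force ≈ {force:.1e} N")  # ≈ 1.0e-10 N, i.e. 0.1 nN
```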

MOUNTED ON: In a scanning tunneling microscope the tip is mounted on the scanner; in an atomic force microscope the sample is mounted on the scanner.

TIP SPACE: The STM tip is kept at a short distance from the surface, while the AFM tip is not held at a distance but gently touches the surface.

VISUALIZATION: STM can visualize and even manipulate individual atoms. AFM can easily image non-conducting objects, e.g. DNA and proteins.

USED FOR: STM is a powerful instrument that is used for imaging surfaces at the atomic level, and is also used for measuring the conductance of single molecules. The AFM is one of the primary tools for imaging, measuring, and manipulating matter at the nanoscale.

ADVANTAGES & DISADVANTAGES: • In STM the writing voltage and tip-to-substrate spacing are integrally linked and cannot be controlled independently.

• AFM offers the advantage that the writing voltage and tip-to-substrate spacing can be controlled independently.

• AFM gives a three-dimensional image while STM gives only a two-dimensional image; this is an advantage of AFM over STM.

• Resolution of STM is higher than that of AFM. STM gives true atomic resolution.


• An AFM cannot scan images as fast as an STM, requiring several minutes for a typical scan, while an STM is capable of scanning at near real-time, although at relatively low quality.

Depth of field and depth of focus

The depth of field, D_ob, is the range of distance along the optical axis in which the specimen can move without the image appearing to lose sharpness. This obviously depends on the resolution of the microscope.

The depth of focus, D_im, is the extent of the region around the image plane in which the image will appear to be sharp. This depends on the (transverse) magnification, M_T.

Both depth of field and depth of focus are strongly dependent on changes in aperture (hence the semi-angle, α) and working distance (d_ob).

http://www.matter.org.uk/tem/depth_of_field.htm
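As a hedged summary of the standard small-angle relations behind these statements (consistent with the reference above, though not written out in the source), with resolution d, objective semi-angle α and transverse magnification M_T:

\[
D_{ob} \approx \frac{d}{\tan\alpha} \approx \frac{d}{\alpha},
\qquad
D_{im} \approx M_T^{2}\, D_{ob}
\]

A small aperture (small α) therefore gives a large depth of field, and the factor M_T² (the longitudinal magnification) makes the depth of focus around the image plane very large in electron microscopes.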

Dark field microscopy

Dark field microscopy (dark ground microscopy) describes microscopy methods, in both light and electron microscopy, which exclude the unscattered beam from the image. As a result, the field around the specimen (i.e. where there is no specimen to scatter the beam) is generally dark.

Light Microscopy Applications

In optical microscopy, darkfield describes an illumination technique used to enhance the contrast in unstained samples. It works by illuminating the sample with light that will not be collected by the objective lens, and thus will not form part of the image. This produces the classic appearance of a dark, almost black, background with bright objects on it.


The light's path

The steps are illustrated in the figure where an upright microscope is used.

Fig. : Diagram illustrating the light path through a dark field microscope.

1. Light enters the microscope for illumination of the sample.
2. A specially sized disc, the patch stop (see figure), blocks some light from the light source, leaving an outer ring of illumination.
3. The condenser lens focuses the light towards the sample.
4. The light enters the sample. Most is directly transmitted, while some is scattered from the sample.
5. The scattered light enters the objective lens, while the directly transmitted light simply misses the lens and is not collected, due to a direct illumination block (see figure).
6. Only the scattered light goes on to produce the image, while the directly transmitted light is omitted.

Advantages and disadvantages


Dark field microscopy is a very simple yet effective technique and well suited for uses involving live and unstained biological samples, such as a smear from a tissue culture or individual water-borne single-celled organisms. Considering the simplicity of the setup, the quality of images obtained from this technique is impressive.

The main limitation of dark field microscopy is the low light levels seen in the final image. This means the sample must be very strongly illuminated, which can cause damage to the sample. Dark field microscopy techniques are almost entirely free of artifacts, due to the nature of the process. However the interpretation of dark field images must be done with great care as common dark features of bright field microscopy images may be invisible, and vice versa.

While the dark field image may first appear to be a negative of the bright field image, different effects are visible in each. In bright field microscopy, features are visible where either a shadow is cast on the surface by the incident light, or a part of the surface is less reflective, possibly by the presence of pits or scratches. Raised features that are too smooth to cast shadows will not appear in bright field images, but the light that reflects off the sides of the feature will be visible in the dark field images.

Bright field microscopy

Bright field microscopy is the simplest of all the optical microscopy illumination techniques. The sample is illuminated with transmitted white light (i.e., illuminated from below and observed from above), and contrast in the sample arises from absorbance of some of the transmitted light in dense areas of the sample. Bright field microscopy is the simplest of a range of techniques used for illumination of samples in light microscopes, and its simplicity makes it a popular technique. The typical appearance of a bright field microscopy image is a dark sample on a bright background, hence the name.

Light path

The light path of a bright field microscope is extremely simple; no additional components are required beyond the normal light microscope setup. The light path therefore consists of:

Transillumination light source, commonly a halogen lamp in the microscope stand.
Condenser lens, which focuses light from the light source onto the sample.
Objective lens, which collects light from the sample and magnifies the image.
Oculars and/or a camera to view the sample image.

Bright field microscopy may use critical or Köhler illumination to illuminate the sample.


Performance

Bright field microscopy typically has low contrast with most biological samples as few absorb light to a great extent. Stains are often required to increase contrast which prevents use on live cells in many situations. Bright field illumination is useful for samples which have an intrinsic colour, for example chloroplasts in plant cells.

Bright field microscopy is a standard light microscopy technique, and its useful magnification is therefore limited by the resolving power achievable with the wavelength of visible light.

Summary

Advantages

Simplicity of setup with only basic equipment required.

Limitations

Very low contrast of most biological samples.
Low apparent optical resolution due to the blur of out-of-focus material.
The sample has to be stained before viewing; therefore, live cells cannot be viewed.

Enhancements

Reducing or increasing the amount of light from the source via the iris diaphragm.
Use of an oil immersion objective lens and a special immersion oil placed on a glass cover over the specimen. Immersion oil has the same refractive index as glass and improves the resolution of the observed specimen.

Use of sample staining methods for use in microbiology, such as simple stains (Methylene blue, Safranin, Crystal violet) and differential stains (Negative stains, flagellar stains, endospore stains).

Use of a colored (usually blue) or polarizing filter on the light source to highlight features not visible under white light. The use of filters is especially useful with mineral samples.

Fig. : Comparison of transillumination techniques used to generate contrast in a sample of tissue paper (1.559 μm/pixel).


Dark field illumination: sample contrast comes from light scattered by the sample.

Bright field illumination: sample contrast comes from absorbance of light in the sample.

Transmission electron microscopy

Transmission electron microscopy (TEM) is a microscopy technique whereby a beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through. An image is formed from the interaction of the electrons transmitted through the specimen; the image is magnified and focused onto an imaging device, such as a fluorescent screen or a layer of photographic film, or detected by a sensor such as a CCD camera.

TEMs are capable of imaging at a significantly higher resolution than light microscopes, owing to the small de Broglie wavelength of electrons. This enables the instrument's user to examine fine detail, even as small as a single column of atoms, which is tens of thousands of times smaller than the smallest object resolvable in a light microscope. TEM forms a major analysis method in a range of scientific fields, in both physical and biological sciences. TEMs find application in cancer research, virology, materials science as well as pollution and semiconductor research.


At smaller magnifications, TEM image contrast is due to absorption of electrons in the material, depending on the thickness and composition of the material. At higher magnifications, complex wave interactions modulate the intensity of the image, requiring expert analysis of the observed images. Alternative modes of operation allow the TEM to observe modulations in chemical identity, crystal orientation, electronic structure and sample-induced electron phase shift, in addition to regular absorption-based imaging.

The first TEM was built by Max Knoll and Ernst Ruska in 1931, with this group developing the first TEM with resolving power greater than that of light in 1933 and the first commercial TEM in 1939.

Uses:

The transmission electron microscope is used to characterize the microstructure of materials with very high spatial resolution. Information about the morphology, crystal structure and defects, crystal phases and composition, and magnetic microstructure can be obtained by a combination of electron-optical imaging (sub-Ångstrom in the Titan, 2.5 Å point resolution in the Tecnai), electron diffraction, and small probe capabilities. Further, the Titan provides significant in situ capabilities, allowing for the investigation of how material structure can evolve due to different environmental factors. The trade-off for this diverse range of structural information and high resolution is the challenge of producing very thin samples for electron transmission.

Principles of operation:


The transmission electron microscope uses a high energy electron beam transmitted through a very thin sample to image and analyze the microstructure of materials with atomic scale resolution. The electrons are focused with electromagnetic lenses and the image is observed on a fluorescent screen, or recorded on film or digital camera. The electrons are accelerated at several hundred kV, giving wavelengths much smaller than that of light: 200kV electrons have a wavelength of 0.025Å. However, whereas the resolution of the optical microscope is limited by the wavelength of light, that of the electron microscope is limited by aberrations inherent in electromagnetic lenses, to about 1-2 Å.
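The quoted figure of 0.025 Å at 200 kV can be checked with the standard relativistic de Broglie formula; a short sketch (the formula is standard, the code itself is only illustrative):

```python
import math

h = 6.62607015e-34     # Planck constant, J*s
m0 = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19    # elementary charge, C
c = 2.99792458e8       # speed of light, m/s

def electron_wavelength_m(V):
    """Relativistic de Broglie wavelength for accelerating voltage V (volts)."""
    return h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

lam = electron_wavelength_m(200e3)
print(f"lambda(200 kV) ≈ {lam * 1e10:.4f} Å")  # ≈ 0.0251 Å, matching the text
```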

Because even for very thin samples one is looking through many atoms, one does not usually see individual atoms. Rather the high resolution imaging mode of the microscope images the crystal lattice of a material as an interference pattern between the transmitted and diffracted beams. This allows one to observe planar and line defects, grain boundaries, interfaces, etc. with atomic scale resolution. The brightfield/darkfield imaging modes of the microscope, which operate at intermediate magnification, combined with electron diffraction, are also invaluable for giving information about the morphology, crystal phases, and defects in a material. Finally the microscope is equipped with a special imaging lens allowing for the observation of micromagnetic domain structures in a field-free environment.

The TEM is also capable of forming a focused electron probe, as small as 20 Å, which can be positioned on very fine features in the sample for microdiffraction information or analysis of x-rays for compositional information. The latter is the same signal as that used for EPMA and SEM composition analysis (see the EPMA facility), where the resolution is on the order of one micron due to beam spreading in the bulk sample. The spatial resolution for this compositional analysis in TEM is much higher, on the order of the probe size, because the sample is so thin. Conversely the signal is much smaller and therefore less quantitative. The high brightness field-emission gun improves the sensitivity and resolution of x-ray compositional analysis over that available with more traditional thermionic sources.

Restrictions on Samples:

Sample preparation for TEM generally requires more time and experience than for most other characterization techniques. A TEM specimen must be approximately 1000 Å or less in thickness in the area of interest. The entire specimen must fit into a 3mm diameter cup and be less than about 100 microns in thickness. A thin, disc shaped sample with a hole in the middle, the edges of the hole being thin enough for TEM viewing, is typical. The initial disk is usually formed by cutting and grinding from bulk or thin film/substrate material, and the final thinning done by ion milling. Other specimen preparation possibilities include direct deposition onto a TEM-thin substrate (Si3N4, carbon); direct dispersion of powders on such a substrate; grinding and polishing using special devices (t-tool, tripod); chemical etching and electropolishing; lithographic patterning of walls and pillars for cross-section viewing; and focused ion beam (FIB) sectioning for site specific samples.


Artifacts are common in TEM samples, due both to the thinning process and to changing the form of the original material. For example surface oxide films may be introduced during ion milling and the strain state of a thin film may change if the substrate is removed. Most artifacts can either be minimized by appropriate preparation techniques or be systematically identified and separated from real information.

Sample preparation

Sample preparation in TEM can be a complex procedure. TEM specimens are required to be at most hundreds of nanometers thick; unlike neutron or X-ray radiation, the electron beam interacts readily with the sample, an effect that increases roughly with atomic number squared (Z²).[14] High quality samples will have a thickness that is comparable to the mean free path of the electrons that travel through the samples, which may be only a few tens of nanometers. Preparation of TEM specimens is specific to the material under analysis and the desired information to obtain from the specimen. As such, many generic techniques have been used for the preparation of the required thin sections.

Materials that have dimensions small enough to be electron transparent, such as powders or nanotubes, can be quickly prepared by the deposition of a dilute sample containing the specimen onto support grids or films. In the biological sciences, in order to withstand the instrument vacuum and facilitate handling, biological specimens can be fixated using either a negative staining material such as uranyl acetate or by plastic embedding. Alternately, samples may be held at liquid nitrogen temperatures after embedding in vitreous ice.[35] In material science and metallurgy the specimens tend to be naturally resistant to vacuum, but still must be prepared as a thin foil, or etched so some portion of the specimen is thin enough for the beam to penetrate. Constraints on the thickness of the material may be limited by the scattering cross-section of the atoms of which the material is composed.

Tissue sectioning

By passing samples over a glass or diamond edge, small, thin sections can be readily obtained using a semi-automated method.[36] This method is used to obtain thin, minimally deformed samples that allow for the observation of tissue samples. Additionally, inorganic samples have been studied, such as aluminium, although this usage is limited owing to the heavy damage induced in harder samples.[37] To prevent charge build-up at the sample surface, tissue samples need to be coated with a thin layer of conducting material, such as carbon, where the coating thickness is several nanometers. This may be achieved via an electric arc deposition process using a sputter coating device.

Sample staining


Details in light microscope samples can be enhanced by stains that absorb light; similarly TEM samples of biological tissues can utilize high atomic number stains to enhance contrast. The stain absorbs electrons or scatters part of the electron beam which otherwise is projected onto the imaging system. Compounds of heavy metals such as osmium, lead, or uranium may be used prior to TEM observation to selectively deposit electron dense atoms in or on the sample in desired cellular or protein regions, requiring an understanding of how heavy metals bind to biological tissues.

Mechanical milling

Mechanical polishing may be used to prepare samples. Polishing needs to be done to a high quality, to ensure constant sample thickness across the region of interest. A diamond, or cubic boron nitride polishing compound may be used in the final stages of polishing to remove any scratches that may cause contrast fluctuations due to varying sample thickness. Even after careful mechanical milling, additional fine methods such as ion etching may be required to perform final stage thinning.

Chemical etching

Certain samples may be prepared by chemical etching, particularly metallic specimens. These samples are thinned using a chemical etchant, such as an acid, to prepare the sample for TEM observation. Devices to control the thinning process may allow the operator to control either the voltage or current passing through the specimen, and may include systems to detect when the sample has been thinned to a sufficient level of optical transparency.

Ion etching

Ion etching is a sputtering process that can remove very fine quantities of material. This is used to perform a finishing polish of specimens polished by other means. Ion etching uses an inert gas passed through an electric field to generate a plasma stream that is directed to the sample surface. Acceleration energies for gases such as argon are typically a few kilovolts. The sample may be rotated to promote even polishing of the sample surface. The sputtering rate of such methods is on the order of tens of micrometers per hour, limiting the method to only extremely fine polishing.

More recently, focused ion beam (FIB) methods have been used to prepare samples. FIB is a relatively new technique to prepare thin samples for TEM examination from larger specimens. Because FIB can be used to micro-machine samples very precisely, it is possible to mill very thin membranes from a specific area of interest in a sample, such as a semiconductor or metal. Unlike inert gas ion sputtering, FIB makes use of significantly more energetic gallium ions and may alter the composition or structure of the material through gallium implantation.[38]

Selected area (electron) diffraction:


Selected area (electron) diffraction (abbreviated as SAD or SAED), is a crystallographic experimental technique that can be performed inside a transmission electron microscope (TEM).

In a TEM, a thin crystalline specimen is subjected to a parallel beam of high-energy electrons. As TEM specimens are typically ~100 nm thick, and the electrons typically have an energy of 100-400 kiloelectron volts, the electrons pass through the sample easily. In this case, electrons are treated as wave-like, rather than particle-like (see wave-particle duality). Because the wavelength of high-energy electrons is a fraction of a nanometer, and the spacings between atoms in a solid are only slightly larger, the atoms act as a diffraction grating to the electrons, which are diffracted. That is, some fraction of them will be scattered to particular angles, determined by the crystal structure of the sample, while others continue to pass through the sample without deflection.

As a result, the image on the screen of the TEM will be a series of spots—the selected area diffraction pattern, SADP, each spot corresponding to a satisfied diffraction condition of the sample's crystal structure. If the sample is tilted, the same crystal will stay under illumination, but different diffraction conditions will be activated, and different diffraction spots will appear or disappear.

Fig. : SADP of a single austenite crystal in a piece of steel

SAD is referred to as "selected" because the user can easily choose from which part of the specimen to obtain the diffraction pattern. Located below the sample holder on the TEM column is a selected area aperture, which can be inserted into the beam path. This is a thin strip of metal that will block the beam. It contains several different sized holes, and can be moved by the user. The effect is to block all of the electron beam except for the small fraction passing through one of the holes; by moving the aperture hole to the section of the sample the user wishes to examine, this particular area is selected by the aperture, and only this section will contribute to the SADP on the screen. This is important, for example, in polycrystalline specimens. If more than one crystal contributes to the SADP, it can be difficult or impossible to analyze. As such, it is useful to select a single crystal for analysis at a time. It may also be useful to select two crystals at a time, in order to examine the crystallographic orientation between them.

As a diffraction technique, SAD can be used to identify crystal structures and examine crystal defects. It is similar to x-ray diffraction, but unique in that areas as small as several hundred nanometers in size can be examined, whereas x-ray diffraction typically samples areas several centimeters in size.

A diffraction pattern is made under broad, parallel electron illumination. An aperture in the image plane is used to select the diffracted region of the specimen, giving site-selective diffraction analysis. SAD patterns are a projection of the reciprocal lattice, with lattice reflections showing as sharp diffraction spots. By tilting a crystalline sample to low-index zone axes, SAD patterns can be used to identify crystal structures and measure lattice parameters. SAD is essential for setting up dark field (DF) imaging conditions. Other uses of SAD include analysis of: lattice matching; interfaces; twinning; and certain crystalline defects.[1]
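In practice SAD patterns are indexed with the standard camera equation, R·d ≈ λ·L, relating the measured spot distance R to the interplanar spacing d via the camera length L. The sketch below is illustrative only; the camera length and spot distance are assumed values.

```python
# Camera equation for indexing a SAD pattern: R * d = lambda * L.
lam_A = 0.0251   # Å, electron wavelength at 200 kV (see earlier estimate)
L_mm = 1000.0    # mm, assumed camera length
R_mm = 11.5      # mm, assumed measured distance of a spot from the central beam

d_A = lam_A * (L_mm / R_mm)   # interplanar spacing in Å
print(f"d ≈ {d_A:.3f} Å")     # ≈ 2.18 Å for these illustrative numbers
```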

SAD is used primarily in material science and solid state physics, and is one of the most commonly used experimental techniques in those fields.

Scanning electron microscope

The scanning electron microscope (SEM) is a type of electron microscope that images the sample surface by scanning it with a high-energy beam of electrons in a raster scan pattern. The electrons interact with the atoms that make up the sample producing signals that contain information about the sample's surface topography, composition and other properties such as electrical conductivity.

Scanning process and image formation


Fig. : Schematic diagram of an SEM.

In a typical SEM, an electron beam is thermionically emitted from an electron gun fitted with a tungsten filament cathode. Tungsten is normally used in thermionic electron guns because it has the highest melting point and lowest vapour pressure of all metals, thereby allowing it to be heated for electron emission, and because of its low cost. Other types of electron emitters include lanthanum hexaboride (LaB6) cathodes, which can be used in a standard tungsten filament SEM if the vacuum system is upgraded, and field emission guns (FEGs), which may be of the cold-cathode type using tungsten single-crystal emitters or the thermally assisted Schottky type, using emitters of zirconium oxide.

The electron beam, which typically has an energy ranging from 0.5 keV to 40 keV, is focused by one or two condenser lenses to a spot about 0.4 nm to 5 nm in diameter. The beam passes through pairs of scanning coils or pairs of deflector plates in the electron column, typically in the final lens, which deflect the beam in the x and y axes so that it scans in a raster fashion over a rectangular area of the sample surface.
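The raster logic can be summarized in a toy sketch (purely illustrative, not an actual SEM interface; detector_signal is a hypothetical stand-in for the amplified detector output): the beam dwells at each (x, y) position and the recorded signal becomes that pixel's brightness.

```python
def acquire_image(width, height, detector_signal):
    """Build an image by visiting pixels in raster order."""
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):          # slow scan direction
        for x in range(width):       # fast scan direction
            # deflect beam to (x, y), dwell, and record the signal
            image[y][x] = detector_signal(x, y)
    return image

# Example with a dummy signal function standing in for the real detector:
img = acquire_image(4, 3, lambda x, y: float(x + y))
print(img)
```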

When the primary electron beam interacts with the sample, the electrons lose energy by repeated random scattering and absorption within a teardrop-shaped volume of the specimen known as the interaction volume, which extends from less than 100 nm to around 5 µm into the surface. The size of the interaction volume depends on the electron's landing energy, the atomic number of the specimen and the specimen's density. The energy exchange between the electron beam and the sample results in the reflection of high-energy electrons by elastic scattering, emission of secondary electrons by inelastic scattering and the emission of electromagnetic radiation, each of which can be detected by specialized detectors.

The beam current absorbed by the specimen can also be detected and used to create images of the distribution of specimen current. Electronic amplifiers of various types are used to amplify the signals, which are displayed as variations in brightness on a cathode ray tube. The raster scanning of the CRT display is synchronised with that of the beam on the specimen in the microscope, and the resulting image is therefore a distribution map of the intensity of the signal being emitted from the scanned area of the specimen. The image may be captured by photography from a high resolution cathode ray tube, but in modern machines is digitally captured and displayed on a computer monitor and saved to a computer's hard disk.
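A widely used back-of-envelope estimate for the depth of this interaction volume is the Kanaya-Okayama range (not given in the source; the material values below are assumptions for illustration):

```python
def kanaya_okayama_range_um(E_keV, A, Z, rho):
    """Kanaya-Okayama electron range in micrometers.

    E_keV: beam energy in keV; A: atomic weight (g/mol);
    Z: atomic number; rho: density (g/cm^3).
    """
    return 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho)

# Iron target, 20 keV beam (illustrative assumed values):
R = kanaya_okayama_range_um(20.0, A=55.85, Z=26, rho=7.87)
print(f"Estimated electron range in Fe at 20 keV ≈ {R:.1f} µm")  # ≈ 1.6 µm
```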

Sample preparation

All samples must be of an appropriate size to fit in the specimen chamber and are generally mounted rigidly on a specimen holder called a specimen stub. Several models of SEM can examine any part of a 6-inch (15 cm) semiconductor wafer, and some can tilt an object of that size to 45°.

For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge at the surface. Metal objects require little special preparation for SEM except for cleaning and mounting on a specimen stub. Nonconductive specimens tend to charge when scanned by the electron beam, and especially in secondary electron imaging mode, this causes scanning faults and other image artifacts. They are therefore usually coated with an ultrathin coating of electrically-conducting material, commonly gold, deposited on the sample either by low vacuum sputter coating or by high vacuum evaporation. Conductive materials in current use for specimen coating include gold, gold/palladium alloy, platinum, osmium,[5] iridium, tungsten, chromium and graphite. Coating prevents the accumulation of static electric charge on the specimen during electron irradiation.

Two reasons for coating, even when there is enough specimen conductivity to prevent charging, are to increase signal and surface resolution, especially with samples of low atomic number (Z). The improvement in resolution arises because backscattering and secondary electron emission near the surface are enhanced and thus an image of the surface is formed.

An alternative to coating for some biological samples is to increase the bulk conductivity of the material by impregnation with osmium using variants of the OTO staining method (O-osmium, T-thiocarbohydrazide, O-osmium).[6][7] Nonconducting specimens may be imaged uncoated using specialized SEM instrumentation such as the "Environmental SEM" (ESEM) or field emission gun (FEG) SEMs operated at low voltage. Environmental SEM instruments place the specimen in a relatively high pressure chamber where the working distance is short and the electron optical column is differentially pumped to keep the pressure adequately low at the electron gun. The high pressure region around the sample in the ESEM neutralizes charge and provides an amplification of the secondary electron signal. Low voltage SEM of non-conducting specimens can be operationally difficult to accomplish in a conventional SEM and is typically a research application for specimens that are sensitive to the process of applying conductive coatings. Low-voltage SEM is typically conducted in an FEG-SEM because the FEG is capable of producing high primary electron brightness even at low accelerating potentials. Operating conditions must be adjusted such that the local space charge is at or near neutral, with adequate low voltage secondary electrons being available to neutralize any positively charged surface sites. This requires that the primary electron beam's potential and current be tuned to the characteristics of the sample specimen.

Embedding in a resin with further polishing to a mirror-like finish can be used for both biological and materials specimens when imaging in backscattered electrons or when doing quantitative X-ray microanalysis.

Detection of secondary electrons

The most common imaging mode collects low-energy (<50 eV) secondary electrons that are ejected from the K-shell orbitals of the specimen atoms by inelastic scattering interactions with beam electrons. Due to their low energy, these electrons originate within a few nanometers from the sample surface.[17] The electrons are detected by an Everhart-Thornley detector,[18] which is a type of scintillator-photomultiplier system. The secondary electrons are first collected by attracting them towards an electrically-biased grid at about +400 V, and then further accelerated towards a phosphor or scintillator positively biased to about +2,000 V. The accelerated secondary electrons are now sufficiently energetic to cause the scintillator to emit flashes of light (cathodoluminescence), which are conducted to a photomultiplier outside the SEM column via a light pipe and a window in the wall of the specimen chamber. The amplified electrical signal output by the photomultiplier is displayed as a two-dimensional intensity distribution that can be viewed and photographed on an analogue video display, or subjected to analog-to-digital conversion and displayed and saved as a digital image. This process relies on a raster-scanned primary beam. The brightness of the signal depends on the number of secondary electrons reaching the detector. If the beam enters the sample perpendicular to the surface, then the activated region is uniform about the axis of the beam and a certain number of electrons "escape" from within the sample. As the angle of incidence increases, the "escape" distance of one side of the beam will decrease, and more secondary electrons will be emitted. Thus steep surfaces and edges tend to be brighter than flat surfaces, which results in images with a well-defined, three-dimensional appearance. Using this technique, image resolution less than 0.5 nm is possible.
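The edge and tilt brightening described above is often summarized by the secant law for secondary electron yield, δ(θ) ≈ δ0/cos θ. A minimal sketch with an assumed flat-surface yield (illustrative only, not from the source):

```python
import math

def se_yield(theta_deg, delta0=0.1):
    """Secondary electron yield for a surface tilted by theta_deg degrees.

    delta0 is an assumed yield at normal incidence (illustrative value).
    """
    return delta0 / math.cos(math.radians(theta_deg))

# Tilted surfaces emit more secondaries, so edges and slopes appear bright:
for theta in (0, 45, 70):
    print(f"tilt {theta:2d}°: relative SE yield ≈ {se_yield(theta)/se_yield(0):.2f}")
```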

Detection of backscattered electrons


Fig. : Comparison of SEM techniques. Top: backscattered electron analysis (composition). Bottom: secondary electron analysis (topography).

Backscattered electrons (BSE) consist of high-energy electrons originating in the electron beam, that are reflected or back-scattered out of the specimen interaction volume by elastic scattering interactions with specimen atoms. Since heavy elements (high atomic number) backscatter electrons more strongly than light elements (low atomic number), and thus appear brighter in the image, BSE are used to detect contrast between areas with different chemical compositions.[17] The Everhart-Thornley detector, which is normally positioned to one side of the specimen, is inefficient for the detection of backscattered electrons because few such electrons are emitted in the solid angle subtended by the detector, and because the positively biased detection grid has little ability to attract the higher energy BSE electrons. Dedicated backscattered electron detectors are positioned above the sample in a "doughnut" type arrangement, concentric with the electron beam, maximising the solid angle of collection. BSE detectors are usually either of scintillator or semiconductor types. When all parts of the detector are used to collect electrons symmetrically about the beam, atomic number contrast is produced. However, strong topographic contrast is produced by collecting back-scattered electrons from one side above the specimen using an asymmetrical, directional BSE detector; the resulting contrast appears as illumination of the topography from that side. Semiconductor detectors can be made in radial segments that can be switched in or out to control the type of contrast produced and its directionality.


Backscattered electrons can also be used to form an electron backscatter diffraction (EBSD) image that can be used to determine the crystallographic structure of the specimen.

Cathodoluminescence

Cathodoluminescence, the emission of light when atoms excited by high-energy electrons return to their ground state, is analogous to UV-induced fluorescence, and some materials such as zinc sulfide and some fluorescent dyes, exhibit both phenomena. Cathodoluminescence is most commonly experienced in everyday life as the light emission from the inner surface of the cathode ray tube in television sets and computer CRT monitors. In the SEM, CL detectors either collect all light emitted by the specimen, or can analyse the wavelengths emitted by the specimen and display an emission spectrum or an image of the distribution of cathodoluminescence emitted by the specimen in real colour.

Everhart-Thornley detector

The Everhart-Thornley Detector (E-T detector or ET detector) is a secondary electron detector used in scanning electron microscopes (SEMs). It is named after its designers, Thomas E. Everhart and Richard F. M. Thornley, who in 1960 published their design to increase the efficiency of existing secondary electron detectors by adding a light pipe to carry the photon signal from the scintillator inside the evacuated specimen chamber of the SEM to the photomultiplier outside the chamber.[1] Prior to this, Everhart had improved a design for secondary electron detection by Vladimir Zworykin and J. A. Rajchman by changing the electron multiplier to a photomultiplier. The Everhart-Thornley Detector, with its light guide and highly efficient photomultiplier, is the most frequently used detector in SEMs.

The detector consists primarily of a scintillator inside a Faraday cage inside the specimen chamber of the microscope. A low positive voltage is applied to the Faraday cage to attract the relatively low energy (less than 50 eV by definition) secondary electrons. Other electrons within the specimen chamber are not attracted by this low voltage and will only reach the detector if their direction of travel takes them to it. The scintillator is held at a high positive voltage (on the order of 10,000 V) to accelerate the incoming electrons, which are converted to light photons on striking it. The photons are directed to the light guide by a metal coating on the scintillator that acts as a mirror. In the light pipe the photons travel outside of the microscope's vacuum chamber to a photomultiplier tube for amplification.

The E-T secondary electron detector can be used in the SEM's back-scattered electron mode by either turning off the Faraday cage or by applying a negative voltage to the Faraday cage. However, better back-scattered electron images come from dedicated BSE detectors rather than from using the E-T detector as a BSE detector.


Energy-dispersive X-ray spectroscopy

Energy-dispersive X-ray spectroscopy (EDS or EDX) is an analytical technique used for the elemental analysis or chemical characterization of a sample. It is one of the variants of X-ray fluorescence spectroscopy which relies on the investigation of a sample through interactions between electromagnetic radiation and matter, analyzing X-rays emitted by the matter in response to being hit with charged particles. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure allowing X-rays that are characteristic of an element's atomic structure to be identified uniquely from one another.

To stimulate the emission of characteristic X-rays from a specimen, a high-energy beam of charged particles such as electrons or protons (see PIXE), or a beam of X-rays, is focused into the sample being studied. At rest, an atom within the sample contains ground state (or unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell while creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy-dispersive spectrometer. As the energy of the X-rays is characteristic of the difference in energy between the two shells, and of the atomic structure of the element from which they were emitted, this allows the elemental composition of the specimen to be measured.
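The element-specific character of these X-ray energies is captured to first order by Moseley's law for Kα lines, E ≈ 10.2 eV·(Z-1)². A hedged sketch (the law is standard; the comparison values are approximate tabulated energies):

```python
def kalpha_energy_keV(Z):
    """Moseley's-law estimate of the K-alpha energy for atomic number Z."""
    # 10.2 eV is (3/4) of the Rydberg energy (13.6 eV)
    return 10.2 * (Z - 1)**2 / 1000.0

for Z, name in [(13, "Al"), (26, "Fe"), (29, "Cu")]:
    print(f"{name} (Z={Z}): K-alpha ≈ {kalpha_energy_keV(Z):.2f} keV")
# Tabulated values: Al ~1.49, Fe ~6.40, Cu ~8.05 keV -- the estimate is close.
```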


Fig.: EDS spectrum of the mineral crust of Rimicaris exoculata[1]

Equipment: the connection with SEM

There are four primary components of the EDS setup: the beam source; the X-ray detector; the pulse processor; and the analyzer. A number of free-standing EDS systems exist. However, EDS systems are most commonly found on scanning electron microscopes (SEM-EDS) and electron microprobes. Scanning electron microscopes are equipped with a cathode and magnetic lenses to create and focus a beam of electrons, and since the 1960s they have been equipped with elemental analysis capabilities. A detector is used to convert X-ray energy into voltage signals; this information is sent to a pulse processor, which measures the signals and passes them onto an analyzer for data display and analysis.

Technological variants

Principle of EDS

The excess energy of the electron that migrates to an inner shell to fill the newly-created hole can do more than emit an X-ray. Often, instead of X-ray emission, the excess energy is transferred to a third electron from a further outer shell, prompting its ejection. This ejected species is called an Auger electron, and the method for its analysis is known as Auger electron spectroscopy (AES). X-ray photoelectron spectroscopy (XPS) is another close relative of EDS, utilizing ejected electrons in a manner similar to that of AES. Information on the quantity and kinetic energy of ejected electrons is used to determine the binding energy of these now-liberated electrons, which is element-specific and allows chemical characterization of a sample.

EDS is often contrasted with its spectroscopic counterpart, WDS (wavelength dispersive X-ray spectroscopy). WDS differs from EDS in that it uses Bragg diffraction from an analyzing crystal to select individual X-ray wavelengths as its raw data. WDS has a much finer spectral resolution than EDS. WDS also avoids the problems associated with artifacts in EDS (false peaks, noise from the amplifiers, and microphonics). However, in WDS only one element can be analyzed at a time, while EDS gathers a spectrum of all elements, within limits, of a sample.

Accuracy of EDS

Accuracy of an EDS spectrum can be affected by many factors. Windows in front of the detector can absorb low-energy X-rays (i.e. EDS detectors cannot detect elements with atomic number less than 4, that is, H, He, and Li).[2] Over-voltage settings in EDS alter the peak sizes: raising the over-voltage on the SEM shifts the spectrum toward larger energies, making higher-energy peaks larger and lower-energy peaks smaller. Also, many elements have overlapping peaks (e.g., Ti Kβ and V Kα, Mn Kβ and Fe Kα).[3] The accuracy of the spectrum can also be affected by the nature of the sample. X-rays can be generated by any atom in the sample that is sufficiently excited by the incoming beam. These X-rays are emitted in all directions, and so they may not all escape the sample. The likelihood of an X-ray escaping the specimen, and thus being available to detect and measure, depends on the energy of the X-ray and the amount and density of material it has to pass through. This can result in reduced accuracy in inhomogeneous and rough samples.

Emerging technology

There is a trend towards a newer EDS detector, called the silicon drift detector (SDD). The SDD consists of a high-resistivity silicon chip where electrons are driven to a small collecting anode. The advantage lies in the extremely low capacitance of this anode, thereby utilizing shorter processing times and allowing very high throughput. Benefits of the SDD include:

1. High count rates and processing,
2. Better resolution than traditional Si(Li) detectors at high count rates,
3. Lower dead time (time spent on processing X-ray events),
4. Faster analytical capabilities and more precise X-ray maps or particle data collected in seconds,
5. Ability to be stored and operated at relatively high temperatures, eliminating the need for liquid nitrogen cooling.

Because the capacitance of the SDD chip is independent of the active area of the detector, much larger SDD chips can be utilized (40 mm² or more). This allows for even higher count rate collection. Further benefits of large area chips include:


1. Minimizing SEM beam current, allowing for optimization of imaging under analytical conditions,
2. Reduced sample damage, and
3. Smaller beam interaction and improved spatial resolution for high-speed maps.

In recent years, a different type of EDS detector, based upon a microcalorimeter, has become commercially available. This new model allegedly has the simultaneous detection capabilities of EDS as well as the high spectral resolution of WDS. The EDS microcalorimeter relies on two components: an absorber and a thermistor. The former absorbs X-rays emitted from the sample and converts this energy into heat; the latter measures the subsequent change in temperature due to the influx of heat (in essence, a thermometer). The EDS microcalorimeter has suffered from a number of drawbacks, including low count rates, poor collection efficiencies and small detector areas. The count rate is hampered by its reliance on the time constant of the calorimeter's electrical circuit. The collection efficiency is a function of the absorber material and remains to be optimized. The detector area must be small in order to keep the heat capacity as small as possible and maximize thermal sensitivity (resolution). Innovative engineering solutions are necessary for further improvement of spectroscopic microanalysis.

Electron microprobe

An electron microprobe (EMP), also known as an electron probe microanalyzer (EPMA) or electron micro probe analyzer (EMPA), is an analytical tool used to non-destructively determine the chemical composition of small volumes of solid materials. It works similarly to a scanning electron microscope: the sample is bombarded with an electron beam, emitting X-rays at wavelengths characteristic of the elements being analyzed. This enables the abundances of elements present within small sample volumes (typically 10-30 cubic micrometers or less) to be determined.[1] The concentrations of elements from boron to plutonium can be measured at levels as low as 100 parts per million (ppm). Recent improvements on EPMAs (e.g. the Cameca SX100 with five over-sized PET crystals for trace element analysis) can accurately measure elemental concentrations down to approximately 10 ppm.

How it works

A beam of electrons is fired at a sample. The beam causes each element in the sample to emit X-rays at a characteristic frequency; the X-rays can then be detected by the electron microprobe.[2] The size of the electron beam determines the trade-off between resolution and scan time.[3]


Detailed description

Low-energy electrons are produced from a tungsten filament or a lanthanum hexaboride crystal cathode and accelerated by a positively biased anode plate to energies of 10 to 30 kiloelectronvolts (keV). The anode plate has a central aperture, and electrons that pass through it are collimated and focused by a series of magnetic lenses and apertures. The resulting electron beam, approximately 1 micrometre in diameter, may be rastered across the sample or used in spot mode to excite various effects in the sample. Among these effects are: phonon excitation (heat), cathodoluminescence (visible light fluorescence), continuum X-ray radiation (bremsstrahlung), characteristic X-ray radiation, secondary electrons (plasmon production), backscattered electron production, and Auger electron production.

Fig.: A Cambridge Scientific Instrument Company "Microscan" electron probe microanalyzer based on a design by David Wittry. This model is housed at the Cambridge Museum of Technology

The characteristic X-rays are used for chemical analysis. Specific X-ray wavelengths are selected and counted, either by wavelength-dispersive spectrometry (WDS) or energy-dispersive X-ray spectroscopy (EDS). WDS utilizes Bragg diffraction from crystals to select X-ray wavelengths of interest and direct them to gas-flow or sealed proportional detectors. In contrast, EDS uses a solid-state semiconductor detector to accumulate X-rays of all wavelengths produced from the sample. While EDS yields more information and typically requires a much shorter counting time, WDS is the more precise technique because of its superior X-ray peak resolution.

Chemical composition is determined by comparing the intensities of characteristic X-rays from the sample material with intensities from standards of known composition. Counts from the sample must be corrected for matrix effects (absorption and secondary fluorescence) to yield chemical compositions. The resulting chemical information is gathered in textural context. Variations in chemical composition within a material (zoning), such as a mineral grain or metal, can be readily determined.

Uses

Material sciences and engineering

The technique is commonly used for analyzing the chemical composition of metals, alloys, ceramics, and glasses. It is particularly useful for assessing the composition of individual particles or grains and chemical changes on the scale of a few micrometres to millimeters. The electron microprobe is widely used for research, quality control, and failure analysis.

Mineralogy and Petrology

This technique is most commonly used by mineralogists and petrologists. Most rocks are aggregates of small mineral grains. These grains may preserve chemical information adopted during their formation and subsequent alteration. This information may illuminate geologic processes such as crystallization, lithification, volcanism, metamorphism, orogenic events (mountain building), and plate tectonics. The technique is also used for the study of extraterrestrial rocks (i.e. meteorites), and provides chemical data which are vital to understanding the evolution of the planets, asteroids, and comets.

The change in elemental composition from the center (also known as the core) to the edge (or rim) of a mineral can yield information about the history of the crystal's formation, including the temperature, pressure, and chemistry of the surrounding medium. Quartz crystals, for example, incorporate a small but measurable amount of titanium into their structure as a function of temperature, pressure, and the amount of titanium available in their environment. Changes in these parameters are recorded by the titanium content as the crystal grows.

Palaeontology

In exceptionally preserved fossils, such as those of the Burgess shale, soft parts of organisms may be preserved. Since these fossils are often compressed into a 2D film, it can be difficult to deduce what features were what: a famous example is that of triangular extensions in Opabinia, which were interpreted as either legs or extensions of the gut. Elemental mapping showed that they had a similar composition to the gut, favouring the second interpretation.[4] Because of the thin nature of the carbon films, only low voltages (5-15 kV) can be used in such specimens.[5]

Scanning tunneling microscope

A scanning tunneling microscope (STM) is an instrument for imaging surfaces at the atomic level. Its development in 1981 earned its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zürich), the Nobel Prize in Physics in 1986.[1][2] For an STM, good resolution is considered to be 0.1 nm lateral resolution and 0.01 nm depth resolution.[3]

With this resolution, individual atoms within materials are routinely imaged and manipulated. The STM can be used not only in ultra high vacuum but also in air, water, and various other liquid or gas ambients, and at temperatures ranging from near zero kelvin to a few hundred degrees Celsius.[4]

The STM is based on the concept of quantum tunneling. When a conducting tip is brought very near to the surface to be examined, a bias (voltage difference) applied between the two can allow electrons to tunnel through the vacuum between them. The resulting tunneling current is a function of tip position, applied voltage, and the local density of states (LDOS) of the sample.[4] Information is acquired by monitoring the current as the tip's position scans across the surface, and is usually displayed in image form. STM can be a challenging technique, as it requires extremely clean and stable surfaces, sharp tips, excellent vibration control, and sophisticated electronics.
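The exponential dependence of the tunneling current on the gap width is what gives the STM its extreme depth sensitivity. Below is a minimal numeric sketch of the standard one-dimensional barrier estimate, I proportional to exp(-2*kappa*W); the 4.5 eV work function is an assumed typical value, not a property of any particular instrument:

import math

# One-dimensional tunneling estimate: I ~ exp(-2*kappa*W), with
# kappa = sqrt(2*m*phi)/hbar. phi = 4.5 eV is an assumed typical work function.
hbar = 1.0545718e-34        # J*s
m_e = 9.1093837e-31         # electron mass, kg
eV = 1.602176634e-19        # J per eV
phi = 4.5 * eV              # assumed barrier height (work function)

kappa = math.sqrt(2 * m_e * phi) / hbar        # ~1.1e10 per metre
ratio = math.exp(-2 * kappa * 1e-10)           # gap increased by 0.1 nm (1 angstrom)
print(f"current falls to {ratio:.2f} of its value per 0.1 nm of gap")  # ~0.11

This order-of-magnitude change in current per angstrom of gap is why the feedback loop can hold the tip height, and hence the topography signal, to hundredths of a nanometre.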

Procedure

First, a voltage bias is applied and the tip is brought close to the sample by some coarse sample-to-tip control, which is turned off when the tip and sample are sufficiently close. At close range, fine control of the tip in all three dimensions is typically piezoelectric, maintaining a tip-sample separation W in the 4-7 Å range, which is the equilibrium position between attractive (3 < W < 10 Å) and repulsive (W < 3 Å) interactions[4]. In this situation, the voltage bias will cause electrons to tunnel between the tip and sample, creating a current that can be measured. Once tunneling is established, the tip's bias and position with respect to the sample can be varied (with the details of this variation depending on the experiment) and data are obtained from the resulting changes in current.

If the tip is moved across the sample in the x-y plane, the changes in surface height and density of states cause changes in current. These changes are mapped in images. This change in current with respect to position can be measured itself, or the height, z, of the tip corresponding to a constant current can be measured[4]. These two modes are called constant height mode and constant current mode, respectively. In constant current mode, feedback electronics adjust the height via the voltage applied to the piezoelectric height-control mechanism[5]. This leads to a height variation, and thus the image comes from the tip topography across the sample and gives a constant-charge-density surface; contrast in the image is due to variations in charge density[6]. In constant height mode, the voltage and height are both held constant while the current changes to keep the voltage from changing; this leads to an image made of current changes over the surface, which can be related to charge density[6]. The benefit of using constant height mode is that it is faster, as the piezoelectric movements require more time to register the change in constant current mode than the voltage response in constant height mode[6]. All images produced by STM are grayscale, with color optionally added in post-processing in order to visually emphasize important features.

In addition to scanning across the sample, information on the electronic structure at a given location in the sample can be obtained by sweeping voltage and measuring current at a specific location[3]. This type of measurement is called scanning tunneling spectroscopy (STS) and typically results in a plot of the local density of states as a function of energy within the sample. The advantage of STM over other measurements of the density of states lies in its ability to make extremely local measurements: for example, the density of states at an impurity site can be compared to the density of states far from impurities.[7]

Frame rates of at least 1 Hz enable so-called video-STM (up to 50 Hz is possible).[8][9] This can be used to follow surface diffusion.[10]


Instrumentation

Fig. : Schematic view of an STM

The components of an STM include a scanning tip, a piezoelectrically controlled height (z) and x,y scanner, coarse sample-to-tip positioning control, a vibration isolation system, and a computer[5].

The resolution of an image is limited by the radius of curvature of the scanning tip of the STM. Additionally, image artifacts can occur if the tip has two apexes at the end rather than a single atom; this leads to "double-tip imaging," a situation in which both apexes contribute to the tunneling[3]. It has therefore been essential to develop processes for consistently obtaining sharp, usable tips. Recently, carbon nanotubes have been used for this purpose.[11]

The tip is often made of tungsten or platinum-iridium, though gold is also used[3]. Tungsten tips are usually made by electrochemical etching, and platinum-iridium tips by mechanical shearing[3].

Due to the extreme sensitivity of tunnel current to height, proper vibration isolation or an extremely rigid STM body is imperative for obtaining usable results. In the first STM by Binnig and Rohrer, magnetic levitation was used to keep the STM free from vibrations; now mechanical spring or gas spring systems are often used[4]. Additionally, mechanisms for reducing eddy currents are sometimes implemented.


Maintaining the tip position with respect to the sample, scanning the sample, and acquiring the data are computer controlled[5]. The computer may also be used for enhancing the image with the help of image processing[12][13] as well as for performing quantitative measurements.[14]

Atomic force microscopy

Atomic force microscopy (AFM) or scanning force microscopy (SFM) is a very high-resolution type of scanning probe microscopy, with demonstrated resolution on the order of fractions of a nanometer, more than 1000 times better than the optical diffraction limit. The precursor to the AFM, the scanning tunneling microscope, was developed by Gerd Binnig and Heinrich Rohrer in the early 1980s at IBM Research - Zurich, a development that earned them the Nobel Prize for Physics in 1986. Binnig, Quate and Gerber invented the first atomic force microscope (also abbreviated AFM) in 1986. The first commercially available atomic force microscope was introduced in 1989. The AFM is one of the foremost tools for imaging, measuring, and manipulating matter at the nanoscale. The information is gathered by "feeling" the surface with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on (electronic) command enable the very precise scanning. In some variations, electric potentials can also be scanned using conducting cantilevers. In newer, more advanced versions, currents can even be passed through the tip to probe the electrical conductivity or transport of the underlying surface, but this is much more challenging, with very few research groups reporting reliable data.

Fig. : Block diagram of atomic force microscope


Basic principles

Fig. : Electron micrographs of a used AFM cantilever (image widths ~100 micrometers and ~30 micrometers)

The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of curvature on the order of nanometers. When the tip is brought into proximity of a sample surface, forces between the tip and the sample lead to a deflection of the cantilever according to Hooke's law. Depending on the situation, forces that are measured in AFM include mechanical contact force, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic force microscope, MFM), Casimir forces, solvation forces, etc. Along with force, additional quantities may simultaneously be measured through the use of specialized types of probe (see scanning thermal microscopy, scanning joule expansion microscopy, photothermal microspectroscopy, etc.). Typically, the deflection is measured using a laser spot reflected from the top surface of the cantilever into an array of photodiodes. Other methods that are used include optical interferometry, capacitive sensing or piezoresistive AFM cantilevers. These cantilevers are fabricated with piezoresistive elements that act as a strain gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to deflection can be measured, but this method is not as sensitive as laser deflection or interferometry.
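Because the cantilever obeys Hooke's law, the tip-sample force follows directly from the measured deflection, F = k*z. A small illustrative calculation (the spring constant and deflection below are assumed, typical contact-mode values):

# Hooke's law for the AFM cantilever: F = k * z
k = 0.1            # assumed cantilever spring constant, N/m (soft contact-mode lever)
z = 10e-9          # assumed measured deflection, m (10 nm)
F = k * z
print(f"tip-sample force = {F:.1e} N")   # 1.0e-09 N, i.e. about 1 nN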

If the tip were scanned at a constant height, a risk would exist that the tip would collide with the surface, causing damage. Hence, in most cases a feedback mechanism is employed to adjust the tip-to-sample distance to maintain a constant force between the tip and the sample. Traditionally, the sample is mounted on a piezoelectric tube that can move the sample in the z direction for maintaining a constant force, and in the x and y directions for scanning the sample. Alternatively, a 'tripod' configuration of three piezo crystals may be employed, with each responsible for scanning in the x, y and z directions. This eliminates some of the distortion effects seen with a tube scanner. In newer designs, the tip is mounted on a vertical piezo scanner while the sample is scanned in X and Y using another piezo block. The resulting map of the area s = f(x,y) represents the topography of the sample.

The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes are divided into static (also called contact) modes and a variety of dynamic (or non-contact) modes where the cantilever is vibrated.

Imaging modes

The primary modes of operation for an AFM are static mode and dynamic mode. In static mode, the cantilever is "dragged" across the surface of the sample and the contours of the surface are measured directly using the deflection of the cantilever. In the dynamic mode, the cantilever is externally oscillated at or close to its fundamental resonance frequency or a harmonic. The oscillation amplitude, phase and resonance frequency are modified by tip-sample interaction forces. These changes in oscillation with respect to the external reference oscillation provide information about the sample's characteristics.

Contact mode

In the static mode operation, the static tip deflection is used as a feedback signal. Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers are used to boost the deflection signal. However, close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. Thus static mode AFM is almost always done in contact where the overall force is repulsive. Consequently, this technique is typically called "contact mode". In contact mode, the force between the tip and the surface is kept constant during scanning by maintaining a constant deflection.


Non-contact mode

Fig. : AFM - non-contact mode

In this mode, the tip of the cantilever does not contact the sample surface. The cantilever is instead oscillated at a frequency slightly above its resonance frequency, where the amplitude of oscillation is typically a few nanometers (<10 nm). The van der Waals forces, which are strongest from 1 nm to 10 nm above the surface, or any other long-range force that extends above the surface, act to decrease the resonance frequency of the cantilever. This decrease in resonance frequency, combined with the feedback loop system, maintains a constant oscillation amplitude or frequency by adjusting the average tip-to-sample distance. Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to construct a topographic image of the sample surface.

Non-contact mode AFM does not suffer from tip or sample degradation effects that are sometimes observed after taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for measuring soft samples. In the case of rigid samples, contact and non-contact images may look the same. However, if a few monolayers of adsorbed fluid are lying on the surface of a rigid sample, the images may look quite different. An AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in non-contact mode an AFM will oscillate above the adsorbed fluid layer to image both the liquid and surface.


Schemes for dynamic mode operation include frequency modulation and the more common amplitude modulation. In frequency modulation, changes in the oscillation frequency provide information about tip-sample interactions. Frequency can be measured with very high sensitivity and thus the frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers provide stability very close to the surface and, as a result, this technique was the first AFM technique to provide true atomic resolution in ultra-high vacuum conditions.[1]

In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for imaging. Changes in the phase of oscillation can also be used to discriminate between different types of materials on the surface. Amplitude modulation can be operated either in the non-contact or in the intermittent contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation distance between the cantilever tip and the sample surface is modulated.

Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using very stiff cantilevers and small amplitudes in an ultra-high vacuum environment.

Tapping mode

Fig. : Single polymer chains (0.4 nm thick) recorded in a tapping mode under aqueous media with different pH.[2]

In ambient conditions, most samples develop a liquid meniscus layer. Because of this, keeping the probe tip close enough to the sample for short-range forces to become detectable while preventing the tip from sticking to the surface presents a major problem for non-contact dynamic mode in ambient conditions. Dynamic contact mode (also called intermittent contact or tapping mode) was developed to bypass this problem.[3]

In tapping mode, the cantilever is driven to oscillate up and down at near its resonance frequency by a small piezoelectric element mounted in the AFM tip holder, similar to non-contact mode. However, the amplitude of this oscillation is greater than 10 nm, typically 100 to 200 nm. The forces that act on the cantilever when the tip comes close to the surface (van der Waals forces, dipole-dipole interactions, electrostatic forces, etc.) cause the amplitude of the oscillation to decrease as the tip gets closer to the sample. An electronic servo uses the piezoelectric actuator to control the height of the cantilever above the sample. The servo adjusts the height to maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface.

This method of "tapping" lessens the damage done to the surface and the tip compared to the amount done in contact mode. Tapping mode is gentle enough even for the visualization of supported lipid bilayers or adsorbed single polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) under liquid medium. With proper scanning parameters, the conformation of single molecules can remain unchanged for hours.[2]

Advantages and disadvantages

Just like any other tool, an AFM's usefulness has limitations. When determining whether or not analyzing a sample with an AFM is appropriate, there are various advantages and disadvantages that must be considered.

Advantages

AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope which provides a two-dimensional projection or a two-dimensional image of a sample, the AFM provides a three-dimensional surface profile. Additionally, samples viewed by AFM do not require any special treatments (such as metal/carbon coatings) that would irreversibly change or damage the sample. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes can work perfectly well in ambient air or even a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM. It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more recently, in liquid environments. High resolution AFM is comparable in resolution to scanning tunneling microscopy and transmission electron microscopy.

Disadvantages

A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single-scan image size. In one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a maximum height on the order of 10-20 micrometers and a maximum scanning area of about 150×150 micrometers. One method of improving the scanned area size for AFM is to use parallel probes in a fashion similar to that of millipede data storage.


The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as an SEM, requiring several minutes for a typical scan, while an SEM is capable of scanning at near real-time, although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to thermal drift in the image[9][10], making the AFM less suited for measuring accurate distances between topographical features on the image. However, several fast-acting designs[11][12] have been suggested to increase scanning productivity, including what is termed videoAFM (reasonable-quality images are obtained with videoAFM at video rate, faster than the average SEM). To eliminate image distortions induced by thermal drift, several methods have been introduced.[9][10]

AFM images can also be affected by hysteresis of the piezoelectric material[13] and cross-talk between the x, y, z axes that may require software enhancement and filtering. Such filtering could "flatten" out real topographical features. However, newer AFMs utilize closed-loop scanners which practically eliminate these problems. Some AFMs also use separated orthogonal scanners (as opposed to a single tube) which also serve to eliminate part of the cross-talk problems.

As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself. These image artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods.

Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and additional artifacts.

Unit II & III

Crystal

A crystal or crystalline solid is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification. The word crystal is derived from the Ancient Greek word κρύσταλλος (krustallos), meaning "rock-crystal" but also "ice",[1] from κρύος (kruos), "icy cold, frost".[2][3] The word once referred particularly to quartz, or "rock crystal".

Most metals encountered in everyday life are polycrystals. Crystals are often symmetrically intergrown to form crystal twins.

Crystal structure

Fig. : Insulin crystals grown in outer space

Fig. : Halite (sodium chloride), a single large crystal

The process of forming a crystalline structure from a fluid or from materials dissolved in the fluid is often referred to as the crystallization process. In the old example referenced by the root meaning of the word crystal, water being cooled undergoes a phase change from liquid to solid beginning with small ice crystals that grow until they fuse, forming a polycrystalline structure. The physical properties of the ice depend on the size and arrangement of the individual crystals, or grains, and the same may be said of metals solidifying from a molten state.

Which crystal structure the fluid will form depends on the chemistry of the fluid, the conditions under which it is being solidified, and also on the ambient pressure. While the cooling process usually results in the generation of a crystalline material, under certain conditions the fluid may be frozen in a noncrystalline state. In most cases, this involves cooling the fluid so rapidly that atoms cannot travel to their lattice sites before they lose mobility. A noncrystalline material, which has no long-range order, is called an amorphous, vitreous, or glassy material. It is also often referred to as an amorphous solid, although there are distinct differences between crystalline solids and amorphous solids: most notably, the process of forming a glass does not release the latent heat of fusion.

Crystalline structures occur in all classes of materials, with all types of chemical bonds. Almost all metals exist in a polycrystalline state; amorphous or single-crystal metals must be produced synthetically, often with great difficulty. Ionically bonded crystals can form upon solidification of salts, either from a molten fluid or upon crystallization from a solution. Covalently bonded crystals are also very common, notable examples being diamond, silica, and graphite. Polymer materials generally will form crystalline regions, but the lengths of the molecules usually prevent complete crystallization. Weak van der Waals forces can also play a role in a crystal structure; for example, this type of bonding loosely holds together the hexagonal-patterned sheets in graphite.

Most crystalline materials have a variety of crystallographic defects. The types and structures of these defects can have a profound effect on the properties of the materials.

Structure factor (crystallography)

In condensed matter physics and crystallography, the static structure factor (or structure factor for short) is a mathematical description of how a material scatters incident radiation. The structure factor is a particularly useful tool in the interpretation of interference patterns obtained in X-ray, electron and neutron diffraction experiments.

The static structure factor is measured without resolving the energy of scattered photons/electrons/neutrons. Energy-resolved measurements yield the dynamic structure factor.

In crystallography, the cubic (or isometric) crystal system is a crystal system where the unit cell is in the shape of a cube. This is one of the most common and simplest shapes found in crystals and minerals.

There are three main varieties of these crystals, called simple cubic (sc), body-centered cubic (bcc), and face-centered cubic (fcc, also known as cubic close-packed or ccp), plus a number of other variants listed below. Note that although the unit cell in these crystals is conventionally taken to be a cube, the primitive unit cell often is not. This is related to the fact that in most cubic crystal systems, there is more than one atom per cubic unit cell.

Scattering from a crystal

A crystal is a periodic arrangement of atoms in a particular pattern. Each of the atoms may scatter incident radiation such as X-rays, electrons and neutrons. Because of the periodic arrangement of the atoms, the interference of waves scattered from different atoms may cause a distinct pattern of constructive and destructive interference to form. This is the diffraction pattern caused by the crystal.

The structure factor describes the way in which an incident beam is scattered by the atoms of a crystal unit cell, taking into account the different scattering power of the elements through the term fj. Since the atoms are spatially distributed in the unit cell, there will be a difference in phase when considering the scattered amplitude from two atoms. This phase shift is taken into account by the complex exponential term. The atomic form factor, or scattering power, of an element depends on the type of radiation considered. Because electrons interact with matter through different processes than, for example, X-rays do, the atomic form factors for the two cases are not the same.
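For reference, the standard expression implied by the description above (written here in the plain notation used elsewhere in these notes) is

F(hkl) = sum over atoms j of fj * exp[ 2πi (h*xj + k*yj + l*zj) ]

where (xj, yj, zj) are the fractional coordinates of atom j in the unit cell, fj is its atomic form factor, and (hkl) are the Miller indices of the reflection. The measured intensity of a reflection is proportional to |F(hkl)|².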

Miller indices are a notation system in crystallography for planes and directions in crystal (Bravais) lattices.

In particular, a family of lattice planes is determined by three integers h, k, and ℓ, the Miller indices. They are written (hkℓ), and each index denotes a plane orthogonal to a direction (h, k, ℓ) in the basis of the reciprocal lattice vectors. By convention, negative integers are written with a bar over the digit (e.g. 3̄ for −3). The integers are usually written in lowest terms, i.e. their greatest common divisor should be 1. Miller index (100) represents a plane orthogonal to direction h; index (010) represents a plane orthogonal to direction k, and index (001) represents a plane orthogonal to ℓ.
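The convention can be stated as a recipe: take the plane's intercepts with the crystal axes in units of the lattice vectors, take reciprocals, and clear fractions to the smallest integers. A minimal sketch of that recipe (the helper function below is illustrative, not from any library; requires Python 3.9+ for math.lcm):

from fractions import Fraction
from math import gcd, lcm

def miller_indices(intercepts):
    # Intercepts in units of the lattice vectors; None means the plane is
    # parallel to that axis (intercept at infinity, reciprocal 0).
    recips = [Fraction(0) if i is None else 1 / Fraction(i) for i in intercepts]
    m = lcm(*(r.denominator for r in recips))   # clear fractions
    ints = [int(r * m) for r in recips]
    g = gcd(*ints) or 1                         # reduce to lowest terms
    return tuple(i // g for i in ints)

print(miller_indices([1, 2, 3]))        # (6, 3, 2)
print(miller_indices([1, None, None]))  # (1, 0, 0): parallel to the 2nd and 3rd axes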

Structure factors for specific lattice types

Face-centered cubic (FCC)


In the case of a monoatomic FCC crystal, the atoms in the basis are at the origin, with fractional coordinates (0,0,0), and at the three face centers, with coordinates (1/2,1/2,0), (0,1/2,1/2) and (1/2,0,1/2). An argument similar to the one above gives the expression

F(hkl) = f [ 1 + exp(-iπ(h+k)) + exp(-iπ(k+l)) + exp(-iπ(h+l)) ]

with the result that F(hkl) = 4f when h, k, l are all even or all odd (unmixed), and F(hkl) = 0 when h, k, l are of mixed parity.

The most intense diffraction peak from a material that crystallizes in the FCC structure is typically the (111). Films of FCC materials like gold tend to grow in a (111) orientation with a triangular surface symmetry.
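The all-even/all-odd selection rule can be checked numerically. A minimal sketch for the monoatomic FCC basis above (the form factor is set to 1 purely for illustration):

import cmath

# Structure factor F(hkl) for a monoatomic FCC basis, form factor f = 1
basis = [(0, 0, 0), (0.5, 0.5, 0), (0, 0.5, 0.5), (0.5, 0, 0.5)]

def F_fcc(h, k, l):
    return sum(cmath.exp(2j * cmath.pi * (h*x + k*y + l*z)) for x, y, z in basis)

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (1, 0, 0), (1, 1, 0)]:
    F = F_fcc(*hkl)
    print(hkl, "allowed" if abs(F) > 1e-9 else "forbidden")
# (111), (200), (220) are allowed with |F| = 4; mixed-parity (100), (110) vanish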


Dislocation

In materials science, a dislocation is a crystallographic defect, or irregularity, within a crystal structure. The presence of dislocations strongly influences many of the properties of materials. The theory was originally developed by Vito Volterra in 1905. Some types of dislocations can be visualized as being caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the surrounding planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. The analogy with a stack of paper is apt: if half a sheet of paper is inserted into a stack of paper, the defect in the stack is only noticeable at the edge of the half sheet.

There are two primary types: edge dislocations and screw dislocations. Mixed dislocations are intermediate between these.

Figure 1: An edge-dislocation (b = Burgers vector)

Mathematically, dislocations are a type of topological defect, sometimes called a soliton. The mathematical theory explains why dislocations behave as stable particles: they can be moved about, but maintain their identity as they move. Two dislocations of opposite orientation, when brought together, can cancel each other (this is the process of annihilation), but a single dislocation typically cannot "disappear" on its own.

Dislocation geometry

Figure A Crystal lattice showing atoms and lattice planes


Two main types of dislocation exist: edge and screw. Dislocations found in real materials typically are mixed, meaning that they have characteristics of both.

A crystalline material consists of a regular array of atoms, arranged into lattice planes (imagine stacking oranges in a grocer's shop: each tray of oranges is a lattice plane). One approach is to begin by considering a 3-D representation of a perfect crystal lattice, with the atoms represented by spheres. The viewer may then simplify the representation by visualising planes of atoms instead of the atoms themselves (Figure A).

Figure B Schematic diagram (lattice planes) showing an edge dislocation. Burgers vector in black, dislocation line in blue.

Edge dislocations

An edge dislocation is a defect where an extra half-plane of atoms is introduced midway through the crystal, distorting nearby planes of atoms. When enough force is applied from one side of the crystal structure, this extra plane passes through planes of atoms, breaking and joining bonds with them, until it reaches the grain boundary. A simple schematic diagram of such atomic planes can be used to illustrate lattice defects such as dislocations (Figure B represents the "extra half-plane" concept of an edge-type dislocation). The dislocation has two properties: a line direction, which is the direction running along the bottom of the extra half-plane, and the Burgers vector, which describes the magnitude and direction of distortion to the lattice. In an edge dislocation, the Burgers vector is perpendicular to the line direction.

The stresses caused by an edge dislocation are complex due to its inherent asymmetry.

Screw dislocations


Fig.: Top right: edge dislocation. Bottom right: screw dislocation.

Figure C Schematic diagram (lattice planes) showing a screw dislocation.

A screw dislocation is much harder to visualize. Imagine cutting a crystal along a plane and slipping one half across the other by a lattice vector; the halves will fit back together without leaving a defect. If the cut only goes partway through the crystal and one half is then slipped, the boundary of the cut is a screw dislocation. It comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes in the crystal lattice (Figure C). Perhaps the closest analogy is a spiral-sliced ham. In pure screw dislocations, the Burgers vector is parallel to the line direction.[citation needed]

Despite the difficulty in visualization, the stresses caused by a screw dislocation are less complex than those of an edge dislocation. These stresses need only one equation, as symmetry allows only one radial coordinate to be used:[1]

σ(θz) = μ b / (2πr)

where μ is the shear modulus of the material, b is the Burgers vector, and r is a radial coordinate. This equation describes a long cylinder of stress radiating outward from the dislocation line and decreasing with distance. Note that this simple model yields an infinite value at the core of the dislocation, r = 0, and so it is only valid for stresses outside of the core of the dislocation.[1]

Mixed dislocations

In many materials, dislocations are found where the line direction and Burgers vector are neither perpendicular nor parallel and these dislocations are called mixed dislocations, consisting of both screw and edge character.

Inter-planar spacing distance for a cubic crystal

Consider a cubic crystal having a lattice parameter (the length of the cube edge) of 0.4 nm. Calculate the interplanar spacing (that is, the distance between neighboring planes that are parallel to one another) for the (110) planes.

The formula for interplanar spacing is:

d = a / (sqrt (h^2 + k^2 + l^2))

d: interplanar spacing
a: lattice parameter
h, k, l: plane indices

A crystal is made up of a large number of parallel, equidistant planes passing through the lattice points, and these sets of planes can be chosen in various ways. The perpendicular distance between adjacent planes is called the interplanar spacing. A set of planes is represented by certain numbers called Miller indices, and the distance between parallel planes whose Miller indices are the same is called the interplanar distance 'd'.
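Carrying the numbers of the worked example above through the formula gives d = 0.4 / sqrt(1² + 1² + 0²) = 0.4 / 1.414, i.e. about 0.283 nm. A one-line check:

import math

a = 0.4                      # lattice parameter, nm (from the example above)
h, k, l = 1, 1, 0            # the (110) planes
d = a / math.sqrt(h**2 + k**2 + l**2)
print(f"d(110) = {d:.3f} nm")   # 0.283 nm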

Larson-Miller parameter


The Larson-Miller relation, also widely known as the Larson-Miller Parameter and often abbreviated LMP, is a parametric relation used to extrapolate experimental data on creep and rupture life of engineering materials.

LMP = T * (C + log(t))

where the quantity LMP is known as the Larson-Miller parameter. Using the assumption that activation energy is independent of applied stress, the equation can be used to relate the difference in rupture life to differences in temperature for a given stress. The material constant C is typically found to be in the range of 20 to 22 for metals.

The Larson-Miller model is used for experimental tests so that results at certain temperatures and stresses can predict rupture lives of time spans that would be impractical to reproduce in the laboratory.

The equation was developed during the 1950s while Miller and Larson were employed by GE performing research on turbine blade life.

The Larson-Miller parameter is a means of predicting the lifetime of material vs. time and temperature using a correlative approach based on the Arrhenius rate equation. The value of the parameter is usually expressed as LMP=T(C + log t) where C is a material specific constant often approximated as 20, t is the time in hours and T is the temperature in Kelvin.

Creep-stress rupture data for high-temperature creep-resistant alloys are often plotted as log stress to rupture versus a combination of log time to rupture and temperature. One of the most common time–temperature parameters used to present this kind of data is the Larson-Miller (L.M.) parameter, which in generalized form is

P(L.M.) = T [ log tr + C ]

T = temperature, K or °R
tr = stress-rupture time, h
C = constant, usually of order 20

According to the L.M. parameter, at a given stress level the log time to stress rupture plus a constant of the order of 20 multiplied by the temperature in kelvins or degrees Rankine remains constant for a given material.
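As a numeric illustration of how the parameter is used in practice (the temperatures and the measured rupture life below are assumed example values; C = 20 and the logarithm is taken to base 10):

import math

# Larson-Miller extrapolation at a fixed stress level; example values assumed
C = 20
T1 = 800 + 273.15             # laboratory test temperature, K
t1 = 1000.0                   # measured rupture life at T1, hours

lmp = T1 * (C + math.log10(t1))     # parameter for this stress level

T2 = 750 + 273.15             # lower service temperature, K
t2 = 10 ** (lmp / T2 - C)     # predicted rupture life at T2, same stress
print(f"LMP = {lmp:.0f}, predicted life at 750 C = {t2:.0f} h")   # ~13,300 h

A modest 50 °C reduction in temperature thus extrapolates to more than a tenfold increase in rupture life, which is exactly the kind of prediction that would be impractical to verify directly in the laboratory.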


Compton scattering

Compton scattering is a type of scattering that X-rays and gamma rays undergo in matter. The inelastic scattering of photons in matter results in a decrease in energy (increase in wavelength) of an X-ray or gamma ray photon, called the Compton effect. Part of the energy of the X/gamma ray is transferred to a scattering electron, which recoils and is ejected from its atom (which becomes ionized), and the rest of the energy is taken by the scattered, "degraded" photon.

Fig.: A photon of wavelength λ comes in from the left, collides with a target at rest, and a new photon of wavelength λ′ emerges at an angle θ.
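The shift between λ and λ′ in the figure is given by the standard Compton formula, λ′ − λ = (h / (me c)) (1 − cos θ), where h/(me c) ≈ 2.43 pm is the Compton wavelength of the electron. A quick numeric check:

import math

# Compton shift: lambda' - lambda = (h / (m_e * c)) * (1 - cos(theta))
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837e-31      # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

compton_wl = h / (m_e * c)                 # ~2.43e-12 m (2.43 pm)
theta = math.radians(90)                   # example scattering angle
shift = compton_wl * (1 - math.cos(theta))
print(f"shift at 90 degrees = {shift * 1e12:.2f} pm")   # 2.43 pm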

X-Ray

Electromagnetic radiation of short wavelength produced when high-speed electrons strike a solid target

An X-ray is a form of electromagnetic radiation with a wavelength approximately in the range of 5 pm to 10 nanometers (corresponding to frequencies in the range 30 PHz to 60 EHz). X-rays are primarily used for diagnostic medical imaging and crystallography. X-rays are a form of ionizing radiation and as such can be dangerous. X-rays with a wavelength longer than approximately 0.1 nm are called soft X-rays; at shorter wavelengths, they are called hard X-rays. Hard X-rays overlap the range of long-wavelength (low-energy) gamma rays; however, the distinction between the two terms depends on the source of the radiation, not its wavelength: X-ray photons are generated by energetic electron processes, gamma rays by transitions within atomic nuclei.

Electromagnetic radiation of extremely short wavelength (100 nanometres to 0.001 nanometre) produced by the deceleration of charged particles or the transitions of electrons in atoms. X-rays travel at the speed of light and exhibit phenomena associated with waves, but experiments indicate that they can also behave like particles (see wave-particle duality). On the electromagnetic spectrum, they lie between gamma rays and ultraviolet radiation. They were discovered in 1895 by Wilhelm Conrad Röntgen, who named them X-rays for their unknown nature. They are used in medicine to diagnose bone fractures, dental cavities, and cancer; to locate foreign objects in the body; and to stop the spread of malignant tumours. In industry, they are used to analyze and detect flaws in structures.

X-ray generation

X-ray generation can occur whenever charged particles pass within certain distances of each other without being in fixed orbits; the accelerations (or decelerations) may give off X-radiation, or X-rays. X-radiation can be produced by a variety of phenomena. For example, when high-energy X-rays, gamma rays, electrons, or protons bombard materials, the excited atoms within emit characteristic "secondary" (or fluorescent) X-rays.

X-ray photons are produced by an electron beam that is accelerated to a very high speed and strikes a target. The electrons that make up the beam are emitted from a heated cathode filament. The electrons are then focused and accelerated by an electrical field towards an angled anode target. The point where the electron beam strikes the target is called the focal spot. Most of the kinetic energy contained in the electron beam is converted to heat, but around 1% of the energy is converted into X-ray photons; the excess heat is dissipated via a heat sink.[1] At the focal spot, X-ray photons are emitted in all directions from the target surface, the highest intensity being around 60° to 90° from the beam due to the angle of the anode target with respect to the approaching electron beam. There is a small round window in the X-ray tube directly above the angled target. This window allows the X-rays to exit the tube with little attenuation while maintaining the vacuum seal required for X-ray tube operation.

X-ray machines work by applying controlled voltage and current to the X-ray tube, which results in a beam of X-rays. The beam is projected onto matter. Some of the X-ray beam will pass through the object, while some is absorbed. The resulting pattern of radiation is then ultimately detected by a detection medium such as rare earth screens (which surround photographic film), semiconductor detectors, or X-ray image intensifiers.


X-ray tube function

Spectrum of the X-rays emitted by an X-ray tube with a rhodium target, operated at 60 kV. The smooth, continuous curve is due to bremsstrahlung, and the spikes are characteristic K lines for rhodium atoms.

As with any vacuum tube, there is a cathode, which emits electrons into the vacuum and an anode to collect the electrons, thus establishing a flow of electrical current, known as the beam, through the tube. A high voltage power source, for example 30 to 150 kilovolts (kV), is connected across cathode and anode to accelerate the electrons. The X-ray spectrum depends on the anode material and the accelerating voltage.[1]
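One direct consequence of the accelerating voltage is the short-wavelength cutoff of the continuous (bremsstrahlung) spectrum, the Duane-Hunt limit: no emitted photon can carry more energy than one electron brings to the target, so lambda_min = h*c/(e*V). A quick check for the 60 kV spectrum described above:

# Duane-Hunt limit: shortest bremsstrahlung wavelength at tube voltage V
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # elementary charge, C

V = 60e3                # tube voltage, volts (as in the rhodium spectrum above)
lambda_min = h * c / (e * V)
print(f"lambda_min = {lambda_min * 1e9:.4f} nm")   # ~0.0207 nm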

In many applications, the current flow (typically in the range 1 mA to 1 A) can be pulsed on for between about 1 ms and 1 s. This enables consistent doses of X-rays, and the taking of snapshots of motion. Until the late 1980s, X-ray generators were merely high-voltage, AC-to-DC variable power supplies. In the late 1980s a different method of control emerged, called high-speed switching. This followed the electronics technology of switching power supplies (aka switch-mode power supplies), and allowed for more accurate control of the X-ray unit, higher-quality results, and reduced X-ray exposures.

Electrons from the cathode collide with the anode material, usually tungsten, molybdenum or copper, and accelerate other electrons, ions and nuclei within the anode material. About 1% of the energy generated is emitted/radiated, usually perpendicular to the path of the electron beam, as X-rays. The rest of the energy is released as heat. Over time, tungsten will be deposited from the target onto the interior surface of the tube, including the glass surface. This will slowly darken the tube and was thought to degrade the quality of the X-ray beam, but research has suggested there is no effect.[2] Eventually, the tungsten deposit may become sufficiently conductive that, at high enough voltages, arcing occurs. The arc will jump from the cathode to the tungsten deposit, and then to the anode. This arcing causes an effect called "crazing" on the interior glass of the X-ray window. As time goes on, the tube becomes unstable even at lower voltages and must be replaced. At this point, the tube assembly (also called the "tube head") is removed from the X-ray system and replaced with a new tube assembly. The old tube assembly is shipped to a company that reloads it with a new X-ray tube.

The X-ray photon-generating effect is generally called the bremsstrahlung effect, a compound of the German bremsen ("to brake") and Strahlung ("radiation").

The range of photon energies emitted by the system can be adjusted by changing the applied voltage and installing aluminum filters of varying thicknesses. Aluminum filters are installed in the path of the X-ray beam to remove "soft" (non-penetrating) radiation. The number of emitted X-ray photons, or dose, is adjusted by controlling the current flow and exposure time.

Simply put, the high voltage controls X-ray penetration, and thus the contrast of the image. The tube current and exposure time affect the dose and therefore the darkness of the image.

Field electron emission for X-ray generation

Oxford Instruments have worked together with NASA on carbon nanotube (CNT)-based X-ray sources that employ field emission as the electron source, rather than thermionic emission, which has much lower power efficiency.[13] The CNT field emission cathodes have onset voltages about an order of magnitude lower than devices that rely on diamond or diamond-like carbon films.[13]

Field emission (FE) (also known as electron field emission) is an emission of electrons induced by external electromagnetic fields. Field emission can happen from solid and liquid surfaces, or individual atoms into vacuum or open air, or result in promotion of electrons from the valence to conduction band of semiconductors. The terminology is historical because related phenomena of surface photoeffect, thermionic emission or Richardson-Dushman effect and "cold electronic emission", i.e. the emission of electrons in strong static (or quasi-static) electric fields, were discovered and studied independently from 1880s to 1930s. When field emission is used without qualifiers it typically means the "cold emission."

Field emission in pure metals occurs in high electric fields: the gradients are typically higher than 1000 volts per micron and strongly dependent upon the work function. Electron sources based on field emission have a number of applications, but it is most commonly an undesirable primary source of vacuum breakdown and electrical discharge phenomena, which engineers work to prevent. Examples of applications for surface field emission include the construction of bright electron sources for high-resolution electron microscopes or the discharge of spacecraft from induced charges. Devices which eliminate induced charges are termed charge-neutralizers.

Field emission was explained by quantum tunneling of electrons in the late 1920s. This was one of the triumphs of the nascent quantum mechanics. The theory of field emission from bulk metals was proposed by Fowler and Nordheim.[1] A family of approximate equations, "Fowler–Nordheim equations", is named after them. Strictly, Fowler-Nordheim equations apply only to field emission from bulk metals and (with suitable modification) to other bulk crystalline solids, but they are often used – as a rough approximation – to describe field emission from other materials.

In some respects, field electron emission is a paradigm example of what physicists mean by tunneling. Unfortunately, it is also a paradigm example of the intense mathematical difficulties that can arise. Simple solvable models of the tunneling barrier lead to equations (including the original 1928 Fowler-Nordheim-type equation) whose predictions of emission current density are too low by a factor of 100 or more. If one inserts a more realistic barrier model into the simplest form of the Schrödinger equation, then an awkward mathematical problem arises over the resulting differential equation: it is known to be mathematically impossible in principle to solve this equation exactly in terms of the usual functions of mathematical physics, or in any simple way. To get even an approximate solution, it is necessary to use special approximate methods known in physics as "semi-classical" or "quasi-classical" methods. Worse, a mathematical error was made in the original application of these methods to field emission, and even the corrected theory that was put in place in the 1950s remained formally incomplete until very recently.[citation needed] A consequence of these (and other) difficulties has been a heritage of misunderstanding and disinformation that still persists in some current field emission research literature. This account tries to present a basic description of field emission "for the 21st century and beyond" that is free from these confusions.

Synchrotron X-ray generation

Synchrotron radiation is electromagnetic radiation, similar to cyclotron radiation, but generated by the acceleration of ultrarelativistic (i.e., moving near the speed of light) charged particles through magnetic fields. This may be achieved artificially in synchrotrons or storage rings, or naturally by fast electrons moving through magnetic fields in space. The radiation produced may range over the entire electromagnetic spectrum, from radio waves to infrared light, visible light, ultraviolet light, X-rays, and gamma rays. It is distinguished by its characteristic polarization and spectrum.

The radiation is projected at a tangent to the electron storage ring and captured by beamlines. These beamlines may originate at bending magnets, which mark the corners of the storage ring; or insertion devices, which are located in the straight sections of the storage ring. The spectrum and energy of X-rays differ between the two types. The beamline includes X-ray optical devices which control the bandwidth, photon flux, beam dimensions, focus, and collimation of the rays.

Fig.: First synchrotron function F(x), where x is usually a ratio of the frequency over a critical frequency. The maximum is at 1/3 of the critical frequency, where the bulk of emission takes place.

For X-ray generation or at lower frequencies, x is usually a ratio of frequencies, that is, the frequency over a critical frequency (critical frequency is the frequency at which most of the synchrotron radiation is radiated). This is needed when calculating the spectra for different types of synchrotron emission. It takes a spectrum of electrons (or any charged particle) generated by a separate process (such as a power law distribution of electrons and positrons from a constant injection spectrum) and converts this to the spectrum of photons generated by the input electrons/positrons.
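The spectral shape just described can be explored numerically. The sketch below is an illustration added to these notes, not part of the source text; it assumes NumPy and SciPy are available. It evaluates the first synchrotron function F(x) = x ∫ₓ^∞ K₅⁄₃(t) dt and locates its maximum, which falls near x ≈ 0.29, close to the "1/3 of the critical frequency" quoted above.

```python
# Numerical sketch of the first synchrotron function
# F(x) = x * integral_x^infinity K_{5/3}(t) dt,  x = frequency / critical frequency.
import numpy as np
from scipy.special import kv        # modified Bessel function of the second kind
from scipy.integrate import quad    # adaptive numerical integration

def synchrotron_F(x: float) -> float:
    """First synchrotron function F(x)."""
    integral, _ = quad(lambda t: kv(5.0 / 3.0, t), x, np.inf)
    return x * integral

# Scan x = nu/nu_c to find where the emitted power peaks.
xs = np.linspace(0.01, 1.0, 200)
Fs = [synchrotron_F(x) for x in xs]
print(f"F(x) peaks near x = {xs[int(np.argmax(Fs))]:.2f}")   # ~0.29
```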

The effective temperature, or black body temperature, of the Sun (5777 K) is the temperature a black body of the same size must have to yield the same total emissive power.

A spectral distribution comparison between blackbody radiation and synchrotron radiation for the same critical wavelength indicates that the synchrotron spectrum has a steeper dropoff at higher frequencies (shorter wavelengths, higher energy) and a shallower dropoff with increasing wavelength (lower frequencies, lower energy).[14]

Electromagnetic radiation is produced in a continuum whenever a charged particle loses energy by passing through a magnetic field or passes another charged particle, as by an electron passing a proton. When the electron assumes an energy level around the proton or changes energy levels the spectrum becomes blackbody radiation (Planckian distribution).

Bragg’s Law

Definition:

When a beam of X-rays (wavelength λ) strikes a crystal surface in which the layers of atoms or ions are separated by a distance d, the maximum intensity of the reflected ray occurs when sinθ = nλ/2d, where θ (known as the Bragg angle) is the complement of the angle of incidence and n is an integer. The law enables the structure of many crystals to be determined. It was discovered in 1912 by Sir William Lawrence Bragg.

Significance of Bragg’s law:

Bragg's law was first used to explain the interference pattern of X-rays scattered by crystals; since then, diffraction has been developed to study the structure of all states of matter with any beam, e.g., ions, electrons, neutrons, and protons, with a wavelength similar to the distance between the atomic or molecular structures of interest.

The condition for maximum intensity contained in Bragg's law above allows us to calculate details of the crystal structure.

Derivation of Bragg’s law:

Consider figure 5. The maximum intensity of diffraction is achieved when both waves, reflected by the electrons in the atoms, are in phase. The rays of the incident beam are always in phase and parallel up to the point at which the top beam strikes the top layer at point z.


Figure 5 - Deriving Bragg's Law using the reflection geometry and applying trigonometry. The lower beam must travel the extra distance (AB + BC) to continue travelling parallel and adjacent to the top beam.

The second beam continues to the next layer where it is scattered by atom B. The second beam must travel the extra distance AB + BC if the two beams are to continue travelling adjacent and parallel. This extra distance must be an integral (n) multiple of the wavelength (λ) for the phases of the two beams to be the same:

nλ = AB + BC (2)

Recognising d as the hypotenuse of the right triangle ABz, we can use trigonometry to relate d and θ to the distance (AB + BC). The distance AB is opposite θ, so:

AB = d sinθ (3)

Because AB = BC, equation 2 becomes:

nλ = 2AB (4)

Substituting equation 3 in equation 4 we have,

nλ = 2d sinθ (1)

and Bragg's Law has been derived. The location of the surface does not change the derivation of Bragg's Law.
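A quick numerical companion to this derivation (a sketch added to these notes; the Cu Kα wavelength and the d-spacing are assumed example values, not taken from the text):

```python
# Solve n*lambda = 2*d*sin(theta) for the Bragg angles of successive orders.
import math

def bragg_angles(d_nm: float, wavelength_nm: float, n_max: int = 5):
    """Return (order n, Bragg angle theta in degrees) for each order that exists."""
    angles = []
    for n in range(1, n_max + 1):
        s = n * wavelength_nm / (2.0 * d_nm)   # sin(theta) from Bragg's law
        if s <= 1.0:                           # beyond this no reflection occurs
            angles.append((n, math.degrees(math.asin(s))))
    return angles

# Example: Cu K-alpha radiation (0.15406 nm) on planes with d = 0.2 nm.
for n, theta in bragg_angles(d_nm=0.2, wavelength_nm=0.15406):
    print(f"n = {n}: theta = {theta:.2f} deg")
```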


Diffraction

In X-ray diffraction a beam strikes a crystal and diffracts into many specific directions. The angles and intensities of the diffracted beams reveal the three-dimensional density of electrons within the crystal. X-rays produce a diffraction pattern because their wavelength is typically of the same order of magnitude (0.1-10.0 nm) as the spacing between the atomic planes in the crystal.

Fig.: Symmetrically spaced atoms cause re-radiated X-rays to reinforce each other in the specific directions where their path-length difference, 2d sin θ, equals an integer multiple of the wavelength λ

Each atom re-radiates a small portion of an incoming beam's intensity as a spherical wave. If the atoms are arranged symmetrically (as is found in a crystal) with a separation d, these spherical waves will be in phase (add constructively) only in directions where their path-length difference 2d sin θ is equal to an integer multiple of the wavelength λ. The incoming beam therefore appears to have been deflected by an angle 2θ, producing a reflection spot in the diffraction pattern.

X-ray diffraction is a form of elastic scattering; the outgoing X-rays have the same energy, and thus same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to an inner-shell electron exciting it to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such electron excitation, but not in determining the distribution of atoms within the crystal.

Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle-antiparticle pairs.

Similar diffraction patterns can be produced by scattering electrons or neutrons. X-rays are usually not diffracted from atomic nuclei.

Diffraction can occur whenever Bragg's law is satisfied. With monochromatic radiation, an arbitrary setting of a single crystal in an x-ray beam will not generally produce any diffracted beams. There would therefore be very little information in a single crystal diffraction pattern from using monochromatic radiation.

This problem can be overcome by continuously varying λ or θ over a range of values, to satisfy Bragg's law. Practically this is done by:

• using a range of x-ray wavelengths (i.e. white radiation), or

• rotating the crystal, or using a powder or polycrystalline specimen.

Different combinations of x-ray wavelength range and specimen type give rise to the different techniques used in x-ray diffraction.

http://www.matter.org.uk/diffraction/x-ray/x_ray_methods.htm

Single-crystal X-ray diffraction

The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated.[89] Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an Ångström and to within a few tenths of a degree, respectively. The atoms in a crystal are not static, but oscillate about their mean positions, usually by less than a few tenths of an Ångström. X-ray crystallography allows measuring the size of these oscillations.

Procedure

The technique of single-crystal X-ray crystallography has three basic steps. The first — and often most difficult — step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.

In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.

In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement — now called a crystal structure — is usually stored in a public database.

Limitations

As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-ray crystallography—"small-molecule" and "macromolecular" crystallography—are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses with hundreds of thousands of atoms.


Fig. : Workflow for solving the structure of a molecule by X-ray crystallography.

Rotating Crystal method:

In the rotating crystal method, a single crystal is mounted with an axis normal to a monochromatic x-ray beam. A cylindrical film is placed around it and the crystal is rotated about the chosen axis. 

As the crystal rotates, sets of lattice planes will at some point make the correct Bragg angle for  the monochromatic incident beam, and at that point a diffracted beam will be formed.  

The reflected beams are located on the surface of imaginary cones. When the film is laid out flat, the diffraction spots lie on horizontal lines.


The chief use of the rotating crystal method is in the determination of unknown crystal structures.


http://www.matter.org.uk/diffraction/x-ray/rotating_crystal_method.htm

Laue method:

The Laue method is mainly used to determine the orientation of large single crystals. White radiation is reflected from, or transmitted through, a fixed crystal.

The diffracted beams form arrays of spots that lie on curves on the film. The Bragg angle is fixed for every set of planes in the crystal. Each set of planes picks out and diffracts the particular wavelength from the white radiation that satisfies the Bragg law for the values of d and θ involved. Each curve therefore corresponds to a different wavelength. The spots lying on any one curve are reflections from planes belonging to one zone. Laue reflections from planes of the same zone all lie on the surface of an imaginary cone whose axis is the zone axis.

Experimental

There are two practical variants of the Laue method, the back-reflection and the transmission Laue method. You can study these below:

Back-reflection Laue

In the back-reflection method, the film is placed between the x-ray source and the crystal. The beams which are diffracted in a backward direction are recorded.

One side of the cone of Laue reflections is defined by the transmitted beam. The film intersects the cone, with the diffraction spots generally lying on a hyperbola.


Transmission Laue

In the transmission Laue method, the film is placed behind the crystal to record beams which are transmitted through the crystal.

One side of the cone of Laue reflections is defined by the transmitted beam. The film intersects the cone, with the diffraction spots generally lying on an ellipse.

Crystal orientation is determined from the position of the spots. Each spot can be indexed, i.e. attributed to a particular plane, using special charts. The Greninger chart is used for back-reflection patterns and the Leonhardt chart for transmission patterns.

The Laue technique can also be used to assess crystal perfection from the size and shape of the spots. If the crystal has been bent or twisted in any way, the spots become distorted and smeared out.

Powder method:

The powder method is used to determine the value of the lattice parameters accurately. Lattice parameters are the magnitudes of the unit vectors a, b and c which define the unit cell for the crystal.

If a monochromatic x-ray beam is directed at a single crystal, then only one or two diffracted beams may result.


If the sample consists of some tens of randomly orientated single crystals, the diffracted beams are seen to lie on the surface of several cones. The cones may emerge in all directions, forwards and backwards.  

A sample of some hundreds of crystals (i.e. a powdered sample) shows that the diffracted beams form continuous cones. A circle of film is used to record the diffraction pattern as shown. Each cone intersects the film giving diffraction lines. The lines are seen as arcs on the film.

For every set of crystal planes, by chance, one or more crystals will be in the correct orientation to give the correct Bragg angle to satisfy Bragg's equation. Every crystal plane is thus capable of diffraction. Each diffraction line is made up of a large number of small spots, each from a separate crystal. Each spot is so small as to give the appearance of a continuous line. If the powder is not ground finely enough, the diffraction lines appear speckled.

This arrangement is achieved practically in the Debye-Scherrer camera, illustrated in figure 6.


Debye-Scherrer camera:

A very small amount of powdered material is sealed into a fine capillary tube made from glass that does not diffract x-rays. The specimen is placed in the Debye Scherrer camera and is accurately aligned to be in the centre of the camera. X-rays enter the camera through a collimator.

The powder diffracts the x-rays in accordance with Bragg's law to produce cones of diffracted beams. These cones intersect a strip of photographic film located in the cylindrical camera to produce a characteristic set of arcs on the film.


http://www.matter.org.uk/diffraction/x-ray/experimental_powder_diffraction.htm

Figure 6 – A photograph of a Debye-Scherrer camera with the front cover removed. The specimen is placed on the sample turntable. The knobs on the top are for positioning and fixing the film.


Powder diffraction film: When the film is removed from the camera, flattened and processed, it shows the diffraction lines and the holes for the incident and transmitted beams.

The x-ray beam always contains two components, Kα and Kβ, which causes the highest-angle back-reflected arcs to be doubled. From this doubling it is always clear which hole in the film is for the transmitted beam and which is for the incident beam.

Indexing a powder pattern:

We shall now consider the powder patterns from a sample crystal. The sample is known to have a cubic structure, but we don't know which one.

We remove the film strip from the Debye camera after exposure, then develop and fix it. From the strip of film we make measurements of the position of each diffraction line. From the results it is possible to associate the sample with a particular type of cubic structure and also to determine a value for its lattice parameter.

• When the film is laid flat, S1 can be measured. This is the distance along the film, from a diffraction line, to the centre of the hole for the transmitted direct beam.

• For back reflections, i.e. where 2θ > 90°, you can measure S2 as the distance from the beam entry point.


• The distance S1 corresponds to a diffraction angle of 2θ. The angle between the diffracted and the transmitted beams is always 2θ. We know that the distance between the holes in the film, W, corresponds to a diffraction angle of θ = 90°. So we can find θ from:

θ/90° = S1/W, or θ = 90° × S1/W

• We know Bragg's Law: nλ = 2d sinθ

and the equation for interplanar spacing, d, for cubic crystals is given by:

d = a/√(h² + k² + l²), where a is the lattice parameter.

Combining this with Bragg's law gives:

sin²θ = (λ²/4a²)(h² + k² + l²)

• From the measurements of each arc we can now generate a table of S1, θ and sin²θ.

• If all the diffraction lines are considered, then the experimental values of sin²θ should form a pattern related to the values of h, k and l for the structure.

• We now multiply the values of sin²θ by some constant value to give nearly integer values for all the h² + k² + l² values. Integer values are then assigned.


• The integer values of h² + k² + l² are then equated with their hkl values to index each arc, using standard tables of allowed reflections.

• For some structures e.g. bcc, fcc, not all planes reflect, so some of the arcs may be missing.

• It is then possible to identify certain structures; in this case fcc (the reflecting planes have hkl values that are all even or all odd).

• For each line we can also calculate a value for a, the lattice parameter. For greater accuracy the value is averaged over all the lines; a minimal worked example follows.
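The sketch below (added to these notes) works through this indexing procedure in Python. The S1 readings are hypothetical numbers constructed to mimic an fcc pattern, and the hole spacing W and the Cu Kα wavelength are assumed values.

```python
import math

WAVELENGTH = 0.15406               # nm, Cu K-alpha (assumed)
W = 90.0                           # mm between the film holes (hypothetical camera)
S1 = [19.5, 22.7, 33.0, 39.7]      # mm, measured line positions (hypothetical)

thetas = [90.0 * s / W for s in S1]                      # theta = 90 * S1 / W, degrees
sin2 = [math.sin(math.radians(t)) ** 2 for t in thetas]  # sin^2(theta) per line

# Scale sin^2(theta) so the first line maps onto the first allowed N = h^2+k^2+l^2.
# For fcc (hkl all odd or all even) the sequence is 3, 4, 8, 11, 12, ...
scale = 3.0 / sin2[0]
N = [round(scale * s2) for s2 in sin2]                   # -> [3, 4, 8, 11]

# From sin^2(theta) = (lambda^2 / 4a^2) * N, each line gives a lattice parameter.
a = [WAVELENGTH * math.sqrt(n) / (2.0 * math.sin(math.radians(t)))
     for n, t in zip(N, thetas)]
print("indexed N =", N)
print(f"average a = {sum(a) / len(a):.4f} nm")           # ~0.40 nm here
```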

Powder method of XRD : APPLICATIONS

X-ray powder diffraction has opened up new avenues in the studies of structures. It has a number of applications.

Qualitative analysis: From the pattern, the d-spacings are recorded and the relative intensities of the 10 strongest lines are measured and compared with the patterns of known compounds. This comparison is done with the help of a powder diffraction file that contains the patterns of standard compounds, divided into subdivisions: minerals, inorganic, organic. A toy version of this comparison step is sketched below.
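In this sketch the two-entry "database" is hypothetical and merely stands in for the real powder diffraction file; the matching rule (count measured lines that fall within a tolerance of a stored line) is a deliberately simplified illustration.

```python
# Match measured d-spacings of the strongest lines against reference patterns.
REFERENCE = {                      # phase -> d-spacings (nm) of its strong lines
    "NaCl": [0.282, 0.199, 0.163],
    "Si":   [0.314, 0.192, 0.164],
}

def best_match(measured, tol=0.003):
    """Return the phase whose strong lines match the most measured lines."""
    def score(ref):
        return sum(any(abs(d - r) < tol for r in ref) for d in measured)
    return max(REFERENCE, key=lambda phase: score(REFERENCE[phase]))

print(best_match([0.281, 0.200, 0.163]))     # -> NaCl
```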

Quantitative Analysis: For a two-component mixture the relative concentration of each of the components can be obtained by measuring the relative intensities of strong non-overlapping lines belonging to each of the two components.


Structure of alloys: An alloy is a mixture of two or more elements. If the composition is uniform it produces a typical powder diffraction pattern. If one of the components precipitates, it produces separate lines on the powder pattern corresponding to the component.

Stress Determination in metals: If there is a stress in a metal then the angle of the diffraction cone changes because of a change in the d-spacing due to stress. By measuring the changes in the cone angle accurate measurements of stress can be made. In addition, stress invariably broadens diffraction peaks unless it is absolutely uniform on an atomic scale.

Determination of particle size: As the size of the crystallites decreases, the angular spread of the reflection increases. The half-height width can be used as a measure of the mean particle size of the sample, as in the sketch below.
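This sketch uses the Scherrer relation t = Kλ/(B cos θ), with B the half-height width in radians; the shape factor K ≈ 0.9 and the Cu Kα wavelength are assumed illustration values, not taken from the text above.

```python
import math

def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                  wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    """Mean crystallite size (nm) from the broadening of one diffraction peak."""
    B = math.radians(fwhm_deg)                  # half-height width in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle of the peak
    return K * wavelength_nm / (B * math.cos(theta))

# Example: a peak at 2*theta = 40 deg with a 0.5 deg half-height width.
print(f"mean crystallite size ~ {scherrer_size(0.5, 40.0):.1f} nm")
```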

Identification and raw material evaluation: For some complex materials it is difficult to analyze the pattern. But since similar materials exhibit similar patterns, we can, for example, evaluate different clays as a cement raw material by comparing their patterns with that of an acceptable clay, and thus relate structure to properties.

Powder method of XRD : LIMITATIONS

Single crystal methods depend upon mathematical algorithms and accurate peak intensities to solve structures. Such accuracy is sometimes difficult to obtain.

The individual peak intensities are difficult to obtain because in powder diffraction a 3-D pattern is reduced to a 1-D pattern before analysis. This leads to both accidental and exact peak overlap.

The symmetry of the crystal cannot be obtained accurately from a powder diffraction pattern.

Preferred orientation can lead to inaccurate peak intensities, but this can be overcome by rotating the sample about its normal and rocking it about each data point.

X-ray diffraction techniques: specific applications

X-ray diffraction yields the atomic structure of materials and is based on the elastic scattering of X-rays from the electron clouds of the individual atoms in the system. The most comprehensive description of scattering from crystals is given by the dynamical theory of diffraction.[1]

Single-crystal X-ray diffraction is a technique used to solve the complete structure of crystalline materials, ranging from simple inorganic solids to complex macromolecules, such as proteins.


Powder diffraction (XRD) is a technique used to characterise the crystallographic structure, crystallite size (grain size), and preferred orientation in polycrystalline or powdered solid samples. Powder diffraction is commonly used to identify unknown substances, by comparing diffraction data against a database maintained by the International Centre for Diffraction Data. It may also be used to characterize heterogeneous solid mixtures to determine relative abundance of crystalline compounds and, when coupled with lattice refinement techniques, such as Rietveld refinement, can provide structural information on unknown materials. Powder diffraction is also a common method for determining strains in crystalline materials. An effect of the finite crystallite sizes is seen as a broadening of the peaks in an X-ray diffraction as is explained by the Scherrer Equation.

Thin film diffraction and grazing incidence X-ray diffraction may be used to characterize the crystallographic structure and preferred orientation of substrate-anchored thin films.

High-resolution X-ray diffraction is used to characterize thickness, crystallographic structure, and strain in thin epitaxial films. It employs parallel-beam optics.

X-ray pole figure analysis enables one to analyze and determine the distribution of crystalline orientations within a crystalline thin-film sample.

X-ray rocking curve analysis is used to quantify grain size and mosaic spread in crystalline materials.

Geiger counter

A Geiger counter, also called a Geiger-Müller counter, is a type of particle detector that measures ionizing radiation. It is widely used to detect whether objects emit nuclear radiation.

Description

Geiger counters are used to detect ionizing radiation (usually beta particles and gamma rays, but certain models can detect alpha particles). An inert-gas-filled tube (usually helium, neon or argon with halogens added) briefly conducts electricity when a particle or photon of radiation makes the gas conductive. The tube amplifies this conduction by a cascade effect and outputs a current pulse, which is then often displayed by a needle or lamp and/or audible clicks. Modern instruments can report radioactivity over several orders of magnitude. Some Geiger counters can be used to detect gamma radiation, though sensitivity can be lower for high-energy gamma radiation than with certain other types of detector, because the density of the gas in the device is usually low, allowing most high-energy gamma photons to pass through undetected. Lower-energy photons are easier to detect and are better absorbed by the detector; an example is the X-ray pancake Geiger tube. A better device for detecting gamma rays is a sodium iodide scintillation counter. Good alpha and beta scintillation counters also exist, but Geiger detectors are still favored as general-purpose alpha/beta/gamma portable contamination and dose rate instruments, due to their low cost and robustness. A variation of the Geiger tube is used to measure neutrons, where the gas used is boron trifluoride and a plastic moderator is used to slow the neutrons. The capture reaction creates an alpha particle inside the detector, and thus neutrons can be counted.

Types and applications

A modern digital Geiger counter has applications ranging from nuclear medicine and mining to contamination monitoring and national security.

Fig.: A Geiger counter combined with a metal detector, for detecting both metal and radioactive materials for security purposes.


The configuration of a GM tube determines the types of radiation that it can detect. For example, a thin mica window on a GM tube will allow the detection of alpha radiation, whereas GM tubes without a thin mica window are too thick for alpha and low-energy beta radiation to pass through and be detected.

The Geiger–Müller tube is one form of a class of radiation detectors called gaseous detectors or simply gas detectors. Although useful, cheap and robust, a counter using a GM tube can only detect the presence and intensity of radiation (particle frequency, as opposed to energy). Gas detectors with the ability to both detect radiation and determine particle energy levels (due to their construction, test gas, and associated electronics) are called proportional counters. Some proportional counters can detect the position and/or angle of the incident radiation as well. Other devices detecting radiation include:

• ionization chambers
• dosimeters
• photomultipliers
• semiconductor detectors and variants including CCDs
• microchannel plates
• scintillation counters
• solid-state track detectors
• cloud chambers
• bubble chambers
• spark chambers
• neutron detectors
• microcalorimeters

The Geiger-Müller counter has applications in the fields of nuclear physics, geophysics (mining), and medical therapy with isotopes and x-rays. Some of the proportional counters have many electrodes and are called multi-wire proportional counters or simply MWPCs. Radiation detectors have also been used extensively in nuclear physics, medicine, particle physics, astronomy, and in industry.

Characteristics of counters

Characteristic                Geiger        Proportional   Gas Flow      Scintillation
Window                        Mica          Mica           Mylar/Al      Be/Al
Thickness                     3 mg/cm²      2.5 mg/cm²     6 µm          0.2 mm
Position                      Radial        Axial          Axial         Radial
Filling                       Ar/Br         Xe/CH₄         Ar/CH₄        --
Pre-amplifier                 unnecessary
Auto-amplification            10⁹           10⁶            10⁶           10⁶
Useful range (Å)              0.5-4         0.5-4          0.7-10*       0.1-3
Dead time (µs)                200           0.5            0.5           0.2
Max. useful count rate        2 × 10³       5 × 10⁴        5 × 10⁴       10⁶
Cosmic background (c/s)       0.8           0.4            0.2           10
Resolution % (Fe Kα)          --            14             15
Quantum counting efficiency   dependent     dependent      dependent     reasonably independent of λ

*Can be extended to 50 Å by use of ultra-thin windows.
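One practical consequence of the dead times in this table can be sketched numerically. Assuming the standard non-paralyzable counter model (an assumption of this sketch, not a statement from the table), the true rate n and the measured rate m are related by n = m/(1 − mτ).

```python
def true_rate(measured_cps: float, dead_time_s: float) -> float:
    """Dead-time-corrected count rate, non-paralyzable model: n = m / (1 - m*tau)."""
    return measured_cps / (1.0 - measured_cps * dead_time_s)

# At the same measured 2000 c/s, the Geiger tube (tau ~ 200 us) is already
# missing about a third of the counts, while the proportional counter
# (tau ~ 0.5 us) barely notices.
for name, tau in [("Geiger", 200e-6), ("Proportional", 0.5e-6)]:
    print(f"{name}: measured 2000 c/s -> true {true_rate(2000.0, tau):.0f} c/s")
```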

Diffraction topography

Diffraction topography (short: "topography") is an X-ray imaging technique based on Bragg diffraction. Diffraction topographic images ("topographs") record the intensity profile of a beam of X-rays (or, sometimes, neutrons) diffracted by a crystal. A topograph thus represents a two-dimensional spatial intensity mapping of reflected X-rays, i.e. the spatial fine structure of a Bragg spot. This intensity mapping reflects the distribution of scattering power inside the crystal; topographs therefore reveal the irregularities in a non-ideal crystal lattice. X-ray diffraction topography is one variant of X-ray imaging, making use of diffraction contrast rather than absorption contrast which is usually used in radiography and computed tomography (CT).

Topography is used for monitoring crystal quality and visualizing defects in many different crystalline materials. It has proved helpful e.g. when developing new crystal growth methods, for monitoring growth and the crystal quality achieved, and for iteratively optimizing growth conditions. In many cases, topography can be applied without preparing or otherwise damaging the sample; it is therefore one variant of non-destructive testing.

Basic principle of topography

The basic working principle of diffraction topography is as follows: an incident, spatially extended beam (mostly of X-rays, or neutrons) impinges on a sample. The beam may be either monochromatic, i.e. consist of one single wavelength of X-rays or neutrons, or polychromatic, i.e. be composed of a mixture of wavelengths ("white beam" topography). Furthermore, the incident beam may be either parallel, consisting only of rays propagating all along nearly the same direction, or divergent/convergent, containing a spread of propagation directions.

When the beam hits the crystalline sample, Bragg diffraction occurs, i.e. the incident wave is reflected by the atoms on certain lattice planes of the sample, on condition that it hits those planes at the right Bragg angle. Diffraction from the sample can take place either in reflection geometry (Bragg case), with the beam entering and leaving through the same surface, or in transmission geometry (Laue case). Diffraction gives rise to a diffracted beam, which will leave the sample and propagate along a direction differing from the incident direction by the scattering angle 2θ.

The cross section of the diffracted beam may or may not be identical to the one of the incident beam. In the case of strongly asymmetric reflections, the beam size (in the diffraction plane) is considerably expanded or compressed, with expansion occurring if the incidence angle is much smaller than the exit angle, and vice-versa. Independently of this beam expansion, the relation of sample size to image size is given by the exit angle alone: The apparent lateral size of sample features parallel to the exit surface is downscaled in the image by the projection effect of the exit angle.

A homogeneous sample (with a regular crystal lattice) would yield a homogeneous intensity distribution in the topograph (a "flat" image). Intensity modulations (topographic contrast) arise from irregularities in the crystal lattice, originating from various kinds of defects such as

• voids and inclusions in the crystal
• phase boundaries (regions of different crystallographic phase, polytype, ...)
• defective areas, non-crystalline (amorphous) areas / inclusions
• cracks, surface scratches
• stacking faults
• dislocations, dislocation bundles
• grain boundaries, domain walls
• growth striations
• point defects or defect clusters
• crystal deformation
• strain fields


In many cases of defects such as dislocations, topography is not directly sensitive to the defects themselves (atomic structure of the dislocation core), but predominantly to the strain field surrounding the defect region.

XRD

1. X-ray Diffraction

X-ray powder diffraction (XRD) is one of the most dominant techniques for qualitative and quantitative analysis of crystalline compounds. The technique supplies information that cannot be obtained by any other method. The information obtained includes the nature of the crystalline phases present, the structural make-up of the phases, the degree of crystallinity, the amount of amorphous content, and the micro-strain, size and orientation of the crystallites.

2. Principles of Operation


Once a solid sample is irradiated by a parallel beam of monochromatic X-rays, the atomic lattice of the sample acts as a three-dimensional diffraction grating, causing the X-ray beam to be diffracted to particular angles. The diffraction pattern, which includes the angles (positions) and intensities of the diffracted beams, provides several practical clues about the sample, discussed below:

The angles are used to compute the inter-planar atomic spacing (d-spacing). Because every crystalline material gives a distinctive diffraction pattern that can act as a unique 'fingerprint', the position (d) and intensity (I) information are used to identify the material by comparing them with patterns for over 80,000 data entries in the International Powder Diffraction File (PDF) database, compiled by the Joint Committee on Powder Diffraction Standards (JCPDS). By this method, any crystalline compound, even in a complex sample, can be identified.

The position (d) of the diffracted peaks also supplies information about how the atoms are coordinated within the crystalline compound, which is reflected in the unit cell size or lattice parameter. The intensity information is used to evaluate the type and nature of the atoms. Determination of the lattice parameter helps in understanding the extent of solid solution (complete or partial substitution of one element for another, as in some alloys) in a sample. The width of the diffracted peaks is used to establish crystallite size and micro-strain in the sample. The 'd' and 'I' values from a phase can also be used to quantitatively estimate the amount of that phase in a multi-component mixture.

3. Quantitative analysis

As stated earlier, XRD can be used not only for qualitative identification but also for quantitative estimation of the various crystalline phases. This is one of the significant advantages of X-ray diffraction techniques. Numerous methods have been proposed and successfully applied to quantify crystalline phases in mixtures. They include external standard methods, the reference-intensity-ratio (RIR) method, chemical methods and the full-pattern fitting Rietveld method. Of the available methods, the Rietveld method is probably the most precise and reliable. The Rietveld method is a full-pattern fitting least squares refinement technique and has been successfully applied for quantification and characterization of inorganic and organic compounds. It has also been used for crystal structure refinement and to determine the size and strain of crystallites.

Internal Standard  

The Internal Standard Method is the procedure of choice for analyzing powdered systems. A known quantity of a reference powder is added to the unknown. Any number of constituents in a mixture may be quantified independently. The mixture may contain amorphous (noncrystalline) components. The mass absorption coefficient of the mixture need not be known in advance. The Internal Standard Method is applied broadly to any mineral or materials systems for which the chemistry is unknown.

External Standard  

The External Standard Method may be applied to solid systems, such as alloys, plasma sprayed coatings, or oxide layers. This method allows the quantification of one or more components in a system that may contain an amorphous fraction. The mass absorption coefficient must, however, be known in advance, requiring either full elemental chemistry or prior knowledge of the chemistry, as in the case of plasma spray coatings or alloys. 

Direct Comparison

This procedure requires no standards, but is only applicable to fully crystalline mixtures. All of the components in the system must be quantified for solution. The Direct Comparison Method is the common method used for retained austenite determination.

Unit V

Spectroscopy was originally the study of the interaction between radiation and matter as a function of wavelength (λ). Historically, spectroscopy referred to the use of visible light dispersed according to its wavelength, e.g. by a prism. Later the concept was expanded greatly to comprise any measurement of a quantity as a function of either wavelength or frequency. Thus, it also can refer to a response to an alternating field or varying frequency (ν). A further extension of the scope of the definition added energy (E) as a variable, once the very close relationship E = hν for photons was realized (h is the Planck constant). A plot of the response as a function of wavelength—or more commonly frequency—is referred to as a spectrum; see also spectral linewidth.


Spectrometry is the spectroscopic technique used to assess the concentration or amount of a given chemical (atomic, molecular, or ionic) species. In this case, the instrument that performs such measurements is a spectrometer, spectrophotometer, or spectrograph.

Spectroscopy/spectrometry is often used in physical and analytical chemistry for the identification of substances through the spectrum emitted from or absorbed by them.

Spectroscopy/spectrometry is also heavily used in astronomy and remote sensing. Most large telescopes have spectrographs, which are used either to measure the chemical composition and physical properties of astronomical objects or to measure their velocities from the Doppler shift of their spectral lines.

Fig.: Dispersion of light as it travels through a triangular prism

Electron spectroscopy is an analytical technique to study the electronic structure and its dynamics in atoms and molecules. In general an excitation source such as x-rays, electrons or synchrotron radiation will eject an electron from an inner-shell orbital of an atom. Detecting photoelectrons that are ejected by x-rays is called x-ray photoelectron spectroscopy (XPS) or electron spectroscopy for chemical analysis (ESCA). Detecting electrons that are ejected from higher orbitals to conserve energy during electron transitions is called Auger electron spectroscopy (AES).

Experimental applications include high-resolution measurements on the intensity and angular distributions of emitted electrons as well as on the total and partial ion yields. Ejected electrons can escape only from a depth of approximately 3 nanometers or less, making electron spectroscopy most useful to study surfaces of solid materials. Depth profiling is accomplished by combining an electron spectroscopy with a sputtering source that removes surface layers.


X-ray photoelectron spectroscopy

X-ray photoelectron spectroscopy (XPS) is a quantitative spectroscopic technique that measures the elemental composition, empirical formula, chemical state and electronic state of the elements that exist within a material. XPS spectra are obtained by irradiating a material with a beam of X-rays while simultaneously measuring the kinetic energy and number of electrons that escape from the top 1 to 10 nm of the material being analyzed. XPS requires ultra high vacuum (UHV) conditions.

XPS is a surface chemical analysis technique that can be used to analyze the surface chemistry of a material in its "as received" state, or after some treatment, for example: fracturing, cutting or scraping in air or UHV to expose the bulk chemistry, ion beam etching to clean off some of the surface contamination, exposure to heat to study the changes due to heating, exposure to reactive gases or solutions, exposure to ion beam implant, exposure to ultraviolet light.

XPS is also known as ESCA, an abbreviation for Electron Spectroscopy for Chemical Analysis.

XPS detects all elements with an atomic number (Z) of 3 (lithium) and above. It cannot detect hydrogen (Z = 1) or helium (Z = 2), because the diameter of these orbitals is so small that the probability of detecting their photoelectrons is almost zero.

Detection limits for most of the elements are in the parts per thousand range. Detection limits of parts per million (ppm) are possible, but require special conditions: concentration at the top surface or very long collection times (overnight).

XPS is routinely used to analyze inorganic compounds, metal alloys, semiconductors, polymers, elements, catalysts, glasses, ceramics, paints, papers, inks, woods, plant parts, make-up, teeth, bones, medical implants, bio-materials, viscous oils, glues, ion modified materials and many others.


Fig.: Basic components of a monochromatic XPS system.

Fig.: Wide-scan survey spectrum for all elements.


Fig. : High-resolution spectrum for Si(2p) signal.


Fig.: Rough schematic of XPS physics - the "Photoelectric Effect".

XPS is used to measure:

• elemental composition of the surface (top 1–10 nm usually)
• empirical formula of pure materials
• elements that contaminate a surface
• chemical or electronic state of each element in the surface
• uniformity of elemental composition across the top surface (or line profiling or mapping)
• uniformity of elemental composition as a function of ion beam etching (or depth profiling)

XPS can be performed using either a commercially built XPS system, a privately built XPS system or a synchrotron-based light source combined with a custom designed electron analyzer. Commercial XPS instruments in the year 2005 used either a highly focused 20 to 200 micrometer beam of monochromatic aluminium Kα X-rays or a broad 10–30 mm beam of non-monochromatic (polychromatic) magnesium X-rays. A few, special design, XPS instruments can analyze volatile liquids or gases, materials at low or high temperatures or materials at roughly 1 torr vacuum, but there are relatively few of these types of XPS systems.


Because the energy of a particular X-ray wavelength is known, the electron binding energy of each of the emitted electrons can be determined by using an equation that is based on the work of Ernest Rutherford (1914):

Ebinding = Ephoton − (Ekinetic + φ)

where Ebinding is the binding energy (BE) of the electron, Ephoton is the energy of the X-ray photons being used, Ekinetic is the kinetic energy of the electron as measured by the instrument and φ is the work function of the spectrometer (not the material).
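A small worked example of this relation (a sketch assuming monochromatic Al Kα excitation at 1486.6 eV and a typical spectrometer work function of about 4.5 eV; in practice φ is calibrated per instrument):

```python
AL_KALPHA_EV = 1486.6    # photon energy of monochromatic Al K-alpha X-rays

def binding_energy(e_kinetic_ev: float, work_function_ev: float = 4.5) -> float:
    """E_binding = E_photon - (E_kinetic + phi), all energies in eV."""
    return AL_KALPHA_EV - (e_kinetic_ev + work_function_ev)

# A photoelectron detected at 1382.7 eV kinetic energy:
print(f"BE = {binding_energy(1382.7):.1f} eV")   # ~99.4 eV, close to Si 2p
```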

XPS is routinely used to determine:

• What elements are present, and in what quantity, within the top 1-12 nm of the sample surface
• What contamination, if any, exists in the surface or the bulk of the sample
• Empirical formula of a material that is free of excessive surface contamination
• The chemical state identification of one or more of the elements in the sample
• The binding energy of one or more electronic states
• The thickness of one or more thin layers (1–8 nm) of different materials within the top 12 nm of the surface
• The density of electronic states

Auger electron spectroscopy

Auger electron spectroscopy (AES; pronounced [o ʒ e] in French) is a common analytical technique used specifically in the study of surfaces and, more generally, in the area of materials science. Underlying the spectroscopic technique is the Auger effect, as it has come to be called, which is based on the analysis of energetic electrons emitted from an excited atom after a series of internal relaxation events. The Auger effect was discovered independently by both Lise Meitner and Pierre Auger in the 1920s. Though the discovery was made by Meitner and initially reported in the journal Zeitschrift für Physik in 1922, Auger is credited with the discovery in most of the scientific community.[1] Until the early 1950s Auger transitions were considered nuisance effects by spectroscopists, not containing much relevant material information, but studied so as to explain anomalies in x-ray spectroscopy data. Since 1953 however, AES has become a practical and straightforward characterization technique for probing chemical and compositional surface environments and has found applications in metallurgy, gas-phase chemistry, and throughout the microelectronics industry.[2][3][4][5]


Electron transitions and the Auger effect

The Auger effect is an electronic process at the heart of AES resulting from the inter- and intrastate transitions of electrons in an excited atom. When an atom is probed by an external mechanism, such as a photon or a beam of electrons with energies in the range of 2 keV to 50 keV, a core state electron can be removed leaving behind a hole. As this is an unstable state, the core hole can be filled by an outer shell electron, whereby the electron moving to the lower energy level loses an amount of energy equal to the difference in orbital energies. The transition energy can be coupled to a second outer shell electron which will be emitted from the atom if the transferred energy is greater than the orbital binding energy.[2][3][4][5][6][7] An emitted electron will have a kinetic energy of:

Ekin = ECore State − EB − EC'

where ECore State, EB, EC' are respectively the core level, first outer shell, and second outer shell electron energies, measured from the vacuum level. The prime (') denotes a slight modification to the binding energy of the outer shell electrons due to the ionized nature of the atom; often however, this energy modification is ignored in order to ease calculations.[3] Since orbital energies are unique to an atom of a specific element, analysis of the ejected electrons can yield information about the chemical composition of a surface. Figure 1 illustrates two schematic views of the Auger process.


Figure 1. Two views of the Auger process. (a) illustrates sequentially the steps involved in Auger deexcitation. An incident electron creates a core hole in the 1s level. An electron from the 2s level fills in the 1s hole and the transition energy is imparted to a 2p electron which is emitted. The final atomic state thus has two holes, one in the 2s orbital and the other in the 2p orbital. (b) illustrates the same process using spectroscopic notation, KL1L2,3.

The types of state-to-state transitions available to electrons during an Auger event are dependent on several factors, ranging from initial excitation energy to relative interaction rates, yet are often dominated by a few characteristic transitions. Because of the interaction between an electron's spin and orbital angular momentum (spin-orbit coupling) and the concomitant energy level splitting for various shells in an atom, there are a variety of transition pathways for filling a core hole. Energy levels are labeled using a number of different schemes such as the j-j coupling method for heavy elements (Z ≥ 75), the Russell-Saunders L-S method for lighter elements (Z < 20), and a combination of both for intermediate elements.[3][8][9] The j-j coupling method, which is historically linked to X-ray notation, is almost always used to denote Auger transitions. Thus for a KL1L2,3 transition, K represents the core level hole, L1 the relaxing electron's initial state, and L2,3 the emitted electron's initial energy state. Figure 1(b) illustrates this transition with the corresponding spectroscopic notation. The energy level of the core hole will often determine which transition types will be favored. For single energy levels, i.e. K, transitions can occur from the L levels, giving rise to strong KLL type peaks in an Auger spectrum. Higher level transitions can also occur, but are less probable. For multi-level shells, transitions are available from higher energy orbitals (different n, ℓ quantum numbers) or energy levels within the same shell (same n, different ℓ number).[2] The result is transitions of the type LMM and KLL along with faster Coster–Kronig transitions such as LLM.[2][3] While Coster–Kronig transitions are faster, they are also less energetic and thus harder to locate on an Auger spectrum. As the atomic number Z increases, so too does the number of potential Auger transitions. Fortunately, the strongest electron-electron interactions are between levels which are close together, giving rise to characteristic peaks in an Auger spectrum. KLL and LMM peaks are some of the most commonly identified transitions during surface analysis.[3] Finally, valence band electrons can also fill core holes or be emitted during KVV-type transitions.

Several models, both phenomenological and analytical, have been developed to describe the energetics of Auger transitions. One of the most tractable descriptions, put forth by Jenkins and Chung, estimates the energy of Auger transition ABC as:

EABC = EA(Z) − 0.5[EB(Z) + EB(Z + 1)] − 0.5[EC(Z) + EC(Z + 1)]

Ei(Z) are the binding energies of the ith level in element of atomic number Z and Ei(Z + 1) are the energies of the same levels in the next element up in the periodic table. While useful in practice, a more rigorous model accounting for effects such as screening and relaxation probabilities between energy levels gives the Auger energy as:

EABC = EA − EB − EC − F(BC:x) + Rxin + Rxex


where F(BC:x) is the energy of interaction between the B and C level holes in a final atomic state x and the R's represent intra- and extra-atomic transition energies accounting for electronic screening.[3] Auger electron energies can be calculated based on measured values of the various Ei and compared to peaks in the secondary electron spectrum in order to identify chemical species. This technique has been used to compile several reference databases used for analysis in current AES setups.
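A brief numerical sketch of the Jenkins and Chung estimate above (the core-level binding energies are approximate values for silicon as element Z and phosphorus as Z + 1, quoted here only for illustration):

```python
def auger_energy(E_A_Z, E_B_Z, E_B_Z1, E_C_Z, E_C_Z1):
    """E_ABC = E_A(Z) - 0.5[E_B(Z)+E_B(Z+1)] - 0.5[E_C(Z)+E_C(Z+1)], in eV."""
    return E_A_Z - 0.5 * (E_B_Z + E_B_Z1) - 0.5 * (E_C_Z + E_C_Z1)

# KL1L2,3-type estimate with approximate Si/P level energies (eV):
#   K(Si) ~ 1839, L1(Si) ~ 150, L1(P) ~ 189, L2,3(Si) ~ 100, L2,3(P) ~ 136
print(f"E(KL1L2,3) ~ {auger_energy(1839.0, 150.0, 189.0, 100.0, 136.0):.0f} eV")
```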

Experimental setup and quantification

Instrumentation

Figure 2. AES experimental setup using a cylindrical mirror analyzer (CMA). An electron beam is focused onto a specimen and emitted electrons are deflected around the electron gun and pass through an aperture towards the back of the CMA. These electrons are then directed into an electron multiplier for analysis. Varying voltage at the sweep supply allows derivative mode plotting of the Auger data. An optional ion gun can be integrated for depth profiling experiments.

Surface sensitivity in AES arises from the fact that emitted electrons usually have energies ranging from 50 eV to 3 keV, and at these values, electrons have a short mean free path in a solid. The escape depth of electrons is therefore localized to within a few nanometers of the target surface, giving AES an extreme sensitivity to surface species.[7]

Because of the low energy of Auger electrons, most AES setups are run under ultra-high vacuum (UHV) conditions. Such measures prevent electron scattering off of residual gas atoms as well as the formation of a thin "gas (adsorbate) layer" on the surface of the specimen which degrades analytical performance.[6][7] A typical AES setup is shown schematically in figure 2. In this configuration, focused electrons are incident on a sample and emitted electrons are deflected into a cylindrical mirror analyzer (CMA). In
the detection unit, Auger electrons are multiplied and the signal sent to data processing electronics. Collected Auger electrons are plotted as a function of energy against the broad secondary electron background spectrum.

Since the intensity of the Auger peaks may be small compared to the noise level of the background, AES is often run in a derivative mode, which serves to highlight the peaks by modulating the electron collection current via a small applied AC voltage. Since this ΔV = k sin(ωt), the collection current becomes I(V + k sin(ωt)). Taylor expanding gives:

I(V + k sin(ωt)) ≈ I(V) + I′(V) k sin(ωt) + higher-order terms

Using the setup in figure 2, detecting the signal at frequency ω gives a value for I′, i.e. dN(E)/dE.[6][7] Plotting in derivative mode also emphasizes Auger fine structure, which appears as small secondary peaks surrounding the primary Auger peak. These secondary peaks, not to be confused with the high-energy satellites discussed later, arise from the presence of the same element in multiple different chemical states on a surface (i.e. adsorbate layers) or from relaxation transitions involving valence band electrons of the substrate. Figure 3 illustrates a derivative spectrum from a copper nitride film, clearly showing the Auger peaks. The peak in derivative mode is not the true Auger peak, but rather the point of maximum slope of N(E); this concern is usually ignored.[7]
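
The modulation scheme can be imitated numerically: add a small k sin(ωt) dither to the analyzer voltage, sample the resulting collection current, and demodulate at ω to recover a signal proportional to I′. The Python sketch below does this for an invented spectrum, with a Gaussian peak on a smooth background standing in for real N(E) data:

import numpy as np

# Toy electron spectrum: Gaussian Auger peak on a smooth secondary-electron
# background (invented stand-in for measured N(E)).
def N(E):
    return 50.0 / (1.0 + 0.01 * E) + np.exp(-((E - 100.0) ** 2) / (2 * 4.0 ** 2))

k = 0.5                                  # modulation amplitude (eV)
omega = 2 * np.pi * 1e3                  # modulation frequency (rad/s)
t = np.linspace(0.0, 5e-3, 5000)         # five full modulation periods

def lock_in_derivative(E0):
    """Demodulate I(E0 + k sin(wt)) at omega; result ~ dN/dE at E0."""
    signal = N(E0 + k * np.sin(omega * t))
    return 2.0 * np.mean(signal * np.sin(omega * t)) / k

E_axis = np.linspace(80.0, 120.0, 201)
dNdE = np.array([lock_in_derivative(E0) for E0 in E_axis])
print(E_axis[np.argmin(dNdE)])           # high-energy minimum used to tag the peak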


Figure 3. Auger spectrum of a copper nitride film in derivative mode plotted as a function of energy. Different peaks for Cu and N are apparent with the N KLL transition highlighted.

Quantitative analysis

Semi-quantitative compositional and element analysis of a sample using AES depends on measuring the yield of Auger electrons during a probing event. Electron yield, in turn, depends on several critical parameters such as electron-impact cross-section and fluorescence yield.[4][6] Since the Auger effect is not the only mechanism available for atomic relaxation, there is a competition between radiative and non-radiative decay processes to be the primary de-excitation pathway. The total transition rate, ω, is a sum of the non-radiative (Auger) and radiative (photon emission) processes. The Auger yield, ωA, is thus related to the fluorescence (X-ray) yield, ωX, by the relation

ωA = 1 − ωX = WA / (WA + WX)

where WX is the X-ray transition probability and WA is the Auger transition probability.[6]


Figure 4. Fluorescence and Auger electron yields as a function of atomic number for K shell vacancies. Auger transitions (red curve) are more probable for lighter elements, while X-ray yield (dotted blue curve) becomes dominant at higher atomic numbers. Similar plots can be obtained for L and M shell transitions. Coster–Kronig (i.e. intra-shell) transitions are ignored in this analysis.


Attempts to relate the fluorescence and Auger yields to atomic number have resulted in plots similar to figure 4. A clear transition from electron to photon emission is evident in this chart for increasing atomic number. For heavier elements, x-ray yield becomes greater than Auger yield, indicating an increased difficulty in measuring the Auger peaks for large Z-values. Conversely, AES is sensitive to the lighter elements, and unlike X-ray fluorescence, Auger peaks can be detected for elements as light as lithium (Z = 3). Lithium represents the lower limit for AES sensitivity since the Auger effect is a "three state" event necessitating at least three electrons. Neither H nor He can be detected with this technique. For K-level based transitions, Auger effects are dominant for Z < 15 while for L- and M-level transitions, AES data can be measured for Z ≤ 50.[6] The yield limits effectively prescribe a cutoff for AES sensitivity, but complex techniques can be utilized to identify heavier elements, such as uranium and americium, using the Auger effect.[1]
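
Since ωA = WA/(WA + WX), the crossover from Auger- to X-ray-dominated de-excitation is simply a statement about which transition rate wins. A small Python illustration with invented rate values:

# Competition between Auger emission and X-ray fluorescence for one core hole.
# Transition rates below are invented; real rates depend on Z and the shell.
def yields(W_A, W_X):
    return W_A / (W_A + W_X), W_X / (W_A + W_X)    # (omega_A, omega_X)

for W_A, W_X in [(10.0, 0.1), (10.0, 10.0), (10.0, 100.0)]:
    wA, wX = yields(W_A, W_X)
    print(f"W_A={W_A:5.1f}  W_X={W_X:5.1f}  ->  omega_A={wA:.2f}  omega_X={wX:.2f}")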


Another critical quantity that determines the yield of Auger electrons at a detector is the electron-impact cross-section. Early approximations (in cm²) of the cross-section, σax, were based on the work of Worthington and Tomlin, with b acting as a scaling factor between 0.25 and 0.35, and C a function of the primary electron beam energy, Ep. While this value of σax is calculated for an isolated atom, a simple modification can be made to account for matrix effects:

σ(E) = σax[1 + rm(Ep,α)]

where α is the angle of the incident electron beam to the surface normal; rm can be established empirically and encompasses electron interactions with the matrix, such as ionization due to backscattered electrons. The total yield from a layer at depth t can then be written as:

Y(t) = Nx σ(E, t) ωA e^(−t/λ cos θ) T I(t) (dΩ/4π) δt

Here Nx is the number of x atoms per unit volume, λ the electron escape depth, θ the analyzer angle, T the transmission of the analyzer, I(t) the electron excitation flux at depth t, dΩ the solid angle, and δt the thickness of the layer being probed. Encompassed in these terms, especially the Auger yield ωA, which is related to the transition probability, is the quantum mechanical overlap of the initial and final state wave functions. Precise expressions for the transition probability, based on first-order perturbation Hamiltonians, can be found in Thompson and Baker.[4] Often, not all of these terms are known, so most analyses compare measured yields with external standards of known composition. Ratios of the acquired data to standards can eliminate common terms, especially experimental setup characteristics and material parameters, and can be used to determine element composition.[3][6][7] Comparison techniques work best for samples of homogeneous binary materials or uniform surface layers, while elemental identification is best obtained from comparison of pure samples.
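
In routine work the unknown instrument and matrix terms are lumped into relative sensitivity factors measured on standards, and compositions follow from intensity ratios. Below is a minimal Python sketch of this common approach; the intensities and factor values are placeholders, and real factors must be measured on the same instrument under the same conditions:

# Relative-sensitivity-factor quantification: C_x = (I_x/S_x) / sum_i(I_i/S_i).
# Intensities and sensitivity factors below are placeholders for illustration.
def composition(intensities, factors):
    scaled = {el: I / factors[el] for el, I in intensities.items()}
    total = sum(scaled.values())
    return {el: v / total for el, v in scaled.items()}

I_pp = {"Cu": 1200.0, "N": 300.0}        # derivative peak-to-peak heights (a.u.)
S = {"Cu": 0.23, "N": 0.30}              # assumed relative sensitivity factors

for el, c in composition(I_pp, S).items():
    print(f"{el}: {100.0 * c:.1f} at.%")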

Uses and limitations

There are a number of electron microscopes that have been specifically designed for use in Auger spectroscopy; these are termed scanning Auger microscopes (SAM) and can produce high-resolution, spatially resolved chemical images.[1][3][5][7] SAM images are obtained by stepping a focused electron beam across a sample surface and measuring the intensity of the Auger peak above the background of scattered electrons. The intensity map is correlated to a gray scale on a monitor, with whiter areas corresponding to higher element concentration. In addition, sputtering is sometimes used with Auger spectroscopy to perform depth profiling experiments. Sputtering removes thin outer layers of a surface so that AES can be used to determine the underlying composition.[3][4][5][6] Depth profiles
are shown as either Auger peak height vs. sputter time or atomic concentration vs. depth. Precise depth milling through sputtering has made profiling an invaluable technique for chemical analysis of nanostructured materials and thin films. AES is also used extensively as an evaluation tool on and off fab lines in the microelectronics industry, while the versatility and sensitivity of the Auger process makes it a standard analytical tool in research labs.[10][11][12][13]
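
A sputter-time profile becomes a depth profile once the sputter rate has been calibrated, typically on a reference film of known thickness. A minimal sketch, with the rate and signal values assumed for illustration:

# Convert an Auger sputter-time profile into an approximate depth profile.
# Sputter rate and signal values are assumed for illustration; rates are
# normally calibrated on a reference film of known thickness.
sputter_rate = 0.5                                  # nm/s (assumed)
time_s = [0, 60, 120, 180, 240]                     # sputter time (s)
cu_signal = [0.05, 0.10, 0.60, 0.95, 0.97]          # normalized Cu peak height

for t, c in zip(time_s, cu_signal):
    print(f"depth ~{sputter_rate * t:5.1f} nm : Cu signal {c:.2f}")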

Despite the advantages of high spatial resolution and precise chemical sensitivity attributed to AES, there are several factors that can limit the applicability of this technique, especially when evaluating solid specimens. One of the most common limitations encountered with Auger spectroscopy is charging of non-conducting samples.[2][3] Charging results when the number of secondary electrons leaving the sample differs from the number of incident electrons, giving rise to a net charge polarity at the surface. Both positive and negative surface charges severely alter the yield of electrons emitted from the sample and hence distort the measured Auger peaks. To complicate matters, neutralization methods employed in other surface analysis techniques, such as secondary ion mass spectrometry (SIMS), are not applicable to AES, as these methods usually involve surface bombardment with either electrons or ions (i.e. flood gun). Several processes have been developed to combat the issue of charging, though none of them is ideal, and they still make quantification of AES data difficult.[3][6] One such technique involves depositing conductive pads near the analysis area to minimize regional charging. However, this type of approach limits SAM applications as well as the amount of sample material available for probing. A related technique involves thinning or "dimpling" a non-conductive layer with Ar+ ions and then mounting the sample on a conductive backing prior to AES.[14][15] This method has been debated, with claims that the thinning process leaves elemental artifacts on the surface and/or creates damaged layers that distort bonding and promote chemical mixing in the sample. As a result, the compositional AES data are considered suspect. The most common setup to minimize charging effects includes the use of a glancing-angle (~10°) electron beam and a carefully tuned bombarding energy (between 1.5 keV and 3 keV). Control of both the angle and energy can subtly alter the number of emitted electrons vis-à-vis the incident electrons and thereby reduce or altogether eliminate sample charging.[2][5][6]

In addition to charging effects, AES data can be obscured by the presence of characteristic energy losses in a sample and higher-order atomic ionization events. Electrons ejected from a solid will generally undergo multiple scattering events and lose energy in the form of collective electron density oscillations called plasmons.[2][7] If plasmon losses have energies near that of an Auger peak, the less intense Auger process may become dwarfed by the plasmon peak. As Auger spectra are normally weak and spread over many eV of energy, they are difficult to extract from the background, and in the presence of plasmon losses deconvolution of the two peaks becomes extremely difficult. For such spectra, additional analysis through chemically sensitive surface techniques like X-ray photoelectron spectroscopy (XPS) is often required to disentangle the peaks.[2] Sometimes an Auger spectrum can also exhibit "satellite" peaks at well-defined offset energies from the parent peak. The origin of the satellites is usually attributed to multiple ionization events in an atom or ionization cascades in which a series of
electrons are emitted as relaxation occurs for multiple core-level holes.[2][3] The presence of satellites can distort the true Auger peak and/or mask small peak-shift information arising from chemical bonding at the surface. Several studies have been undertaken to further quantify satellite peaks.[16]

Despite these sometimes substantial drawbacks, Auger electron spectroscopy is a widely used surface analysis technique that has been successfully applied to many diverse fields, ranging from gas-phase chemistry to nanostructure characterization. A very new class of high-resolution electrostatic energy analyzers, the face-field analyzers (FFA),[17] has recently been developed; these can be used for remote electron spectroscopy of distant surfaces, or of surfaces with large roughness or even deep dimples. These instruments are designed specifically for use in combined scanning electron microscopes (SEMs). FFAs in principle have no perceptible end-fields, which usually distort focusing in most known analyzers, for example the well-known CMA.

Sensitivity, quantitative detail, and ease of use have brought AES from an obscure nuisance effect to a functional and practical characterization technique in just over fifty years. With applications both in the research laboratory and industrial settings, AES will continue to be a cornerstone of surface-sensitive electron-based spectroscopies.

Thermal analysis

Thermal analysis is a branch of materials science where the properties of materials are studied as they change with temperature. Several methods are commonly used; these are distinguished from one another by the property which is measured:

Differential thermal analysis (DTA): temperature difference
Differential scanning calorimetry (DSC): heat difference
Thermogravimetric analysis (TGA): mass
Thermomechanical analysis (TMA): dimension
Dilatometry (DIL): volume
Dynamic mechanical analysis (DMA): mechanical stiffness and damping
Dielectric thermal analysis (DEA): dielectric permittivity and loss factor
Evolved gas analysis (EGA): gaseous decomposition products
Thermo-optical analysis (TOA): optical properties

Simultaneous Thermal Analysis (STA) generally refers to the simultaneous application of thermogravimetry (TGA) and differential scanning calorimetry (DSC) to one and the same sample in a single instrument. The test conditions are then identical for the TGA and DSC signals (same atmosphere, gas flow rate, vapor pressure of the sample, heating rate, thermal contact to the sample crucible and sensor, radiation effect, etc.). The information gathered can even be enhanced by coupling the STA instrument to an Evolved Gas Analyzer (EGA) such as Fourier transform infrared spectroscopy (FTIR) or mass spectrometry (MS).[1]

Compiled by : Mr. B. Ramesh, Associate Professor of Mechanical Engineering,St. Joseph’s College of Engineering, Jeppiaar Trust, Chennai-119

Page 123: Modern Techniques of Materials Characterisation

Modern Techniques of Materials Characterisation 123

Other, less-common, methods measure the sound or light emission from a sample, or the electrical discharge from a dielectric material, or the mechanical relaxation in a stressed specimen. The essence of all these techniques is that the sample's response is recorded as a function of temperature (and time).

It is usual to control the temperature in a predetermined way, either by a continuous increase or decrease in temperature at a constant rate (linear heating/cooling) or by carrying out a series of determinations at different temperatures (stepwise isothermal measurements). More advanced temperature profiles have been developed which use an oscillating (usually sine or square wave) heating rate (Modulated Temperature Thermal Analysis) or modify the heating rate in response to changes in the system's properties (Sample Controlled Thermal Analysis).
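
The common temperature programs are simple to write down explicitly. The Python sketch below generates a linear ramp and a sinusoidally modulated ramp of the kind used in modulated-temperature work; the start temperature, heating rate and modulation parameters are arbitrary illustrative choices:

import numpy as np

t = np.linspace(0.0, 3600.0, 3601)       # time (s): a one-hour experiment
T0 = 25.0                                # start temperature (degC), assumed
beta = 10.0 / 60.0                       # heating rate: 10 K/min in K/s

T_linear = T0 + beta * t                 # linear heating at constant rate
A, period = 0.5, 60.0                    # +/-0.5 K swing with a 60 s period
T_modulated = T_linear + A * np.sin(2 * np.pi * t / period)

print(T_linear[-1], T_modulated[-1])     # both programs end near 625 degC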

In addition to controlling the temperature of the sample, it is also important to control its environment (e.g. atmosphere). Measurements may be carried out in air or under an inert gas (e.g. nitrogen or helium). Reducing or reactive atmospheres have also been used, and measurements are even carried out with the sample surrounded by water or other liquids. Inverse gas chromatography is a technique which studies the interaction of gases and vapours with a surface; measurements are often made at different temperatures, so these experiments can be considered to come under the auspices of thermal analysis.

Atomic force microscopy uses a fine stylus to map the topography and mechanical properties of surfaces with high spatial resolution. By controlling the temperature of the heated tip and/or the sample, a form of spatially resolved thermal analysis can be carried out.

Thermal analysis is also often used as a term for the study of heat transfer through structures. Much of the basic engineering data for modelling such systems comes from measurements of heat capacity and thermal conductivity.

Thermogravimetric analysis

Thermogravimetric analysis or thermal gravimetric analysis (TGA) is a type of testing performed on samples to determine changes in weight in relation to changes in temperature. Such analysis relies on a high degree of precision in three measurements: weight, temperature, and temperature change. As many weight-loss curves look similar, the weight-loss curve may require transformation before results can be interpreted. A derivative weight-loss curve can be used to identify the point at which weight loss is most apparent. Again, interpretation is limited without further modifications, and deconvolution of the overlapping peaks may be required.
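
The derivative weight-loss curve mentioned above is straightforward to compute from the raw thermogram. A minimal Python sketch using an invented single-step weight-loss trace:

import numpy as np

# Toy TGA trace: one smooth decomposition step (invented data).
T = np.linspace(25.0, 900.0, 876)                  # temperature (degC)
mass = 100.0 - 30.0 / (1.0 + np.exp(-(T - 400.0) / 15.0))  # mass (%)

dtg = np.gradient(mass, T)                         # d(mass)/dT: the DTG curve
T_peak = T[np.argmin(dtg)]                         # steepest loss = DTG peak
print(f"maximum rate of weight loss near {T_peak:.0f} degC")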

TGA is commonly employed in research and testing to determine characteristics of materials such as polymers, to determine degradation temperatures, absorbed moisture content of materials, the level of inorganic and organic components in materials,
decomposition points of explosives, and solvent residues. It is also often used to estimate the corrosion kinetics in high temperature oxidation.

Simultaneous TGA-DTA/DSC measures both heat flow and weight changes (TGA) in a material as a function of temperature or time in a controlled atmosphere. Simultaneous measurement of these two material properties not only improves productivity but also simplifies interpretation of the results. The complementary information obtained allows differentiation between endothermic and exothermic events which have no associated weight loss (e.g., melting and crystallization) and those which involve a weight loss (e.g., degradation).

Equipment

The analyzer usually consists of a high-precision balance with a pan (generally platinum) loaded with the sample. A different process using a quartz crystal microbalance has been devised for measuring smaller samples on the order of a microgram (versus milligram with conventional TGA).[1] The sample is placed in a small electrically heated oven with a thermocouple to accurately measure the temperature. The atmosphere may be purged with an inert gas to prevent oxidation or other undesired reactions. A computer is used to control the instrument.

Fig.: A typical TGA system


Methodology

Analysis is carried out by raising the temperature of the sample gradually and plotting weight (percentage) against temperature. The temperature in many testing methods routinely reaches 1000 °C or greater, but the oven is so well insulated that an operator would not be aware of any change in temperature even if standing directly in front of the device. After the data are obtained, curve smoothing and other operations may be performed, such as finding the exact points of inflection.

A method known as high-resolution TGA is often employed to obtain greater accuracy in areas where the derivative curve peaks. In this method, the temperature increase slows as weight loss increases, so that the exact temperature at which a peak occurs can be more accurately identified. Several modern TGA devices can vent burnoff to an infrared spectrophotometer to analyze composition.

Fig.: Thermogram of calcium oxalate


Differential thermal analysis

Differential thermal analysis (or DTA) is a thermoanalytic technique, similar to differential scanning calorimetry. In DTA, the material under study and an inert reference are made to undergo identical thermal cycles, while recording any temperature difference between sample and reference.[1] This differential temperature is then plotted against time, or against temperature (DTA curve or thermogram). Changes in the sample, either exothermic or endothermic, can be detected relative to the inert reference. Thus, a DTA curve provides data on the transformations that have occurred, such as glass transitions, crystallization, melting and sublimation. The area under a DTA peak is the enthalpy change and is not affected by the heat capacity of the sample.

Apparatus

A DTA consists of a sample holder comprising thermocouples, sample containers and a ceramic or metallic block; a furnace; a temperature programmer; and a recording system. The key feature is the existence of two thermocouples connected to a voltmeter. One thermocouple is placed in an inert material such as Al2O3, while the other is placed in a sample of the material under study. As the temperature is increased, there will be a brief deflection of the voltmeter if the sample is undergoing a phase transition. This occurs because the input of heat will raise the temperature of the inert substance, but be incorporated as latent heat in the material changing phase.[2]

Today's Instruments

In today’s market most manufacturers no longer make a true DTA but rather have incorporated this technology into a thermogravimetric analyzer (TGA), which provides both mass-loss and thermal information. With today’s advancements in software, even these instruments are being replaced by true TGA-DSC instruments that can provide the temperature and heat flow of the sample simultaneously with mass loss.

Applications

A DTA curve can serve as a fingerprint for identification purposes, but the usual applications of this method are the determination of phase diagrams, heat-change measurements and decomposition studies in various atmospheres.

DTA is widely used in the pharmaceutical[3][4] and food industries.[5][6][7][8]

DTA may be used in cement chemistry,[9] mineralogical research[10] and in environmental studies.[11]

DTA curves may also be used to date bone remains[12] or to study archaeological materials.[13][14]


Differential scanning calorimetry

Differential scanning calorimetry or DSC is a thermoanalytical technique in which the difference in the amount of heat required to increase the temperature of a sample and reference is measured as a function of temperature. Both the sample and reference are maintained at nearly the same temperature throughout the experiment. Generally, the temperature program for a DSC analysis is designed such that the sample holder temperature increases linearly as a function of time. The reference sample should have a well-defined heat capacity over the range of temperatures to be scanned.

The technique was developed by E.S. Watson and M.J. O'Neill in 1960,[1] and introduced commercially at the 1963 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. The term DSC was coined to describe this instrument which measures energy directly and allows precise measurements of heat capacity.[2]

Detection of phase transitions

The basic principle underlying this technique is that, when the sample undergoes a physical transformation such as phase transitions, more or less heat will need to flow to it than the reference to maintain both at the same temperature. Whether less or more heat must flow to the sample depends on whether the process is exothermic or endothermic. For example, as a solid sample melts to a liquid it will require more heat flowing to the sample to increase its temperature at the same rate as the reference. This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallization) less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, differential scanning calorimeters are able to measure the amount of heat absorbed or released during such transitions. DSC may also be used to observe more subtle phase changes, such as glass transitions. It is widely used in industrial settings as a quality control instrument due to its applicability in evaluating sample purity and for studying polymer curing.[3][4][5]

DTA

An alternative technique, which shares much in common with DSC, is differential thermal analysis (DTA). In this technique it is the heat flow to the sample and reference that remains the same, rather than the temperature. When the sample and reference are heated identically, phase changes and other thermal processes cause a difference in temperature between the sample and reference. Both DSC and DTA provide similar information. Many modern commercial DTA instruments are called heat-flux DSCs.

DSC curves

The result of a DSC experiment is a curve of heat flux versus temperature or versus time. There are two different conventions: exothermic reactions in the sample shown with a
positive or negative peak, depending on the kind of technology used in the experiment. This curve can be used to calculate enthalpies of transitions. This is done by integrating the peak corresponding to a given transition. It can be shown that the enthalpy of transition can be expressed using the following equation:

ΔH = KA

where ΔH is the enthalpy of transition, K is the calorimetric constant, and A is the area under the curve. The calorimetric constant will vary from instrument to instrument, and can be determined by analyzing a well-characterized sample with known enthalpies of transition.[4]
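
Applying ΔH = KA is a two-step procedure: determine K from a standard of known transition enthalpy, then integrate the unknown peak. The Python sketch below uses invented Gaussian peaks and assumes an indium calibrant with ΔHfus ≈ 28.6 J/g and 5 mg samples:

import numpy as np

def peak_area(t, y):
    """Trapezoidal integral of a baseline-corrected DSC peak (signal * s)."""
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(t)))

t = np.linspace(0.0, 60.0, 601)                              # time (s)
mass_g = 0.005                                               # 5 mg (assumed)

# Calibration run: indium standard, delta_H_fus ~ 28.6 J/g (peak shape invented).
indium = 0.01 * np.exp(-((t - 30.0) ** 2) / (2 * 3.0 ** 2))
K = (28.6 * mass_g) / peak_area(t, indium)                   # from delta_H = K * A

# Unknown sample of the same mass (peak shape invented).
sample = 0.02 * np.exp(-((t - 35.0) ** 2) / (2 * 4.0 ** 2))
dH_per_g = K * peak_area(t, sample) / mass_g
print(f"transition enthalpy ~{dH_per_g:.1f} J/g")            # ~76 J/g here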

Applications

Fig.: A schematic DSC curve demonstrating the appearance of several common features

Differential scanning calorimetry can be used to measure a number of characteristic properties of a sample. Using this technique it is possible to observe fusion and crystallization events as well as glass transition temperatures Tg. DSC can also be used to study oxidation, as well as other chemical reactions.[3][4][5][6]

Glass transitions may occur as the temperature of an amorphous solid is increased. These transitions appear as a step in the baseline of the recorded DSC signal. This is due to the sample undergoing a change in heat capacity; no formal phase change occurs.[3][5]

As the temperature increases, an amorphous solid will become less viscous. At some point the molecules may obtain enough freedom of motion to spontaneously arrange themselves into a crystalline form. This is known as the crystallization temperature (Tc). This transition from amorphous solid to crystalline solid is an exothermic process, and results in a peak in the DSC signal. As the temperature increases the sample eventually reaches its melting temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The ability to determine transition temperatures and enthalpies makes DSC a valuable tool in producing phase diagrams for various chemical systems.[3]
