Sound Synthesis Theory

Nov 26, 2015

Introduction

This book covers a sub-field of Music Technology called sound synthesis. Although the tone is generally aimed at musicians and people with little prior knowledge of music systems, there may be some mathematical concepts and programming techniques that are not familiar. The book focuses on synthesis from a digital perspective rather than an analogue one, since it aims to demonstrate the theory of digital synthesis rather than applications to a specific medium or piece of software.

The Korg MS10 is an example of an early analogue synthesizer.

What is sound synthesis?

Sound synthesis is the technique of generating sound from scratch, using electronic hardware or software. The most common use of synthesis is musical, where electronic instruments called synthesizers are used in the performance and recording of music. Sound synthesis has many applications, both academic and artistic, and we commonly use synthesizers and synthesis methods to:

- Generate interesting and unique sounds or timbres incapable of being produced acoustically.
- Recreate or model the sounds of real-world acoustic instruments or sounds.
- Facilitate the automation of systems and processes (text-to-speech software, train station P.A. systems).

Background and history

One of the earliest musical synthesizers was Thaddeus Cahill's Teleharmonium, presented to an audience of about nine hundred in 1906. This massive instrument defined a fundamental synthesis technique called additive synthesis, which used combinations of pure tones to generate its sounds. Other early synthesizers used technology derived from electronic analog computers, laboratory test equipment, and early electronic musical instruments. However, it was not until the late 1960s that technology had developed far enough for synthesizers to be a commercial success, most notably with Robert Moog's modular and mini-modular analogue synthesizers.

Since the late 1980s most new synthesizers have been completely digital. At the same time, analogue synthesis has revived in popularity, and in recent years the two trends have combined in the appearance of virtual analog synthesizers: digital synthesizers which model analogue synthesis using digital signal processing (DSP) techniques. Some digital synthesizers now exist in the form of 'softsynth' software that synthesizes sound using conventional PC hardware; others use specialized DSP hardware.

Sound in the Time Domain

The appearance and behaviour of sound waves

Sound is variation in air pressure and density caused by the propagation of waves through a medium. Human hearing systems sense these waves as they cause the ear drum to move; this movement is transduced into other forms of energy inside the ear and is finally sent to the brain as electrical impulses for analysis. Since sound waves are variations in air pressure over time, it is typical to represent them as a varying voltage or a stream of data over time in order to capture, analyse, and reproduce sounds. When visualising the behaviour of sound waves over time, that is, in the time domain, we use the term amplitude to describe the sound level at a point in time. Amplitude is typically represented as a value between -1 and 1, where -1 and 1 represent the maximum amplitude of the signal and 0 represents zero amplitude.

Figure 1.1. A simple sinusoidal waveform represented as varying amplitude over time.

The waveform in Fig. 1.1 is called a sine wave or sinusoid. Sine waves can be considered the fundamental building blocks of sound and are very smooth-sounding, basic tones. The figure demonstrates that the amplitude varies over time, but that the pattern of variance repeats periodically. This short, constant period gives the sine wave its particular qualities.

Figure 1.2. A more complex waveform.

The waveform in Fig. 1.2 is more complicated than the sinusoid in Fig. 1.1. There are peaks and troughs of different amplitudes, and, although the pattern does repeat itself over time (see if you can find it), it is harder to spot. In the same way that a sine wave behaves simply and sounds simple, this sound behaves with greater complexity and also sounds more complex. Detailed, complex sounds that change over time often have no discernible features when viewed this close up - there may be no repeating pattern or behaviour which we can use to tell us something about the sound. As you can see from Fig. 1.1 and 1.2, we are looking at a section of the sound over a very short time scale; it may be necessary to lengthen the time scale in order to gain some information about it.

Figure 1.3. A time-domain plot of a drum kit over 2 seconds.

In Fig. 1.3 we are given a look at a sound over the course of about 2 seconds rather than 2 milliseconds. From this perspective we can see the way the overall sound amplitude changes over time; in particular, the parts with high amplitude can easily be identified as drum hits - they appear suddenly and drop in amplitude very quickly, as one would expect from striking a drum head. It would have been very difficult to tell what kind of instrument was being played if this sound were viewed over a range of a few milliseconds. From this we should conclude that the short and long time interval perspectives show different types of information, and that selecting the right perspective to suit one's needs is important.

Sinusoids, frequency and pitch

As indicated in Fig. 1.1, the sine wave has a periodic form that repeats every T seconds; this repeating interval is known as the period, cycle or wavelength. The wave also has a positive maximum amplitude, A, and a negative maximum amplitude, -A. The frequency, f, of a sine wave is the number of cycles per second and is measured in Hertz (Hz). We can obtain the frequency from the period with the following equation:

    f = 1 / T
Furthermore, we can express a sine wave in the following mathematical form (with angles in radians), which may be useful to programmers interested in creating their own controllable sine functions in code:

    y(t) = A sin(2πft + φ)

where A is the peak amplitude, f is the frequency in Hz, t is time in seconds, and φ is an initial phase offset.
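This form, y(t) = A sin(2πft + φ), can be evaluated directly in code. The following Python sketch is illustrative only (the function and parameter names are our own, not from the text):

```python
import math

def sine_value(amplitude, frequency, t, phase_offset=0.0):
    """Evaluate y(t) = A * sin(2*pi*f*t + phi) at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * frequency * t + phase_offset)

# Eight evenly spaced samples across one cycle of a 1 Hz sine wave.
one_cycle = [sine_value(1.0, 1.0, n / 8) for n in range(8)]
```

Sampling this function at a fixed rate (t = n / samplerate) is exactly what a digital oscillator does, as covered later in the book.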
High frequencies are often associated with words such as 'brightness', whereas low frequencies are often associated with 'depth' or 'bass'. For example, an instrument such as an electric guitar played clean may be called 'bright' or 'sharp', whereas an acoustic double-bass may be referred to as 'dark' and 'warm'. Words like these are not objective quantities we can measure precisely, but are often used in describing the timbre of a particular sound. The frequencies present in a sound make a large contribution to timbre, and there are many different shades of timbre that can be achieved through combinations of different frequencies that make up a sound. The human hearing system also associates frequency with pitch if a particular frequency is sustained or perceived for a period of time, and we associate particular frequencies with particular notes in the standard Western scale:

    Cycle length, t (s)   Frequency (Hz)   Note name
    0.0045                220.0            A3
    0.0040                246.94           B3
    0.0038                261.63           C4
    0.0034                293.66           D4
    0.0030                329.63           E4
    0.0028                349.23           F4
    0.0025                392.0            G4
    0.0022                440.0            A4

Fig. 1.4. The relationship between wavelength, frequency and pitch.

Construction and deconstruction of sinusoids

It has already been mentioned that sine waves can be considered the building blocks of sound. This is possible because a single sine wave represents a single frequency: if we combine a series of different sinusoids, we can theoretically recreate the frequency spectrum of an entire sound, be it real or imagined. In the same way, we can also break a complex sound down into its individual frequency components, allowing us to analyse or even control its minutest characteristics. Both these processes are typically simplified, due to the incredibly complex nature of real-world sounds and the subsequent demands that analysis and modification place on the systems performing the task.

Fig. 1.5 demonstrates the appearance of two sine waves summed together. The characteristics of both waves are combined in the resultant waveform, which, due to its increased complexity, develops new features. We can continue this process by adding more and more sine waves, each one representing a single frequency component of our desired sound. This technique is the basis of additive synthesis, which is covered later in the book. Furthermore, given the way we have constructed this sound, it is possible to filter out the two component frequencies from the whole; this is typically done by analysis of the waveform in the frequency domain, which is covered in the subsequent chapter.

Figure 1.5. The summation of two sine waves of different amplitude and frequency, causing their characteristics to be blended into one combined waveform.
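The summation shown in Fig. 1.5 is simple to reproduce in code: each output sample is just the sum of the two sine values at that instant. A minimal Python sketch (function and parameter names are illustrative, and a 44.1 kHz sample rate is assumed):

```python
import math

SAMPLE_RATE = 44100  # assumed sample rate in Hz

def summed_sines(f1, a1, f2, a2, num_samples):
    """Sample the sum of two sinusoids of different frequency and amplitude."""
    output = []
    for n in range(num_samples):
        t = n / SAMPLE_RATE
        output.append(a1 * math.sin(2 * math.pi * f1 * t) +
                      a2 * math.sin(2 * math.pi * f2 * t))
    return output

# Two harmonically related components blended into one waveform.
blend = summed_sines(220.0, 0.5, 440.0, 0.25, 1024)
```

Adding further sine terms inside the same loop extends this idea toward full additive synthesis.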

Sound in the Digital Domain

Introduction

Digital systems (e.g. computers) and formats (e.g. CD) are clearly the most popular and commonplace means of storing and manipulating audio. Since the introduction of the compact disc in the early 1980s, the digital format has provided increasingly greater storage capacity and the ability to store audio information at an acceptable quality. Although analogue formats still exist (vinyl, tape), they typically serve a niche audience. Digital systems are ubiquitous in modern music technology. It must be stressed that there is no argument here as to whether one domain, analogue or digital, is superior; but the following are some desirable features of working with audio in the digital domain:

- Storage. The amount of digital audio data that can be stored on a modern hard drive is far greater than on a tape system. Furthermore, we can choose the quality of the captured audio data, which relates directly to file size and other factors.
- Control. By storing audio information digitally, we can perform powerful and complex operations on the data that would be extremely difficult to realise otherwise.
- Durability. Digital audio can be copied across devices without any loss of information. Furthermore, many systems employ error correction codes to compensate for wear and tear on a physical digital format such as a compact disc.

Digital-analogue conversion

Acoustic information (sound waves) is treated as signals. As demonstrated in the previous chapter, we traditionally view these signals as varying amplitude over time. In analogue systems, this generally means that the amplitude is represented by a continuous voltage; inside a digital system, however, the signal must be stored as a stream of discrete values.

Figure 2.1. An overview of the digital analogue conversion process.

Digital data stored in this way has no real physical meaning; one could describe a song on a computer as just an array of numbers, and these numbers are meaningless unless there exists within the system a process that can interpret each number in sequence appropriately. Fig. 2.1 shows an overview of the process of capturing analogue sound and converting it into a digital stream of numbers for storage and manipulation in such a system. The steps are as follows:

1. An input such as a microphone converts acoustic air pressure variations (sound waves) into variations in voltage.
2. An analogue-to-digital converter (ADC) converts the varying voltage into a stream of digital values by taking a 'snapshot' of the voltage at a point in time and assigning it a value depending on its amplitude. It typically takes these 'snapshots' thousands of times a second; the rate at which it does so is known as the sample rate.
3. The numerical data is stored on the digital system and subsequently manipulated or analysed by the user.
4. The numerical data is re-read and streamed out of the digital system.
5. A digital-to-analogue converter (DAC) converts the stream of digital values back to a varying voltage.
6. A loudspeaker converts the voltage to variations in air pressure (sound).
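The numerical heart of steps 2 and 5 (quantising a continuous amplitude to an integer and converting it back) can be sketched as a simulation, not real converter code. A 16-bit depth is assumed here, and the function names are our own:

```python
BIT_DEPTH = 16
LEVELS = 2 ** BIT_DEPTH            # 65536 possible amplitude values

def adc(amplitude):
    """Quantise an amplitude in [-1.0, 1.0] to a signed integer sample."""
    return round(amplitude * (LEVELS // 2 - 1))

def dac(sample):
    """Convert a stored integer sample back to an amplitude in [-1.0, 1.0]."""
    return sample / (LEVELS // 2 - 1)

stored = adc(0.5)                  # a 'snapshot' of the analogue voltage
recovered = dac(stored)            # close to, but not exactly, 0.5
```

The small difference between the input and the recovered value is the quantisation error discussed later under bit depth.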

Although the signal at each stage comes in a different form (sound energy, digital values, etc.), the information is analogous. However, due to the nature of the conversion process, this data may become manipulated and distorted. For instance, low values for the sample rate or other factors at the ADC might mean that the continuous analogue signal is not represented with enough detail, and subsequently the information will be distorted. There are also imperfections in physical devices such as microphones which further "colour" the signal in some way. It is for this reason that musicians and engineers aim to use the highest-quality equipment and processes, in order to preserve the integrity of the original sound throughout the chain. Musicians and engineers must also consider what other processes their music will go through before consumption (radio transmission, etc.).

Sampling

Sound waves in their natural acoustic form can be considered continuous; that is, their time-domain graphs are smooth lines at all zoom factors, without any breaks or jumps. We cannot have these breaks, or discontinuities, because sound cannot switch instantaneously between two values. An example of this may be an idealised waveform like a square wave: on paper, it switches between amplitudes 1 and -1 at a point in time instantaneously; however a loudspeaker cannot, by the laws of physics, jump between two points in no time at all - the cone has to travel through a continuous path from one point to the next.

Figure 2.2. Discrete samples (red) of a continuous waveform (grey).

Sampling is the process of taking a continuous, acoustic waveform and converting it into a digital stream of discrete numbers. An ADC measures the amplitude of the input at a regular rate, creating a stream of values which represent the waveform digitally. The output is then created by passing these values to the DAC, which drives a loudspeaker appropriately. By measuring the amplitude many thousands of times a second, we create a "picture" of the sound which is of sufficient quality to human ears. The more we increase this sample rate, the more accurately a waveform is represented and reproduced.

Nyquist-Shannon sampling theorem

The frequency of a signal has implications for its representation, especially at very high frequencies. As discussed in the previous chapter, the frequency of a sine wave is the number of cycles per second. If we have a sample rate of 20000 samples per second (20 kHz), it is clear that a high-frequency sinusoid such as one at 9000 Hz is going to have fewer "snapshots" per cycle than a sinusoid at 150 Hz. Eventually there comes a point where there are not enough sample points per cycle to record the waveform, which leads us to the following important requirement:

The sample rate must be greater than twice the maximum frequency represented.

Why is this? The minimum number of sample points required to represent a sine wave is two, but we need at least slightly more than this so that we are not dependent on phase (with samples at exactly twice the sine wave frequency, the samples may fall on the peaks of the sine wave, or on the zero crossings). It may seem apparent that using just two points per cycle to represent a continuous curve such as a sinusoid would result in a crude approximation - a square wave. And, inside the digital system, this is true. However, both ADCs and DACs have low-pass filters set at half the sample rate (the highest representable frequency). What this means for input and output is that any frequency above the cut-off point is removed, and it follows that the crude sine representation - a square wave in theory - is filtered down to a single frequency (i.e. a sine wave). From this, we have two mathematical results:

    f_s > 2 × f_max

and

    f_N = f_s / 2

where f_s is the sample rate and f_max is the highest frequency in the signal. f_N is the Nyquist frequency. Frequencies over the Nyquist frequency are normally blocked by filters before conversion to the digital domain when recording; without such processes there would be frequency-component foldover, otherwise known as aliasing.

Sampling accuracy and bit depth

It has been established that the higher the sample rate, the more accurate the representation of a waveform in a digital system. Although there are many reasons and arguments for higher sample rates, there are two general standards: 44100 samples per second and 48000 samples per second, with the former being the most commonplace. The main consideration here is the fact that the human hearing range extends, at maximum, to an approximate limit (varying from person to person) of 20000 Hz. Frequencies above this are inaudible. Considering the example of 44.1 kHz, we find that the Nyquist frequency evaluates to 22050 Hz, which is more than the human hearing system is capable of perceiving. There are other reasons for this particular sample rate, but they are beyond the scope of this book.
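Aliasing, mentioned above, can be demonstrated numerically: a sinusoid above the Nyquist frequency produces exactly the same sample stream (up to sign) as one "folded" back below it. A Python sketch, assuming a 44.1 kHz sample rate:

```python
import math

FS = 44100                 # sample rate; the Nyquist frequency is 22050 Hz
f_high = 30000.0           # a tone well above the Nyquist frequency
f_folded = FS - f_high     # 14100 Hz: the frequency it aliases to

high = [math.sin(2 * math.pi * f_high * n / FS) for n in range(64)]
folded = [math.sin(2 * math.pi * f_folded * n / FS) for n in range(64)]

# The two streams are identical apart from sign: once sampled, the 30 kHz
# tone is indistinguishable from an (inverted) 14.1 kHz tone.
indistinguishable = all(abs(h + f) < 1e-6 for h, f in zip(high, folded))
```

This is why anti-aliasing filters must remove such frequencies before conversion rather than after.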

Figure 2.3. Effects of increased sample rate and bit depth on representing a continuous analogue signal.

There is one more important factor to consider in the sampling process: bit depth. Bit depth represents the precision with which the amplitude of each sample is measured. In the same way that there are a limited number of samples per second in a conversion process, there are also a limited number of possible amplitude values for a sample point, and the greater the number, the greater the accuracy. A common bit resolution found in most standard digital audio systems (Hi-Fi, compact disc) is 16 binary bits, which allows for a range of 65536 (2^16) individual amplitude values at a point in time. Lower bit depths result in greater distortion of the sound - a two-bit system (2^2) only allows for four different amplitudes, which results in a massively inaccurate approximation of the input signal.

Oscillators and Wavetables

Oscillators

Figure 5.1. Sine, square, triangle, and sawtooth waveforms.

An oscillator is a repeating waveform with a fundamental frequency and peak amplitude, and it forms the basis of most popular synthesis techniques today. Aside from the frequency or pitch of the oscillator and its amplitude, one of the most important features is the shape of its waveform. The time-domain waveforms in Fig. 5.1 show the four most commonly used oscillator waveforms. Although it is possible to use all kinds of unique shapes, these four each serve a range of functions suited to different synthesis techniques, ranging from the smooth, plain sound of a sine wave to the harmonically rich buzz of a sawtooth wave.

Oscillators are generally controlled by a keyboard synthesiser or MIDI protocol device. A key press will produce a MIDI note value, which is converted to a frequency value (Hz) that the oscillator accepts as its input, and the waveform period will repeat according to the specified frequency. From here, the sound can be processed or manipulated in a variety of ways in the synthesizer or program to enrich or modify it further.

Generating oscillator waveforms

Sine wave

As mentioned previously, the sine wave can be considered the most fundamental building block of sound. The best way to generate an oscillator which produces this waveform is to make use of an inbuilt library or function in the system concerned; many programming languages have standard mathematics libraries with the trigonometric functions represented. A cycle of a sine wave is 2π radians long and has a peak amplitude of 1, as shown in Fig. 5.2. In a digital system, the generated wave will be a series of equally-spaced values at the sample rate.

Figure 5.2. One cycle of a sine wave over a phase of 2π radians.

With a sample rate of 44100 samples per second and a required cycle length of 1 second, it will take 44100 samples to get from 0 to 2π. In other words, we can determine the steps per cycle, N, from the cycle length t:

    N = f_s × t

where f_s is the sample rate. Each step will therefore advance the phase by the following amount in radians:

    Δ = 2π / N

or

    Δ = (2π × f) / f_s

where the second form gives the same result in terms of the frequency f. The importance of this is that it can be expanded into an algorithm suitable for generating a sinusoidal wave of a user-specified frequency and amplitude - effectively the simplest synthesizer possible! A sinusoidal wave can be generated by repeatedly incrementing a phase value by the amount required to complete the desired number of cycles per second at the sample rate. This phase value is passed to a sine function to create the output value, scaled by the user-specified peak amplitude.

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

y = A * sin(phase)

phase = phase + ((2 * pi * f) / samplerate)

if phase > (2 * pi) then phase = phase - (2 * pi)

The most important thing to note about this algorithm is that when the phase value has exceeded 2π, one whole period (2π) is subtracted from it. This ensures that the phase "wraps" round to the correct position instead of going straight back to 0; if a phase increment oversteps 2π and simply resets to 0, undesirable discontinuities occur, causing harmonic distortion in the oscillator's sound.

Square wave

The square wave cannot be generated from a mathematical function library so easily, but once again the algorithm is particularly straightforward, since the waveform is constructed from straight line segments. Unlike the sine wave, square waves have many harmonics above their fundamental frequency and have a much brighter, sharper timbre. After examining a number of different waveforms it will start to become apparent that waveforms with steep edges and/or abrupt changes and discontinuities are usually harmonically rich.

(Note that the following square, sawtooth, and triangle functions are "naive"; they are equivalent to sampling the ideal mathematical functions without first bandlimiting them. In other words, all of the harmonics above the Nyquist frequency will be aliased back into the audible range. This is most obvious when sweeping one of these waveforms into the high frequencies: the aliased harmonics move up and down the frequency spectrum, making "radio tuning" sounds in the background. A better method of producing waveforms for audio would be additive synthesis, or something like MinBLEPs. A properly bandlimited waveform will have "jaggies" as you approach the discontinuities, instead of piecewise straight lines.)
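The sine-wave pseudocode above translates directly into a runnable function. This Python sketch follows the same phase-accumulator pattern, with names of our own choosing:

```python
import math

def sine_oscillator(amplitude, frequency, samplerate, num_samples):
    """Generate sine samples with a phase accumulator, as in the pseudocode."""
    increment = (2 * math.pi * frequency) / samplerate
    phase = 0.0
    output = []
    for _ in range(num_samples):
        output.append(amplitude * math.sin(phase))
        phase += increment
        if phase > 2 * math.pi:
            phase -= 2 * math.pi   # wrap, keeping the fractional overshoot
    return output

one_second_a4 = sine_oscillator(0.8, 440.0, 44100, 44100)  # one second of A4
```

Swapping the `math.sin` call for a different waveshape function yields the square, sawtooth, and triangle variants described below.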

Figure 5.3. One cycle of a square wave over a phase of 2π radians.

The square wave is constructed in a very similar fashion to the sine wave; we use the same approach of cycling through a pattern with a phase variable and resetting once we exceed 2π radians.

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

if phase < pi then y = A
else y = -A

phase = phase + ((2 * pi * f) / samplerate)

if phase > (2 * pi) then phase = phase - (2 * pi)

As is evident, there is no reliance on an external function: the square wave can be defined by simple arithmetic, since it essentially switches between two values per cycle. One can expand on this by introducing a new variable which controls the point in the cycle at which the output switches to its negated value; the resulting waveform is known as a pulse wave. Pulse waves are similar in character to square waves, but the ability to modulate the switching point offers greater sonic potential.

Sawtooth wave

Figure 5.4. One cycle of a sawtooth wave over a phase of 2π radians.

The sawtooth wave is more similar in sound to a square wave than to a sine, although its harmonics decay in amplitude and it has an appropriately "buzzy" timbre. It is constructed out of diagonal, sloping line segments, and as such requires a line gradient equation in the algorithm. The mathematical form is:

    y = A - (A / π) × phase

where A represents the peak amplitude and phase is the phase in radians (0 to 2π). This can be incorporated into the algorithmic form as follows:

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

y = A - (A / pi * phase)

phase = phase + ((2 * pi * f) / samplerate)

if phase > (2 * pi) then phase = phase - (2 * pi)

Triangle wave

Figure 5.5. One cycle of a triangle wave over a phase of 2π radians.

The triangle wave shares many geometric similarities with the sawtooth wave, except that it has two sloping line segments. The algebra is slightly more complex, and programmers may wish to consider consolidating the line generation into a new function for ease of reading. Triangle waves contain only odd-integer harmonics of the fundamental, and have a far softer timbre than square or sawtooth waves, nearer to that of a sine wave. The mathematical forms of the two line segments are:

For 0 to π radians:

    y = -A + (2A / π) × phase

For π to 2π radians:

    y = 3A - (2A / π) × phase

The algorithm, then, is similar to the previous examples, but with the gradient equations incorporated into it. From the example algorithms presented here it is evident that a range of different waveshapes can be designed, with the caveat that the shapes must be describable as mathematical functions. Complex shapes may become very demanding due to the increased processing power required for more complicated mathematical statements.

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

if phase < pi then y = -A + (2 * A / pi) * phase
else y = (3 * A) - (2 * A / pi) * phase

phase = phase + ((2 * pi * f) / samplerate)

if phase > (2 * pi) then phase = phase - (2 * pi)

Wavetables

There may be a desire to escape the limitations or complexity of defining an oscillator waveform using mathematical formulae or line segments. As mentioned before, this could be a concern about processing power, or simply the fact that it would be easier to specify the shape through an intuitive graphical interface. In cases like these, musicians and engineers may use wavetables as their source oscillator. Wavetables are popular in digital synthesis applications because accessing a block of memory is computationally faster than calculating values using mathematical operations.

Figure 5.6. The basic structure of a wavetable oscillator.

The wavetable is in essence an array of N values, with values 1 through N representing one whole cycle of the oscillator. Each value represents an amplitude at a certain point in the cycle. Wavetables are often displayed graphically, with the option for the user to draw in the required waveshape, and as such they represent a very powerful tool. There is also the possibility of loading a pre-recorded waveshape; but note that a wavetable oscillator is only a reference table for one cycle of a waveform - it is not the same as a sampler. The wavetable has associated with it a read pointer which cycles through the table at the required speed and outputs each amplitude value in sequence, so as to recreate the waveform as a stream of digital values. When the pointer reaches the last value in the table array, it resets to position one and begins a new cycle.

Using wavetables

The size of the wavetable and the sampling rate of the system determine the fundamental frequency of the wavetable oscillator. If we have a wavetable with 1024 individual values and a sampling rate of 44.1 kHz, it will take:

    1024 / 44100 ≈ 0.0232

seconds to complete one cycle. As previously shown, frequency can be determined from f = 1/T, giving us a fundamental frequency of:

    1 / 0.0232 ≈ 43.07 Hz.

It therefore becomes apparent that, in order to change the frequency of our oscillator, we must change either the size of the wavetable or the sampling rate of the system. There are real problems with both approaches:

- Changing the wavetable size means switching to a different-sized wavetable containing the same waveform. This would require dozens, hundreds, or even thousands of individual wavetables, one for each pitch, which is obviously inefficient and memory-consuming.
- Digital systems, especially mixers that combine several synthesized or recorded signals, are designed to work at a fixed sampling rate, and making sudden changes to it is once again inefficient and extremely hard to program. The sample rate required to play high frequencies with an acceptable level of precision also becomes very high and puts great demand on the system.

One of the most practical and widely used approaches to playing a wavetable oscillator at different frequencies is to change the size of the "steps" that the read pointer makes through the table. As in our previous example, our 1024-value wavetable has a fundamental frequency of about 43.07 Hz when every point in the table is output. Now, if we stepped through the table every 5 values, we would have:

    43.07 × 5 ≈ 215.3 Hz.

From this follows a general formula for calculating the required step size, S, for a given frequency, f:

    S = (f × N) / f_s

where N is the size of the wavetable and f_s is the sample rate. It is important to note that, because the step size is being altered, the read pointer may not land exactly on the final table value N, and so it must "wrap around" in the same fashion as the functionally generated waveforms in the earlier section. This can be done by subtracting the size of the table from the current pointer value if it exceeds N; the algorithmic form of this can easily be gleaned from the examples above.
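The step-size arithmetic can be sketched in a few lines of Python, assuming the 1024-point table and 44.1 kHz rate used above (the function names are our own):

```python
N = 1024      # wavetable size
FS = 44100    # sample rate in Hz

def step_size(frequency):
    """Read-pointer step for a given frequency: S = (f * N) / fs."""
    return (frequency * N) / FS

def advance(pointer, step):
    """Advance the read pointer, wrapping around the end of the table."""
    pointer += step
    if pointer >= N:
        pointer -= N
    return pointer

# A step of 1.0 reproduces the table's own fundamental (about 43.07 Hz).
fundamental_step = step_size(FS / N)
```

Note that `step_size` returns a fractional value for most frequencies, which is exactly the problem the next section addresses.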

Frequency precision and interpolation

We must consider that some frequency values may generate a step size that has a fractional part; that is, it is not an integer but a rational number. In this case the read pointer will try to step to locations in the wavetable array that do not exist, since each member of the array has an integer-valued index. There may be a value at position 50, but what about position 50.5? If we wish to play a frequency that requires a fractional step size, we must consider ways to accommodate it:

Truncation and rounding. By removing the fraction after the decimal point we reduce the step size to an integer; this is truncation. For instance, 1.3 becomes 1, and 4.98 becomes 4. Rounding is similar, but chooses the closest integer: 3.49 becomes 3, and 8.67 becomes 9. For simple rounding, if the value after the decimal point is less than 5 we round down (truncate), otherwise we round up to the next integer. Rounding may be supported by the processor at no cost, or can be done by adding 0.5 to the original value and then truncating to an integer. For wavetable synthesis, the only difference between truncation and rounding is a constant 0.5-sample phase shift in the output. Since that is not detectable, and is neither an improvement nor a detriment, the decision between truncation and rounding comes down to whichever is more convenient or quicker.

Linear interpolation. This is the method of drawing a straight line between the two integer positions around the step location and using the values at both points to generate an amplitude value that interpolates between them. This is a more computationally demanding process but introduces greater precision.

Higher-order interpolation. With linear interpolation considered first-order interpolation (and truncation and rounding considered zero-order interpolation), there are many higher-order forms in common use: cubic Hermite, Lagrangian, and others. Just as linear interpolation requires two points for the calculation, higher orders require more wavetable points, but produce a more accurate, lower-distortion result. Sinc interpolation can be made arbitrarily close to perfect, at the expense of computation time.

By increasing the wavetable size, the precision of the above processes becomes greater and will result in a closer fit to the idealised, intended curve. Naturally, large wavetable sizes result in greater memory requirements. Some wavetable synthesizer hardware designs prefer table sizes that are powers of two (128, 256, 512, 1024, 2048, etc.), due to shortcuts that exploit the binary organisation of digital memory.
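The zero-order and first-order lookups can be compared in a short sketch; this hypothetical Python example is illustrative only:

```python
import math

N = 256
table = [math.sin(2 * math.pi * i / N) for i in range(N)]

def lookup_truncate(pos):
    """Zero-order lookup: drop the fractional part of the position."""
    return table[int(pos) % N]

def lookup_linear(pos):
    """First-order lookup: blend the two neighbouring table values."""
    i = int(pos) % N
    frac = pos - int(pos)
    a, b = table[i], table[(i + 1) % N]   # wrap the upper neighbour
    return a + frac * (b - a)             # straight line between a and b

# At position 50.5, truncation simply returns table[50], while linear
# interpolation returns a value midway between table[50] and table[51].
mid = lookup_linear(50.5)
```

The `% N` on the upper neighbour handles the wrap-around case where the fractional position falls between the last and first table entries.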

Additive Synthesis

Introduction

As previously discussed in Section 1, sine waves can be considered the building blocks of sound. In fact, it was shown in the 19th century by the mathematician Joseph Fourier that any periodic function can be expressed as a series of sinusoids of varying frequencies and amplitudes. This concept of constructing a complex sound out of sinusoidal terms is the basis for additive synthesis, sometimes called Fourier synthesis for this reason. The concepts of additive synthesis have also existed in practice since the introduction of the organ, where pipes of varying pitch are combined to create a sound or timbre.

Figure 6.1. Additive synthesis block diagram.

A simple block diagram of the additive form may appear as in Fig. 6.1, which has a simplified mathematical form based on the Fourier series:

y(t) = a_0 + Σ a_k sin(2π k f t)   (summed over k = 1 to N)

where a_0 is an offset value for the whole function (typically 0), a_k are the amplitude weightings for each sine term, and k is the frequency multiplier. With hundreds of terms, each with its own individual frequency and amplitude weighting, we can design and specify some incredibly complex sounds, especially if we can modulate the parameters over time. One of the key features of natural sounds is that they have a dynamic frequency response that does not remain fixed. However, a popular approach to the additive synthesis system is to use frequencies that are integer multiples of the fundamental frequency, which is known as harmonic additive synthesis. For example, if the first oscillator's frequency, f_1, represents the fundamental frequency of the sound at 100 Hz, then the second oscillator's frequency would be 2 × f_1 = 200 Hz, the third 3 × f_1 = 300 Hz, and so on. This series of sine waves produces an even "harmonic" sound that can be described as "musical". Oscillator frequency relationships that are not integer-related, on the other hand, are called "inharmonic" and tend to be noisier, taking on the characteristics of bells or other percussive sounds.

Constructing common harmonic waveforms in additive synthesis

Figure 6.2. The first four terms of a square wave constructed from sinusoidal components (partials).

If we know the amplitude weightings and frequency components of the first N sinusoidal components or partials of a complex waveform, we can reconstruct that waveform using an additive system with N oscillators. The popular square, sawtooth and triangle waveforms are harmonic waveforms because their constituent sinusoidal components all have frequencies that are integer multiples of the fundamental. The property that distinguishes them in this form is that each has unique amplitude weightings for its sinusoids. Fig. 6.2 demonstrates the appearance of the time-domain waveform as a set of sines at unique amplitude weightings is added together; in this case the form begins to approximate a square wave, with the accuracy increasing with each added partial. Note that to construct a square wave we only include odd-numbered harmonics: the amplitude weightings for harmonics 2, 4, 6, etc. are 0. Below is a table of the partial amplitude weightings of the common waveshapes:

Waveshape   1    2    3     4    5     6    7      8    9      General rule
Sine        1    0    0     0    0     0    0      0    0      fundamental only
Square      1    0    1/3   0    1/5   0    1/7    0    1/9    1/k for odd k
Triangle    1    0   -1/9   0    1/25  0   -1/49   0    1/81   1/k^2 for odd k, alternating + and -
Sawtooth    1    1/2  1/3   1/4  1/5   1/6  1/7    1/8  1/9    1/k for all k
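As a sketch of harmonic additive synthesis, the square-wave weightings from the table above (amplitude 1/k for odd harmonics k) can be summed directly; this minimal Python example is illustrative, not from the original text:

```python
import math

def additive_square(f0, fs, num_samples, num_partials):
    """Sum odd harmonics of f0 with 1/k amplitude weightings."""
    out = []
    for n in range(num_samples):
        t = n / fs
        # Only odd k contribute; even harmonics have zero weighting.
        sample = sum((1.0 / k) * math.sin(2 * math.pi * k * f0 * t)
                     for k in range(1, num_partials + 1, 2))
        out.append(sample)
    return out

# Approximate a 100 Hz square wave from partials 1, 3, 5, 7 and 9.
wave = additive_square(100.0, 44100.0, 441, 9)
```

Swapping the weighting rule (for example 1/k for all k) yields the other waveshapes in the table.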

A conclusion you may draw from Fig. 6.2 and the table is that it requires a large number of frequency partials to create a waveform that closely approximates the idealised mathematical forms introduced in Section 5. For this reason, additive synthesis techniques are perhaps not the best method for producing these forms. The strength of additive synthesis lies in the fact that we can exert control over every partial component of our sound, which can produce some very intricate and wonderful results. With constant modification of the frequency and amplitude values of each oscillator, the possibilities are endless. Some examples of ways to control the weightings and frequencies of each component oscillator are:

Manual control. The user controls a bank of oscillators with an external control device (typically MIDI), tweaking the values in real time. More than one person can join in and alter the timbre to their whims.

External data. Digital information from another source is taken and converted into appropriate frequency and amplitude values. The varying data source is then effectively in 'control' of the timbral outcome. Composers have been known to use data from natural sources, or pieces derived from interesting geometric, aleatoric and mathematical models.

Recursive data. Given a source set of values and a set of algorithmic rules, the control parameters reference the previous value entered into the system to determine the next one. Users may wish to "interfere" with the system to set the process on a new path. See Markov chains.

There is, however, the major consideration of computational power: complex sounds may require many oscillators operating at once, which places a heavy demand on the system in question.

Additive resynthesis

In Section 1 it was mentioned that just as it is possible to construct waveforms using additive techniques, we can analyse and deconstruct waveforms as well. It is possible to analyse the frequency partials of a recorded sound and then resynthesize a representation of it using a series of sinusoidal partials. By calculating the frequency and amplitude weighting of partials in the frequency domain (typically using a Fast Fourier transform), an additive resynthesis system can construct an equally weighted sinusoid at the same frequency for each partial. Older techniques rely on banks of filters to separate each sinusoid; their varying amplitudes are used as control functions for a new set of oscillators under the user's control. Because the sound is represented by a bank of oscillators inside the system, a user can make adjustments to the frequency and amplitude of any set of partials. The sound can be 'reshaped', by alterations made to the timbre or the overall amplitude envelope, for example. A harmonic sound could be restructured to sound inharmonic, and vice versa.

Subtractive Synthesis

Whereas additive synthesis is the process of combining individual sinusoidal partials to construct a complex sound, subtractive synthesis is essentially the reverse of this process. Starting with a harmonically (or partially) rich sound, a subtractive system filters and modifies the signal to reduce it to a desired form. By doing this, one can use one or two oscillators instead of a bank of ten to achieve similar sonic results. Subtractive synthesis is an extremely popular method that has been employed in hardware and software synthesizers since their popularity skyrocketed in the 1970s. It can be found traditionally in old modular or compact analogue synthesizers as well as in modern virtual analogue models and software synthesizers. Fig. 7.1 illustrates a simple block diagram of a subtractive system.

Figure 7.1. Block diagram of a simplified subtractive synthesis system.
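As a minimal sketch of the subtractive idea, the following hypothetical Python example filters a harmonically rich sawtooth through a simple one-pole lowpass; the coefficient formula is a common textbook approximation, not taken from this book:

```python
import math

def sawtooth(f0, fs, num_samples):
    """Naive sawtooth oscillator rising from -1 to +1 each cycle."""
    return [2.0 * ((n * f0 / fs) % 1.0) - 1.0 for n in range(num_samples)]

def one_pole_lowpass(signal, cutoff, fs):
    """One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    # Standard approximation relating the coefficient to the cutoff.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)     # smooth the signal, attenuating highs
        out.append(y)
    return out

raw = sawtooth(110.0, 44100.0, 2048)         # rich source spectrum
dark = one_pole_lowpass(raw, 500.0, 44100.0)  # filtered, "darker" tone
```

Real subtractive synthesizers use resonant filters with controllable cutoff, but the shape of the system (rich oscillator into filter) is the same as in Fig. 7.1.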

Modulation Synthesis

Introduction

When we talk about modulation from an audio synthesis point of view, we refer to a time-varying signal (the carrier) being affected in some way by another (the modulator). Modulation can be found in a range of different sound effects and synthesis techniques, and some of these effects occur naturally and help us to identify certain types of sound; for instance, the common performance styles of tremolo (modulation of amplitude) and vibrato (modulation of frequency) used on many stringed instruments are examples of this. Modulation is typical in synthesis because it enriches the character of the sound and adds the variation in timbre over time that is so often found in nature.

Figure 8.1. Unipolar (between 0 and 1) and bipolar (between -1 and +1) waveforms.

The two basic methods of modulation synthesis, ring modulation and amplitude modulation, are distinguished by the type of signal each uses: bipolar or unipolar. A bipolar signal is the type of signal we have been examining in previous chapters; it has both negative and positive amplitude, and the waveform generally "rests" around zero in a time-domain plot. A unipolar signal is a bipolar signal that has been constant-shifted; that is, a constant value is added to the overall signal to shift it into a range above zero, typically between 0 and 1. The reason for these two different types of signal follows.

Ring Modulation

Ring modulation is the multiplication of two bipolar audio signals by each other. Each value of a carrier signal, C, is multiplied by a modulator signal, M, to create a new ring-modulated signal, R:

R(t) = C(t) × M(t)

There are different ways to implement this; the simplest is to multiply the two signals directly, but alternatively the output of a modulator module can drive the amplitude input of a carrier module. The frequency of the modulator signal plays an important role in the character of the RM signal. From this, we obtain the following important result:

If the frequency of M is under 20 Hz or so, we will generally perceive the tremolo effect, where the amplitude of C varies at the frequency of M. Periodic signals with a frequency below 20 Hz are called low-frequency oscillators.

When the frequency of M is in the audible range, that is, 20 Hz or more, there is an effect on the timbre of the signal. The variations in amplitude become fast enough that the modulator generates a set of frequency sidebands. With two sine waves as carrier and modulator, RM will generate a frequency spectrum containing two sidebands, at the sum and difference of the carrier and modulator frequencies. The carrier frequency itself is removed from the spectrum, leaving two harmonic sidebands (if the frequencies of C and M are in an integer ratio to one another) or two inharmonic sidebands (if the ratio is otherwise). For instance, if the carrier is 900 Hz and the modulator is 500 Hz, we will get two sidebands: one at 400 Hz (900 - 500) and one at 1400 Hz (900 + 500). If C and M are not sine waves (i.e. their waveforms are more complex), then the resultant signal will contain many sidebands at different frequencies and amplitudes, indicating a more complex sound. Figure 8.2 illustrates two examples of ring modulation: the original example with C = 900 Hz and M = 500 Hz, but also C = 400 Hz and M = 1000 Hz, which introduces negative frequencies into the spectrum. This results in a "wrapping" phenomenon: the difference sideband of C and M is -600 Hz, so a sideband occurs at 600 Hz instead. This is true for any negative frequency; a sideband will occur at its unsigned (positive) frequency.
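The sum-and-difference result can be checked numerically; this hypothetical Python sketch multiplies two sine waves and probes individual DFT bins (one second of samples gives 1 Hz bins):

```python
import cmath
import math

fs = 8000
n_samples = 8000                       # 1 second -> 1 Hz frequency bins
fc, fm = 900.0, 500.0                  # carrier and modulator, as in the text

# Ring modulation: plain multiplication of two bipolar sine waves.
ring = [math.sin(2 * math.pi * fc * n / fs) *
        math.sin(2 * math.pi * fm * n / fs)
        for n in range(n_samples)]

def dft_magnitude(signal, freq_bin):
    """Magnitude of a single DFT bin (naive, but fine for a demo)."""
    N = len(signal)
    return abs(sum(x * cmath.exp(-2j * cmath.pi * freq_bin * n / N)
                   for n, x in enumerate(signal)))

# Sidebands appear at 400 Hz (difference) and 1400 Hz (sum),
# while the 900 Hz carrier itself vanishes from the spectrum.
```

This matches the identity sin(a)·sin(b) = ½[cos(a-b) - cos(a+b)]: only the sum and difference frequencies survive.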

Figure 8.2. Frequency-domain spectra of ring-modulated signals: a) a carrier frequency of 900 Hz and modulator of 500 Hz, and b) a carrier frequency of 400 Hz and modulator of 1000 Hz, showing the emergence of negative frequencies into the audible spectrum.

Amplitude Modulation

Amplitude modulation is similar to ring modulation, except that it uses a unipolar modulator. The amplitude of the carrier signal, C, is modulated by the unipolar modulator, M. At infrasonic frequencies (below 20 Hz), the modulator serves to attenuate or boost the amplitude of the signal; a simple example is a typical ADSR envelope, which scales the amplitude of the carrier signal over time from 0 to 1 and back down again. For synthesis techniques, however, we generally consider the effect that periodic modulator signals above 20 Hz have on a carrier. Once again, the mathematical form is simply the product of the two signals:

A(t) = C(t) × M(t)

where C is the carrier signal and M is a unipolar modulator, typically set to vary between values of 0 and 1. Without the unipolar modulator, this technique would be identical to ring modulation. Like ring modulation, amplitude modulation produces a pair of sidebands for every sinusoidal component in the carrier and modulator, and these sidebands are generated at the sum and difference of the two signal frequencies. The difference between the two techniques is highlighted here: in AM the carrier frequency is preserved, and the sidebands generated are at half the amplitude of the carrier.
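The preserved carrier and half-amplitude sidebands can also be verified numerically; this hypothetical Python sketch is illustrative only:

```python
import cmath
import math

fs, n_samples = 8000, 8000             # 1 second of samples -> 1 Hz bins
fc, fm = 900.0, 500.0                  # carrier and modulator frequencies

def unipolar(x):
    """Shift a bipolar value in [-1, 1] into the range [0, 1]."""
    return 0.5 * (x + 1.0)

# Amplitude modulation: carrier times a unipolar modulator.
am = [math.sin(2 * math.pi * fc * n / fs) *
      unipolar(math.sin(2 * math.pi * fm * n / fs))
      for n in range(n_samples)]

def dft_magnitude(signal, freq_bin):
    """Magnitude of a single DFT bin (naive, but fine for a demo)."""
    N = len(signal)
    return abs(sum(x * cmath.exp(-2j * cmath.pi * freq_bin * n / N)
                   for n, x in enumerate(signal)))

carrier_level = dft_magnitude(am, 900)    # preserved, unlike in RM
sideband_level = dft_magnitude(am, 1400)  # sum sideband at half strength
```

Expanding C·(½ + ½M) = ½C + ½C·M shows why: the ½C term keeps the carrier in the spectrum, while the C·M product contributes sidebands at half the ring-modulation strength.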

Figure 8.3. The frequency-domain spectrum of an amplitude-modulated signal. The two sidebands are the sum and difference frequencies of the carrier and modulator, C and M, and have amplitudes at half the amplitude of the carrier signal.

One of the advantages of AM, like its cousin RM, is that using just two signals or oscillators we can create some partial-rich signals. Using a harmonically dense source such as a square wave oscillator can create a wealth of sidebands from a minimum of control parameters and computation. Control over these generated partials may not, however, be as detailed and straightforward as in techniques such as additive synthesis. As a result, AM and RM are used more often in signal processing than in signal generation.

Figure 8.4. A time-domain plot of an amplitude-modulated signal: M(t), the sinusoidal 10 Hz modulator signal; C(t), the sinusoidal 220 Hz carrier signal; and A(t), the two combined using amplitude modulation.

Expanding on amplitude modulation requires us to introduce more parameters and elements to give the technique some "weight" alongside other, more popular techniques. For instance, we can introduce a unipolar low-frequency oscillator set to control the amplitude of the modulator; by changing the amplitude of the modulator we modify what is known as the modulation index, a factor which controls the strength of the AM sidebands. In addition to modifying the amplitude of the modulator, we can also modify its frequency. As you may expect, this shifts the frequencies of the sidebands generated through the AM process and, carefully controlled, can produce some interesting, dynamic sounds that are hard to achieve with other techniques. Breaking away from sinusoidal oscillators for both carrier and modulator, and even modulating the modulation index itself, is one of the first steps to exploring this technique; try experimenting with the waveshapes introduced in the previous chapters.

Physical Modelling Synthesis

Introduction

Physical modelling synthesis is not a single technique, but rather a family of approaches to synthesis. Physical modelling systems attempt to model the propagation of sound energy in a system, typically starting with a mathematical model or algorithm that is often recursive. Physical modelling techniques often begin by replicating the basic structure of an acoustic instrument and hence mimic the sound it makes when 'excited'. This excitation normally comes in the form of an initial impulse, typically a short burst of noise. The main advantage of this approach is that one can generate convincing acoustic sounds, such as a plucked string or a drum hit. Other advantages include the ability to tweak parameters (e.g. instrument body density, string length) in order to create specific types of instrument from a generic model. In the extreme, one can specify strange and unrealistic parameters in order to model instruments that are impossible to realise physically! Physical modelling systems typically employ delay lines in their structure, which can mean that the output of the system goes out of control and creates unwanted feedback.

The Karplus-Strong Algorithm

Alexander Strong and Kevin Karplus developed software and hardware implementations of this algorithm, naming it "Digitar" synthesis.
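The algorithm itself is not detailed here, but its widely described basic form is a noise-filled delay line recirculated through a two-point average; a hypothetical Python sketch might look like this:

```python
import random

def karplus_strong(freq, fs, num_samples, seed=0):
    """Generate a plucked-string-like tone at roughly `freq` Hz."""
    rng = random.Random(seed)
    delay = int(fs / freq)                       # delay line length sets pitch
    # Excite the "string" with a short burst of noise (the impulse).
    line = [rng.uniform(-1.0, 1.0) for _ in range(delay)]
    out = []
    for _ in range(num_samples):
        first = line[0]
        avg = 0.5 * (line[0] + line[1])          # lowpass causes energy decay
        out.append(first)
        line = line[1:] + [avg]                  # O(n) shift, kept simple here
    return out

pluck = karplus_strong(220.0, 44100.0, 4096)
```

The averaging filter attenuates high frequencies on every pass around the loop, so the noise burst quickly settles into a decaying, pitched tone, much like a plucked string.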

Synthesis software / tools

Analysis
Audacity - Wave editor with analysis and basic synthesis / modification tools. Free
SPEAR - Spectrum analysis and resynthesis tool. Free

Hosts (VST, MIDI etc.)
VSTHost - Host for VST plugins. Windows. Free

Modular / environment
Kyma - Sound design environment. Commercial
Max / MSP - Audio processing, synthesis and analysis environment. Commercial
Reaktor - Audio processing, synthesis and analysis environment. Commercial
Synthedit - Modular synthesizer / VST plugin builder. Commercial
Puredata - Audio processing, synthesis and analysis environment. Free

Programming
SuperCollider - Object-oriented audio programming language. Free
CSound - Audio programming language. Free
Nyquist - Audio programming language. Free

Synthesizers
SynFactory - Modular software synthesizer. Free

Links and Bibliography

Links
Sound On Sound Magazine - Many articles on synthesis available to view online.
The Theory and Techniques of Electronic Music - Electronic music manual written by Miller Puckette, inventor of Puredata.
Mathematics Of The Discrete Fourier Transform - Electronic manual on the application of mathematics to many areas of digital sound synthesis / processing, by Julius O. Smith, inventor of digital waveguide synthesis.

Related / Useful Wikibooks
Control Systems - Inter-disciplinary engineering text that analyzes the effects and interactions of mathematical systems.

Bibliography
Boulanger, R. 2000. The CSound Book. London: MIT Press. ISBN 978-0262522618
Loy, G. 2006. Musimathics: Mathematical Foundations Of Music, Vol. 1. MIT Press. ISBN 978-0262122825
Loy, G. 2007. Musimathics: Mathematical Foundations Of Music, Vol. 2. MIT Press. ISBN 978-0262122856
Roads, C. 1996. The Computer Music Tutorial. London: MIT Press. ISBN 978-0262680820
Roads, C. 2002. Microsound. London: MIT Press. ISBN 978-0262182157