The Fourier Integral / Transform Explained

Magic is real, and it all comes from the Fourier Integral.  But one doesn’t become a wizard without a little reading first – so, the purpose of this article is to explain the Fourier Integral theoretically and mathematically.

Before reading any further, it is important to first understand this: in mathematics, any reasonably well-behaved periodic function of time may be “reconstructed” exactly from the summation of an infinite series of harmonic sine and cosine waves.  This expansion is referred to as a “Fourier Series.”  For use with arbitrary electronic time-domain signals of period $T_0$, it may be expressed as:

$f(t) = a_0 + \displaystyle\sum_{n=1}^{\infty}\left[a_n\cos(nw_ot) + b_n\sin(nw_ot)\right]$

over the range:

$t_0 \le t \le t_0 + T_0$

where:

$a_0$ is the DC component (the average value of the signal over one period)

$a_n$ represents the magnitude of the nth harmonic of cosine wave components

$b_n$ represents the magnitude of the nth harmonic of sine wave components

$w_o$ is the fundamental angular frequency, $w_o = \frac{2\pi}{T_0}$

$t$ is the variable that represents instances in time

$n$ is the variable that represents the specific harmonic, and is always an integer

This monumental discovery was first announced on December 21, 1807 by Baron Jean-Baptiste Joseph Fourier.

In order to go from the Fourier Series to the Fourier Transform, it is necessary to express the previous Fourier Series as a sum of everlasting exponential functions.  Using an orthogonal basis set of signals described by $e^{jnw_ot}$, each with magnitude $D_n$, we now write the Fourier Series as:

$f(t) = \displaystyle\sum_{n=- \infty}^{\infty}D_ne^{jnw_ot}$

where $j$ is $\sqrt{-1}$.
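To make the exponential form concrete, the sketch below numerically approximates the coefficients $D_n$ for a square wave and rebuilds the signal from a truncated sum.  The test signal and the harmonic count are arbitrary choices for illustration, not values from this article:

```python
import numpy as np

# Numerically approximate the exponential Fourier series of a square wave.
# The test signal and harmonic count N are illustrative assumptions.
T0 = 2 * np.pi
w0 = 2 * np.pi / T0                       # fundamental frequency
t = np.linspace(0, T0, 4000, endpoint=False)
dt = t[1] - t[0]
f = np.sign(np.sin(w0 * t))               # square wave over one period

N = 25                                    # harmonics kept on each side of n = 0
n = np.arange(-N, N + 1)
# D_n = (1/T0) * integral over one period of f(t) e^{-j n w0 t} dt
Dn = np.array([(f * np.exp(-1j * k * w0 * t)).sum() * dt / T0 for k in n])

# Rebuild the signal from the truncated series  sum_n D_n e^{j n w0 t}
f_rec = sum(D * np.exp(1j * k * w0 * t) for k, D in zip(n, Dn)).real
print(abs(Dn[N + 1]))                     # |D_1|, close to 2/pi for this square wave
```

Away from the discontinuities the partial sum tracks the square wave closely; near the jumps a small ripple (the Gibbs phenomenon) persists no matter how many harmonics are kept.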

What is the Fourier Integral?

The Fourier Integral, also referred to as Fourier Transform for electronic signals, is a mathematical method of turning any arbitrary function of time into a corresponding function of frequency.  A signal, when transformed into a “function of frequency”, essentially becomes a function that expresses the relative magnitudes of each harmonic of a Fourier Series that would be summed to recreate the original time-domain signal. To see this, observe the following figures:

In order to rebuild a square wave with sines and cosines only, it is necessary to determine the magnitudes of each harmonic used in the Fourier Series, or rather, the Fourier Integral (for continuous time-domain signals).  The relative magnitudes of these needed harmonics can be displayed graphically as a function of frequency (widely known as a signal’s frequency spectrum):

Though the recreation of a signal using an infinite series of sines and cosines is impossible to achieve in the lab, one may get very close – close enough that even the most advanced lab equipment wouldn’t be able to measure the error within its tolerance specifications.  This allows engineers to use Fourier Analysis to work with time-domain signals, such as radio signals, television signals, satellite signals and just about any signal you can think of.  By viewing a signal according to what frequency components are contained within it, electrical engineers may concern themselves with magnitude changes in frequency only, and no longer need to worry about the signal’s magnitude changes through time.  Not only is this a very practical concept when working in the lab, it also greatly simplifies the mathematics behind signal conditioning in general.  In fact, the entire communications industry owes its success to the Fourier Transform, for not only antenna design but a plethora of other applications.

The math behind the Fourier Transform

The derivations that follow have been summarized from Chapter 4 of the textbook “Signal Processing and Linear Systems” by B.P. Lathi, a fine book for students of Communication Systems.

We begin by considering some arbitrary, aperiodic time-domain signal.  An example of this kind of wave would be the output of a microphone after a person speaks a few words into it.  We cannot directly describe this aperiodic signal as a summation of exponential functions, but we can describe a periodic signal composed of the same voice signal repeating every $T_0$ seconds.  For an accurate description, it is important that $T_0$ be long enough that the repeating copies of the arbitrary signal do not overlap.  If we then let $T_0$ approach $\infty$, this “periodic” signal is simply the voice signal (or any arbitrary function of time) we wanted to describe initially.  Mathematically, we express:

$\displaystyle\lim_{T_0\to\infty}f_{T_0}(t) = f(t)$

where $f(t)$ is the time-domain function we wish to apply the Fourier Transform on (here, the arbitrary “voice” signal).  For the above equation to be true, $f_{T_0}$ is equal to:

$f_{T_0}(t) = \displaystyle\sum_{n=- \infty}^{\infty}D_ne^{jnw_ot}$

where:

$D_n = \frac{1}{T_0} \int^\frac{T_0}{2}_\frac{-T_0}{2} f_{T_0}(t)e^{-jnw_ot}dt$

$w_o = \frac{2\pi}{T_o}$

It is important to note here that in practice, the shape (aka “envelope”) of a signal’s frequency spectrum is what is of main interest, and the magnitude of the components within the spectrum comes secondary.  This is because amplifiers and other signal-conditioning circuits may be built to alter the magnitude in any way one wishes, and will not affect signal frequencies (so long as the circuits are LTI systems).  Analyzing the envelope of a signal’s Fourier Transform allows one to use intuitive and mathematically-simplified approaches to signal-processing in general, which we shall see later.  For this reason (and also as $T_0$ approaches $\infty$) let:

$F(w) = \int^{\infty}_{-\infty} f(t)e^{-jwt}dt$

Notice that $F(w)$ is simply $D_n$ without the constant multiplier $\frac{1}{T_0}$, such that:

$D_n = \frac{1}{T_0}F(nw_o)$

which implies that $f_{T_0}(t)$ may be written:

$f_{T_0}(t) = \displaystyle\sum_{n=- \infty}^{\infty}\frac{F(nw_o)}{T_0}e^{jnw_ot}$

Observation of this fact reveals insight: the shorter the period $T_0$, the larger the magnitude of the coefficients.  On the other hand, as $T_0 \rightarrow \infty$, the magnitude of every frequency component approaches $0$ – which is why engineers choose to analyze spectrum envelopes.  So, instead of visualizing absolute frequency magnitudes, consider that the frequency spectrum simply expresses the magnitude-density per unit of bandwidth, aka per Hz.  And since:

$T_0 = \frac{2\pi}{w_o}$

then:

$w_o = \frac{2\pi}{T_0}$

and the spacing between adjacent harmonics is:

$\Delta w_o = w_o = \frac{2\pi}{T_0}$

so:

$f_{T_0}(t) = \displaystyle\sum_{n=- \infty}^{\infty}\frac{F(n\Delta w_o)\Delta w_o}{2\pi}e^{jn\Delta w_ot}$

In the limit as $T_0 \rightarrow \infty$ we see:

$f(t) = \displaystyle\lim_{T_0\to\infty}f_{T_0}(t) = \frac{1}{2\pi} \int^{\infty}_{-\infty}F(w)e^{jwt}dw$

which is referred to as the Fourier Integral.  $F(w)$ is referred to as the Fourier Transform of the original aperiodic function $f(t)$, and we express this concept as:

$f(t) \Leftrightarrow F(w)$

A Fourier Transform example

This example is from the same textbook as the previous derivation, and can be found on page 239.

Find the Fourier Transform of: $e^{-at}u(t)$ where $a$ is an arbitrary constant.

To do this, we apply the Fourier Integral to the function $e^{-at}u(t)$ as follows:

$F(w) = \int^{\infty}_{-\infty}e^{-at}u(t)e^{-jwt}dt$

Because of the $u(t)$ factor, we only integrate from $0$ to $\infty$.  Evaluating:

$F(w) = \int^{\infty}_{0} e^{-(a+jw)t}dt = \frac{-1}{a+jw}e^{-(a+jw)t}\mid^{\infty}_{0}$

Also, we know that $|e^{-jwt}| = 1$ for all real $w$ and $t$.  So, for $a > 0$, as $t \rightarrow \infty$:

$\displaystyle\lim_{t\to\infty}e^{-(a+jw)t} = \lim_{t\to\infty}e^{-at}e^{-jwt} = 0$

So:

$F(w) = \frac{-1}{a+jw}e^{-(a+jw)t}\mid^{\infty}_{0} = 0 - \frac{-1}{a+jw}e^{-(a+jw)0} = \frac{1}{a+jw}$

for: $a> 0$
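As a sanity check, the closed-form result can be compared against a brute-force numerical evaluation of the Fourier Integral.  The values of $a$ and $w$ below are arbitrary choices for illustration:

```python
import numpy as np

# Numerical check of the worked example: for f(t) = e^{-at}u(t) with a > 0,
# F(w) should equal 1/(a + jw).  The values of a and w are arbitrary.
a, w = 2.0, 3.0
t = np.linspace(0, 50, 200_000)           # u(t) removes the negative-t half
dt = t[1] - t[0]
F_numeric = (np.exp(-a * t) * np.exp(-1j * w * t)).sum() * dt
F_closed = 1 / (a + 1j * w)
print(abs(F_numeric - F_closed))          # small numerical-integration error
```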

Useful Fourier Transform Properties

The relationship between $f(t)$ and $F(w)$ exhibits a beautiful symmetry that helps one to develop an intuitive approach to signal analysis.  Among all the concepts within electrical engineering, the properties relating a time-domain function and its Fourier Transform are among the most important to understand.  Observe the following properties, which apply for all $f(t) \Leftrightarrow F(w)$:

1.) Inverse Fourier Transform: Gives an equation to recover the time-domain function $f(t)$ from $F(w)$.

$f(t) = \frac{1}{2\pi} \int^{\infty}_{-\infty}F(w)e^{jwt}dw$

2.) Fourier Transform: Gives an equation to solve for the frequency-domain function $F(w)$ from $f(t)$.

$F(w) = \int^{\infty}_{-\infty}f(t)e^{-jwt}dt$

3.) Symmetry Property: For a given pair of a time-domain signal and its Fourier Transform, the time-domain envelope generally differs in shape from the frequency-domain envelope.  However, swapping the two shapes between domains (time and frequency) results in the same pair of envelopes, apart from scaling coefficients.  For example, a square pulse through time has a frequency spectrum described by a sinc function, and a sinc function through time results in a frequency spectrum described by a square pulse.

$F(t) \Leftrightarrow 2\pi f(-w)$

4.) Scaling Property: Time-scaling a time-domain signal (by a constant $a$) results in a magnitude-and-frequency-scaling of the signal’s corresponding frequency spectrum.  This also signifies that the longer a signal exists through time, the narrower the bandwidth (the collection of frequency components needed to rebuild the signal) of its frequency spectrum.

$f(at) \Leftrightarrow \frac{1}{|a|}F(\frac{w}{a})$

5.) Time-Shifting Property: Time-shifting (delaying or advancing) a time-domain signal results in a phase shift in each of the everlasting frequency components needed to rebuild it.  The frequency spectrum is otherwise unchanged – only the phase of each component is shifted.

$f(t-t_0) \Leftrightarrow F(w)e^{-jwt_0}$

6.) Frequency-Shifting Property: Multiplying a time-domain signal by a sinusoidal signal of some frequency $w_0$, a method which begets amplitude and frequency modulation (AM/FM), leaves the frequency spectrum unchanged except that each individual frequency component is shifted by $w_0$.

$f(t)e^{jw_0t} \Leftrightarrow F(w-w_0)$
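Properties like these are easy to verify numerically.  The sketch below checks the time-shifting property on a Gaussian pulse; the pulse shape, delay $t_0$, and test frequency $w$ are all arbitrary illustrative choices:

```python
import numpy as np

# Numerically verify the time-shifting property with a Gaussian pulse.
# The pulse, the delay t0, and the test frequency w are arbitrary.
t = np.linspace(-20, 20, 100_000)
dt = t[1] - t[0]
t0, w = 1.5, 2.0

def fourier(g, w):
    # Brute-force Fourier integral  F(w) = int g(t) e^{-jwt} dt
    return (g * np.exp(-1j * w * t)).sum() * dt

lhs = fourier(np.exp(-(t - t0) ** 2), w)                  # transform of delayed signal
rhs = fourier(np.exp(-t ** 2), w) * np.exp(-1j * w * t0)  # F(w) e^{-j w t0}
print(abs(lhs - rhs))   # the two agree to numerical precision
```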

Lastly, these tables (table 1, table 2) can greatly simplify Fourier analysis when used in signal processing.

How to Determine if a Vector Set is Linearly Independent

Pre-(r)amble

The odds favor that by the time someone has reached this article, myself included, they have spent at least the briefest of moments (frustratedly?) questioning the practical applications of linear combination, linear independence and linear math. In a sentence, these concepts allow us to mathematically understand and represent multidimensional coordinate systems. If you’re looking for a quick explanation for a homework problem, feel free to skim through the bolded topics for help in specific areas of concern. Otherwise, here’s something to think about.  Imagine maneuvering in three-dimensional space. An instantaneous position can be described using a three-dimensional coordinate system. When following a consistent pattern of movement, an instantaneous position can be described with a fourth dimension, time. Suppose you have just landed the snowball throw of a lifetime and hit a target moving across your view plane, away from you, and uphill. You have properly estimated the intersection of two moving objects in four dimensions. This is not always an easy task to execute. Now make this throw using a fifth dimension. Most people can’t comprehend the existence of a fifth dimension, let alone understand how to maneuver in it. With linear math we can attempt to understand and represent the relationships between these dimensions.

Important Definitions

Linear Independence
A set of linearly independent vectors {$V_1, V_2, . . . , V_k$} has ONLY the zero (trivial) solution <$x_1, x_2, . . . , x_k$> $=$ <$0, 0, . . . , 0$>  for the equation $x_1 * V_1 + x_2 * V _2 + . . . + x_k * V_k = 0$

Linear Dependence
Alternatively, if $x_1, x_2, . . . ,$ or $x_k \neq 0$, the set of vectors is said to be linearly dependent.

Determining Linear Independence

By row reducing a coefficient matrix created from our vectors {$V_1, V_2, . . . , V_k$}, we can determine our <$x_1, x_2, . . . , x_k$>. Then to classify a set of vectors as linearly independent or dependent, we compare to the definitions above.

Example
Determine if the following set of vectors are linearly independent:

$\begin{bmatrix} 1\\1\\1\end{bmatrix}$ , $\begin{bmatrix} 2\\2\\2\end{bmatrix}$ , $\begin{bmatrix} 0\\0\\5\end{bmatrix}$ , $\begin{bmatrix} 1\\2\\3\end{bmatrix}$

Setting up a Corresponding System of Equations and Finding Its RREF Matrix

We need to understand that our vectors can be represented with a system of equations all equaling zero to satisfy the equation $x_1 * V_1 + x_2 * V _2 + . . . + x_k * V_k = 0$ from our definition of linear independence. These equations will look something like this:

$1*x_1 + 2*x_2 + 0*x_3 + 1*x_4 = 0$
$1*x_1 + 2*x_2 + 0*x_3 + 2*x_4 = 0$
$1*x_1 + 2*x_2 + 5*x_3 + 3*x_4 = 0$

Notice that I have simply taken the coefficients from the given vectors and multiplied them by four variables (the number of variables will equal the number of vectors in the given set). They have been set equal to zero to allow us to test for linear independence. From here, create a coefficient matrix and perform row operations to reduce the matrix to reduced row echelon form (rref).

rref $\left[ \begin{array}{cccc|c} 1&2&0&1&0\\1&2&0&2&0\\1&2&5&3&0\end{array}\right]$ = $\left[ \begin{array}{cccc|c} 1&2&0&0&0\\0&0&1&0&0\\0&0&0&1&0\end{array}\right]$
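The row reduction above can be reproduced with sympy (assuming it is available); `Matrix.rref()` returns both the reduced matrix and the pivot columns:

```python
from sympy import Matrix

# Reproduce the row reduction above: rows are the three equations, columns the
# coefficients of x1..x4 taken from the four given vectors.
A = Matrix([[1, 2, 0, 1],
            [1, 2, 0, 2],
            [1, 2, 5, 3]])

R, pivot_cols = A.rref()
print(R)            # the rref matrix shown above
print(pivot_cols)   # → (0, 2, 3): column 1 (x2) has no pivot, so x2 is free
```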

Finding the Solution of the RREF Matrix

Finding the solution of the rref matrix may be the more difficult step in this process. However, it may become trivial following a few simple steps.

1) Identify the free variables in the matrix. Free variables correspond to the columns that contain no pivot. Pivot variables correspond to the first non-zero entry in each row, and since we have taken the rref of our matrix, all of the pivot entries are 1. By locating all free variables (or by eliminating all pivot variables) we find that $x_2$ is our only free variable.

2) Write free variables into your solution. The variable $x_2$ can be written into our solution vector as itself but we will represent it with another variable name (i.e. $t$) so that our solution is in parametric form. Multiple free variables are represented with multiple variables names (i.e. $s, t$). After this step your solution vector should look like this: <$x_1, x_2, x_3 , x_4$> $=$ <$? , t, ? , ?$>.

3) Solve for pivot variables. The pivot variables should either be constant (i.e. 0, 6) or a function of your free variables (i.e. $3 t-4$ ). From the rref matrix we can see that $x_1 = -2 t$, $x_3 = 0$, and $x_4 = 0$.

4) Complete the solution vector. Placing the values we just calculated into our solution vector: <$x_1, x_2, x_3 , x_4$> $=$ <$-2 t , t, 0 , 0$>

Finally,

Since not all of our $x_i = 0$, the given set of vectors is said to be linearly dependent. The linear dependence relation is written using our solution vector multiplied by the respective vectors from the given set: $-2 t * V_1 + t * V_2 + 0 * V_3 + 0 * V_4 = 0$. We can also conclude that any vectors with non-zero coefficients are linear combinations of each other. Therefore, $V_2$ is a linear combination of $V_1$ (here, $V_2 = 2V_1$).
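The dependence relation can be checked directly; setting the free parameter $t = 1$, the combination $-2V_1 + V_2$ should vanish:

```python
import numpy as np

# Check the dependence relation with the free variable set to t = 1:
# -2*V1 + 1*V2 + 0*V3 + 0*V4 should give the zero vector.
V1 = np.array([1, 1, 1])
V2 = np.array([2, 2, 2])
V3 = np.array([0, 0, 5])
V4 = np.array([1, 2, 3])

combo = -2 * V1 + 1 * V2 + 0 * V3 + 0 * V4
print(combo)   # → [0 0 0]
```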

DC Biasing & AC Performance Analysis of BJT and FET Differential Amplifier Sub-circuits with Active Loads

Any op-amp worth its salt has a differential amplifier at its front end, and you’re nobody if you can’t design one yourself.  So, this article presents a general method for biasing and analyzing the performance characteristics of single-stage BJT and MOSFET differential amplifier circuits.  The following images show the general schematic for both kinds of differential amplifiers, often referred to as a differential input stage when used in designing op-amps.  Notice that these types of differential amplifiers use active loads to achieve wide swing and high gain.

Due to design processes and the nature of the devices involved, BJT circuits are “simpler” to analyze than their FET counterparts, whose circuits require a few extra steps when calculating performance parameters.  For this reason, this tutorial will begin by biasing and analyzing a BJT differential amplifier circuit, and then will move on to do the same for a FET differential amplifier.  But it should be noted that the procedures to analyze these types of differential amplifiers are virtually the same.

BJT Differential Amplifier

The first thing needed is to configure the DC biasing.  To accomplish this, a practical implementation of $I_{BIAS}$ must be developed.  A very popular method is to use a current mirror.   A simple current mirror is shown below:

BJT Current Mirror

It is easy to understand how a current mirror works.  Observe the equation governing the amount of collector current in a BJT, denoted $I_C$:

$I_C = I_S(e^{\frac{V_{BE}}{nV_T}}-1)(1+\frac{V_{CB}}{V_A})$

where:

• $I_C$ is the collector current
• $I_S$ is the scale current
• $V_{BE}$ is the DC voltage across the base-emitter junction
• $V_T$ is the thermal voltage, approximately 25 mV at room temperature
• $n$ is the quality factor, typically between 1 and 2, and is frequently assumed to be 1
• $V_{CB}$ is the voltage across the collector-base junction
• $V_A$ is the Early voltage

Note: [This equation may look intimidating at first, but what is important to understand is that the point of designing “by hand” is to get close. One should aim simply to get a good estimation of such parameters as necessary bias current, gain, input impedance, etc.  In this way, computer simulations can analyze the hand-designed circuit in much closer detail, which greatly aids in the process of designing a real-life differential amplifier.  Knowing this, the equations to be used in this tutorial will be rough estimates, but are still invaluable when it comes to designing these types of circuits.]

By assuming a very large Early voltage (and taking $n = 1$), one can estimate that the collector current through any BJT can be described by:

$I_C \approx I_S e^{V_{BE}/V_T}$

What can be noticed here is that the only controllable variable in that equation is $V_{BE}$.  All the other terms in the equation are constants that depend on either the environment or the actual physical size of the device.  This means that for any two same-sized transistors, the currents through their collectors will be the same as long as the voltage across their base-emitter junctions is the same. By tying their bases and emitters together, we can mirror the currents between them!  In order to implement a successful current mirror, one transistor (here, $Q_5$) must have a current induced in it to mirror it to the differential amplifier’s current source (here, $Q_6$).  After adding this current mirror to our BJT differential amplifier, the resulting schematic is:

In order to properly bias this circuit, it is necessary to include $R_{BIAS}$.  Two things are accomplished by including $R_{BIAS}$ in our circuit.  One of them is that we can induce the current in $Q_5$, and thus, the current in $Q_6$.  The other important thing this resistor does is drop a majority of the available voltage across itself, so that $Q_5$ doesn’t have the entire voltage difference between the supplies across it!  To bias this circuit, the first thing one must do is determine what the desired magnitude of the current source will be.  This parameter depends on how you want the circuit to operate, and is usually a known value.  In this tutorial, we will assume we want an $I_{BIAS}$ of 1mA.  In order to determine the necessary size of $R_{BIAS}$, we analyze the loop that consists of:

$VCC \rightarrow I_{BIAS} \cdot R_{BIAS} \rightarrow V_{BE5} \rightarrow VEE$

Kirchhoff’s Voltage Law (KVL) around this loop reveals:

$0 = -VCC + I_{BIAS} \cdot R_{BIAS} + V_{BE5} + VEE$

These kinds of circuits are typically supplied rails of $\pm 10$ to $15 V$.  So, this tutorial will assume:

$VCC = - VEE = 10 V$.

For a given technology, all of the BJT transistors are designed to have the same turn-on voltage. This tutorial will assume .7 V for each BJT.  That being the case, and rearranging the above equation, results in:

$R_{BIAS} = \frac{VCC - VEE - V_{BE5}}{I_{BIAS}} = \frac{10V - (-10V) - .7V}{1 mA} = 19.3 k\Omega$

By introducing a resistor $R_{BIAS}$ of $19.3k\Omega$ to the above schematic, the bias current is now established at 1 mA.  Due to symmetry, the currents through transistors $Q_1$ and $Q_2$ are each half of the bias current, described by:

$I_{C1} = I_{C2} = \frac{I_{BIAS}}{2} =\frac{1 mA}{2} = .5 mA$
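The bias arithmetic above is straightforward to restate in a few lines of Python, using only the values from this tutorial:

```python
# DC bias arithmetic for the BJT current mirror, using this tutorial's values.
VCC, VEE = 10.0, -10.0    # supply rails, volts
V_BE5 = 0.7               # assumed BJT turn-on voltage, volts
I_BIAS = 1e-3             # chosen tail current, amps

R_BIAS = (VCC - VEE - V_BE5) / I_BIAS   # KVL around the bias loop
I_C = I_BIAS / 2                        # Q1/Q2 collector current, by symmetry

print(round(R_BIAS))   # → 19300  (ohms, i.e. 19.3 kΩ)
print(I_C)             # → 0.0005 (amps, i.e. 0.5 mA)
```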

Now that we know the collector currents through $Q_1$ and $Q_2$, characterizing the performance of this differential amplifier is a breeze.  Since the parameters we are interested in (gain, CMRR, etc) are small-signal parameters, the small-signal model of this circuit is needed.  To obtain this, a nice trick is to “cut the amplifier in half” (lengthwise, such that you only analyze the output side of the amplifier) to obtain:

Note: [even though the output signal is single-ended here, the output is still a result of the entire input signal, and not just half of it.  This is because the small-signal changes in the currents flowing through $Q_{2,4}$ are impeded from traveling down the branches controlled by current sources $Q_{5,6}$.  Also note that the connections between $r_{\pi}$ and the voltage-controlled current source (VCCS) indicate that the voltage that controls the VCCS is the voltage across $r_{\pi}$.  This is because the resistance in the emitter of these transistors has been omitted, due to its typically small value (10 to 25 $\Omega$).  In addition to this, $Q_6$ is assumed to be a small-signal (AC) open circuit.  The frequency response has also been omitted, and the amplifier is assumed to be unilateral.]

Differential Mode Gain

It is simple to see that $v_{out}$ (the small-signal output voltage) is equal to the current through the parallel combination of the resistors $r_{o2}$ and $r_{o4}$ multiplied by the resistance of that parallel combination.  Since we know the value of the current through this combination is equal to the input voltage multiplied by $g_m$ (the transconductance parameter):

$v_{out} =- g_mv_{in} \cdot (r_{o2} \| r_{o4})$

The transconductance parameter is a ratio of output current to input voltage. It is described mathematically as:

$g_m = \frac{\partial i_c}{\partial v_{be}} = \frac{\partial i_{out}}{\partial v_{in}}$

and can be solved for thusly:

$\frac{\partial I_C}{\partial V_{BE}} = \frac{\partial (I_Se^{\frac{V_{BE}}{V_T}})}{\partial V_{BE}} = \frac{I_Se^{\frac{V_{BE}}{V_T}}}{V_T} = \frac {I_C}{V_T}$

In this example, $I_C$ is .5 mA and $V_T$ is 25 mV.  With these values, we compute:

$g_m = \frac{I_C}{V_T} = \frac{.5 mA}{25 mV} = 20 \frac{mA}{V}$

Now that the transconductance parameter is known, the only other values needed to compute the differential mode gain are $r_{o2}$ and $r_{o4}$.  $Q_2$ is an npn transistor, while $Q_4$ is a pnp transistor, so they will not have the same small-signal resistance, but the procedures to find these two values are nearly identical.  The following equation describes the small-signal output resistance of any BJT:

$r_{o_{n,p}} = \frac{|V_{A_{n,p}}| + V_{CE}}{I_C} \approx \frac{|V_{A_{n,p}}|}{I_C}$

The parameter $V_{A_{n,p}}$ is typically given, and in this tutorial:

$V_{A_n} = 130 V$

$V_{A_p} = 50 V$

Which would result in:

$r_{o2} = \frac{V_{A_n}}{I_C} = \frac{130 V}{.5 mA} = 260 k\Omega$ and

$r_{o4} = \frac{V_{A_p}}{I_C} = \frac{50 V}{.5 mA} = 100 k\Omega$

Now that the small-signal resistances are known, along with the transconductance parameter, the differential mode gain ($A_{v,DM}$) may be calculated:

$A_{v,DM} =- g_m \cdot (r_{o2} \| r_{o4}) =- 20 \frac{mA}{V} \cdot \frac{260 k\Omega \cdot 100 k\Omega}{(260+100) k\Omega} = -1444.4 \frac{v}{v}$

or, in decibels (dB):

$A_{v,DM(dB)} = 20log(|A_{v,DM}|) = 63.2 dB$
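The differential-gain arithmetic can likewise be scripted; the sketch below simply restates this section's numbers:

```python
import math

# Small-signal gain arithmetic from this section, restated in Python.
I_C = 0.5e-3               # amps
V_T = 25e-3                # volts
V_An, V_Ap = 130.0, 50.0   # Early voltages, volts

g_m = I_C / V_T                              # 20 mA/V
r_o2 = V_An / I_C                            # 260 kΩ (npn)
r_o4 = V_Ap / I_C                            # 100 kΩ (pnp)
A_DM = -g_m * (r_o2 * r_o4) / (r_o2 + r_o4)  # -g_m * (r_o2 || r_o4)
A_DM_dB = 20 * math.log10(abs(A_DM))

print(round(A_DM, 1))     # → -1444.4
print(round(A_DM_dB, 1))  # → 63.2
```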

Differential Input Impedance

The differential input impedance of a differential amplifier is the impedance “seen” by any differential signal.  A “differential signal” is the portion of the input that is not shared by $V_{in-}$ and $V_{in+}$.  For instance, if:

$V_{in-} = (2 + sin(2 \pi ft)) V$ and

$V_{in+} = (2 + cos(2 \pi ft)) V$

then the common mode signal and differential mode signals are:

$V_{in,CM} = 2V$ and

$V_{in,DM} = cos(2 \pi ft) - sin(2 \pi ft)$

To find the differential input impedance, begin by following the loop consisting of:

$V_{in-} \rightarrow V_{BE1} \rightarrow -V_{BE2} \rightarrow V_{in+}$, as illustrated below:

We see that, in the differential signal mode, the path to ground only consists of $r_{\pi}$ of each input transistor. Since this is the case, the differential mode input impedance of any BJT diff-amp may be expressed as (omitting emitter resistance and assuming $Q_{1,2}$ matched):

$R_{in,DM} = r_{\pi 1}+r_{\pi 2} = 2r_{\pi}$

where: $r_{\pi} = \frac{\beta}{g_m}$

$\beta = \frac{i_c}{i_b}$ (current gain factor)

A typical value for $\beta$ is 100, and knowing $g_m$ allows one to compute:

$R_{in,DM} = 2 \cdot \frac{\beta}{g_m} = 2 \cdot \frac{100}{20 \frac{mA}{V}} = 10 k \Omega$

So, for the BJT differential amplifier in this tutorial, the differential mode input impedance is:

$R_{in,DM} = 10 k \Omega$ (what impact will this have?)

Common Mode Gain

The CM gain ($A_{v,CM}$) is the “gain” that common mode signals “see,” or rather, the attenuation applied to signals present on both differential inputs. A good op amp attempts to eliminate all common mode signals, but this is obviously not possible in the real world.  One may compute the common mode gain by “cutting the amplifier in half,” observing one of the loops in the following diagram.  The path differs from that of differential signals because, for common mode signals, the two signal sources don’t “see” each other.  Notice:

We choose a loop and draw the small-signal model to obtain:

Similar to the output voltage of the differential mode small signal model, we can see that $V_o$ is the voltage across $r_{o4}$.  We also know the current running through this resistance, and may equate the output voltage to:

$V_o = - g_mv_{\pi} \cdot r_{o4}$

This time, though, $v_{in,CM}$ isn’t distributed entirely over the resistances at the base.  Instead, only a fraction of the common mode input signal appears across the base-emitter junction.  Referring back to the small signal model, we see that the loop composed of:

$V_{in} \rightarrow v_{\pi2} \rightarrow (i_b + g_mv_{\pi 2}) \cdot 2 \cdot r_{o6}$

reveals that:

$V_{in} = v_{\pi2} + 2 \cdot r_{o6} \cdot (i_b + g_mv_{\pi2})$

but $i_b$ is negligible compared to the current supplied by the collector, so we say:

$V_{in} = v_{\pi2} + 2 \cdot r_{o6} \cdot g_mv_{\pi2} = v_{\pi2} \cdot (1 + 2r_{o6}g_m)$

which we use to solve for $v_{\pi2}$:

$v_{\pi2} =\frac{ v_{in}}{1 + 2 \cdot r_{o6} \cdot g_m}$

Which we then plug back into the equation for $V_o$:

$V_o = - g_mv_{\pi} \cdot r_{o4} = - \frac{r_{o4}g_m}{1+2 \cdot r_{o6} \cdot g_m} \cdot V_{in}$

From this we can solve directly for the common mode gain:

$A_{v,CM} = \frac{V_o}{V_{in}} = -\frac{r_{o4}g_m}{1 + 2r_{o6}g_m}$

Here, the common mode gain is:

$A_{v,CM} = -.3845 = -8.3 dB$

Common Mode Input Impedance

The common-mode input impedance is the impedance that common-mode input signals “see.” One can analyze the common mode input impedance ($R_{in,CM}$) by, again, “cutting the differential amplifier in half” and analyzing one side of the resulting schematic, assuming a common mode signal.  This can be seen in figure 6, above.

Choosing one of these paths, we construct the corresponding small-signal model for common mode signals (assuming $r_{o2} = \infty$), which is shown in figure 7.  From this figure, deriving $R_{in,CM}$ is simple.  Notice the currents flowing in the loop that consists of:

$V_{in} \rightarrow i_b \cdot r_{\pi2} \rightarrow (i_b + g_mv_{\pi2}) \cdot 2 \cdot r_{o6}$

from this loop, one may compute:

$0 = -V_{in} + i_{b} \cdot r_{\pi 2} + (i_{b} + g_mv_{\pi 2}) \cdot 2\cdot r_{o6}$

which is used to find an equation for $V_{in}$

$V_{in} = i_{b} \cdot r_{\pi 2} + (i_{b} + g_mv_{\pi 2}) \cdot 2\cdot r_{o6}$

and since:

$g_mv_{\pi 2} = \beta \cdot i_{b}$

and $i_b = i_{in}$

So:

$V_{in} = i_b \cdot (r_{\pi 2} + 2 \cdot r_{o6} \cdot ( \beta + 1))$

which is the same as:

$V_{in} = i_{in} \cdot (r_{\pi 2} + 2 \cdot r_{o6} \cdot ( \beta + 1))$

which can be rearranged for:

$R_{in,CM} = \frac{V_{in}}{i_{in}} = r_{\pi 2} + 2 \cdot r_{o6} \cdot (\beta + 1)$

where: $r_{\pi} = \frac{\beta}{g_m}$

Which, in this tutorial, results in:

$R_{in,CM} = r_{\pi 2} + 2 \cdot r_{o6} \cdot (\beta + 1) = \frac{100}{20 \frac{mA}{V}} + 2 \cdot \frac{130 V}{1 mA} \cdot (100 + 1) = 5 k\Omega + 26.26 M\Omega \approx 26.27 M\Omega$

Common Mode Rejection Ratio (CMRR)

The common mode rejection ratio (CMRR) is simply a ratio of the differential mode gain to the common mode gain, and is defined as:

$CMRR = \frac{A_{v,DM}}{A_{v,CM}}$

Here, the CMRR is:

$CMRR = \frac{-1444.4 v/v}{-.384 v/v} = 3761.46 = 71.5 dB$
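Putting the common-mode and differential-mode results together, the CMRR computation can be restated as:

```python
import math

# Common-mode gain and CMRR arithmetic, restated with this tutorial's values.
g_m = 20e-3       # A/V (from the differential-gain section)
r_o4 = 100e3      # ohms
r_o6 = 130e3      # ohms: V_An / I_BIAS = 130 V / 1 mA
A_DM = -1444.4    # differential-mode gain computed earlier

A_CM = -(r_o4 * g_m) / (1 + 2 * r_o6 * g_m)
CMRR = A_DM / A_CM
CMRR_dB = 20 * math.log10(abs(CMRR))

print(round(A_CM, 4))     # → -0.3845
print(round(CMRR_dB, 1))  # → 71.5
```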

Analysis of FET Differential Amplifiers

As stated before, the analysis of these performance parameters is virtually the same for FET diff amps as for BJT diff amps.  There are, however, a few key differences.  For one, all BJT transistors are typically built to be the same size on a given IC device.  But for an IC device that uses FETs, this is not the case.  Each FET has an adjustable length and width that affects how much current it will pass for a given voltage drop across the device.  In fact, observe the equation for the drain current in a FET:

$I_D = \frac{k_{n,p}}{2} \frac{W}{L} (|V_{GS}| - |V_{th_{n,p}}|)^2$

From this, the gate-source voltage is:

$|V_{GS}| = \sqrt{\frac{2I_DL}{kW}} + |V_{th_{n,p}}|$

where:

• $k$ is the process conductivity parameter, and is equal to:

$k = \mu_{n,p} C_{ox}$ , which is the carrier mobility multiplied by the oxide capacitance per unit area

• $W, L$ are the width and length of the device, respectively
• $V_{GS}$ is the gate-to-source voltage
• $V_{th}$ is the threshold voltage of the FET

Analyzing BJTs in a circuit is simpler because all base-emitter voltages are assumed to be equal.  This is not the case for MOSFETs, and one must analyze the above equation (or others) to find device voltages.  One constant that helps is the threshold voltage – the minimum gate-to-source voltage that allows any conduction whatsoever.  The threshold voltage is a result of the FET fabrication process, and is typically provided on datasheets for each FET type (n-channel and p-channel).

For a differential amplifier composed of FETs to work, it is imperative that all the FETs be in saturation mode.  For a FET to be in saturation implies:

$|V_{DS}| \ge |V_{GS}| - |V_{th}|$

So this must be checked when analyzing these types of circuits.

Another important difference is the derivation of the transconductance parameter, $g_m$.  When analyzed for a BJT, it was defined as the ratio of the change in collector current to the change in the base-emitter voltage.  For a FET there is a similar procedure, as the transconductance is defined as the ratio of the change in drain current to the change in gate-source voltage.  Mathematically, the transconductance parameter is:

$g_m = \frac{\partial{i_D}}{\partial{v_{GS}}} =\frac{\partial( \frac{k_{n,p}}{2} \frac{W}{L} (|V_{GS}| - |V_{th_{n,p}}|)^2)}{\partial{v_{GS}}} = \sqrt{2I_Dk\frac{W}{L}}$

The last notable difference is the computation for a FET’s small-signal resistance.  The equation describing $r_{o}$ is:

$r_o = \frac{1}{\lambda I_D}$

where $\lambda$ is the channel-length modulation parameter.
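As a rough illustration of these two formulas, the sketch below plugs in hypothetical device values (the tutorial itself gives no FET numbers, so every constant here is an assumption):

```python
import math

# Illustrative FET small-signal numbers.  All values here are hypothetical
# assumptions; they only exercise the g_m and r_o formulas above.
k = 200e-6       # process conductivity parameter mu*Cox, A/V^2 (assumed)
W_over_L = 10.0  # device aspect ratio W/L (assumed)
I_D = 0.5e-3     # drain bias current, amps (assumed)
lam = 0.02       # channel-length modulation parameter, 1/V (assumed)

g_m = math.sqrt(2 * I_D * k * W_over_L)   # transconductance, A/V
r_o = 1 / (lam * I_D)                     # small-signal output resistance, ohms

print(round(g_m * 1e3, 2))  # g_m in mA/V → 1.41
print(round(r_o / 1e3, 1))  # r_o in kΩ  → 100.0
```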

From this little discussion, you should be able to apply the principles used to analyze the BJT differential amplifier to the analysis of a FET-based differential amplifier.  But, of course, if you would like to see a FET differential amplifier explained in more detail, do not hesitate to ask a question!

Credit & Acknowledgment

This post was created in March 2011 by Kansas State University Electrical Engineering student Safa Khamis.  A million thank yous extended to Safa for taking the time to document this important process for everyone else to learn from.  Please leave questions, comments, or ask a question in the questions section of the website.


The Convolution Integral Explained

Introduction to the convolution

Amongst the concepts that cause the most confusion to electrical engineering students, the Convolution Integral stands as a repeat offender.  As such, the point of this article is to explain what a convolution integral is, why engineers need it, and the math behind it.

In essence, the “convolution” of two functions (over the same variable, e.g. $f_1(t)$ and $f_2(t)$) is an operation that produces a separate third function that describes how the first function “modifies” the second one.  Conversely, the resulting function can be seen as how the second function “modifies” the first function.  Sometimes the result is used to describe how much the first two functions “have in common.”  In all honesty, the concept of the convolution of two functions is quite abstract, but the frequency at which it appears in nature grants its importance to scientists and engineers.  Ultimately the aim here is to identify its use to electrical engineers – so for now do not dwell solely on its mathematical significance.

A convolution of two functions is denoted with the operator “$*$“, and is written as:

$f_1(t)*f_2(t) = \int_{-\infty}^{\infty} f_1(\tau) \cdot f_2(t-\tau)\, d\tau$

where $\tau$ is used as a “dummy variable.”  To aid in understanding this equation, observe the following graphic:

Before diving any further into the math, let us first discuss the relevance of this equation to the realm of electrical engineering.

Why is the convolution integral relevant?

Most electrical circuits are designed to be linear, time-invariant (LTI) systems.  Being “linear” implies that scaling the input signal scales the output signal by the same factor.  Further, an LTI system that is excited by two independent signal sources will output the sum of its responses to each individual signal.  This extends to an infinite number of independent signal sources, and gives rise to the concept of superposition.  Put another way, if a function $x_1(t)$ causes an LTI system to output $y_1(t)$, then:

$a_1 \cdot x_1(t) \to a_1 \cdot y_1(t)$

Where $a_1$ is a multiplicative constant.  In addition to this, superposition allows us to say:

$a_1 \cdot x_1(t) + a_2 \cdot x_2(t) + \ldots \to a_1 \cdot y_1(t) + a_2 \cdot y_2(t) + \ldots$

Being a “time-invariant” system means it does not matter when the input signal is applied – a specific input signal will always result in the same output signal for a given LTI system.  Put mathematically, time-invariance can be expressed as:

$x_1(t) \to y_1(t) \Leftrightarrow x_1(t+\tau) \to y_1(t+\tau)$

where $\tau$ can be viewed as a time delay when dealing with signals through time (i.e. “time-domain signals”).  Though not directly, this concept (together with linearity) also signifies that an output signal cannot contain frequency components not present in the input signal.

The vast majority of circuits are LTI systems, each with a specific impulse response. The “impulse response” of a system is the system’s output when its input is fed an impulse signal – a signal of infinitesimally short duration.  A real-world “impulse signal” would be something like a lightning bolt – or any form of ESD (electro-static discharge).  Basically, any voltage or current that spikes in magnitude for a relatively short period of time may be viewed as an impulse signal.  The impulse response of a circuit will always be a time-domain signal, and exists because no signal can propagate through a circuit in zero time; each individual electron involved can only move so quickly through each component.  Typically, real-world electronic LTI systems exhibit an impulse response that consists of an initial spike in magnitude, followed by an everlasting, ever-decreasing exponential decay in signal magnitude.  The following image describes this graphically.

So, here’s the big deal: the fact that each LTI circuit has a specific impulse response function (here, referred to as $h(t)$) is very useful in predicting its behavior given a particular input signal (here, referred to as $x(t)$).  This is because the input signal itself may be viewed as an impulse train – a stream of continuous impulse functions, with infinitesimally short durations of time between each impulse.  This fact, along with superposition, allows one to find the output of an LTI system for an arbitrary input signal by summing the LTI system’s responses to each of the impulses that make up the input signal. By allowing the time between each “impulse” of the input signal to go to zero, this approach can be used to determine the output time-domain signal of an LTI system for any time-domain input signal.  For example, the following graphic shows the output of an RC circuit when fed with a square pulse:

What is seen here is the integral of the product of the impulse response and the input square wave as the square wave is stepped through time. In the convolution equation above, the operation is done with respect to $\tau$, a dummy variable.  In reality, we are taking the input signal, reflecting it in time about the origin (not evident with a symmetric square wave), shifting it, and determining the integral of the product at each value of the time shift $t$. Since the output of any physical LTI system is causal (meaning it cannot appear until the signal that excites it has been applied), we must mathematically step the reflected input through time to see how each impulse of the input contributes to the output through the system’s impulse response – again, achieved by integrating over $\tau$, the “time-delay” dummy variable.

A Convolution Example

To see how the convolution integral can be used to predict the output of an LTI circuit, observe the following example:

For an LTI system with an impulse response of $h(t) = e^{-2t}u(t)$, calculate the output, $y(t)$, given the input of:

$f(t) = e^{-t}u(t)$

The output of this system is found by solving:

$y(t) = h(t)*f(t) = \int_{0}^{\infty} h(\tau) \cdot f(t-\tau) d\tau$

We only integrate between $0$ and $+\infty$ because, if we define $t = 0$ as the time that the input signal $f(t)$ is applied, then both $h(t)$ and $f(t)$ have zero magnitude for any time $t < 0$.  Further, the factor $f(t - \tau)$ is zero for $\tau > t$, which is what reduces the upper limit of integration to $t$ in the next step.

From there, we calculate:

$y(t) = \int_{0}^{\infty} e^{-2\tau}u(\tau) \cdot e^{-(t-\tau)}u(t-\tau)\, d\tau = \int_{0}^{t}e^{-2\tau} \cdot e^{-(t-\tau)}\,d\tau$

Next, we can simplify and compute the integral:

$y(t) = \int_{0}^{t}e^{-2\tau} \cdot e^{-(t-\tau)}\,d\tau = e^{-t} \int_{0}^{t}e^{-\tau}\,d\tau = e^{-t}(1-e^{-t}) = e^{-t} - e^{-2t}$

Since $y(t) = 0$ for all $t < 0$, we can write the output $y(t)$ as:

$y(t) = (e^{-t}-e^{-2t})u(t)$

This result $y(t)$ describes the output function for an LTI system with an impulse response $h(t)$ when fed the input signal $f(t)$.

5 Steps to perform mathematical convolution

Often, one may wish to compute the convolution of two signals that can’t be described with one function of time alone.  For arbitrary signals, such as pulse trains or PCM signals, the convolution at any time t can be computed graphically.  For signals whose individual “sections” can be described mathematically, follow these steps to perform a convolution:

1.) Choose one of the two functions ($h(\tau)$ or $f(\tau)$), and leave it fixed in $\tau$-space.

2.) Reflect the other function about the vertical axis (replace $\tau$ with $-\tau$), so that it is time-reversed.

3.) Shift the inverted signal through the $\tau$ axis by $t_0$ seconds.  Choose to shift the signal to the first “section” of the fixed function that is described by the same equation.  The inverted signal (say, $f(- \tau)$), now shifted, represents $f(t_0 - \tau)$, which is basically a “freeze frame” of the output after the input signal has been fed to the LTI system for $t_0$ seconds.

4.) The integral of the two functions, after shifting the inverted function by $t_0$ seconds, is the value of the convolution integral (i.e. output signal) at $t = t_0$.

5.) Repeat this procedure through all “sections” of the function fixed in $\tau$-space.  By doing this, you can compute the value of the output at any time $t$!
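The five steps above can be sketched numerically. The following plain-Python function (a Riemann-sum approximation, not an exact integral) flips $f$, shifts it by $t_0$, and integrates the product; using the $h(t)$ and $f(t)$ from the earlier worked example, it lands close to the analytic result $e^{-t} - e^{-2t}$:

```python
import math

def convolve_at(h, f, t0, dt=1e-3, t_max=20.0):
    """Approximate (h*f)(t0): flip f in time, shift by t0, integrate."""
    total = 0.0
    steps = int(t_max / dt)
    for i in range(steps):
        tau = i * dt
        total += h(tau) * f(t0 - tau) * dt  # f(t0 - tau): flipped, shifted copy
    return total

h = lambda t: math.exp(-2 * t) if t >= 0 else 0.0   # impulse response
f = lambda t: math.exp(-t) if t >= 0 else 0.0       # input signal

approx = convolve_at(h, f, t0=1.0)
exact = math.exp(-1.0) - math.exp(-2.0)   # y(1) = e^-1 - e^-2
print(abs(approx - exact) < 1e-3)  # True
```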

Useful Properties

The following is a list of useful properties of the convolution integral that can help in developing an intuitive approach to solving problems:

1.) Commutative Property:

$f_1(t)*f_2(t) = f_2(t)*f_1(t)$

2.) Distributive Property:

$f_1(t)*[f_2(t)+f_3(t)] = f_1(t)*f_2(t) + f_1(t)*f_3(t)$

3.) Associative Property:

$f_1(t)*[f_2(t)*f_3(t)] = [f_1(t)*f_2(t)]*f_3(t)$

4.) Shift Property:

if $f_1(t)*f_2(t) = c(t)$

then $f_1(t-T_1)*f_2(t-T_2) = c(t-T_1-T_2)$

5.) Convolution with an Impulse results in the original function:

$f(t)* \delta (t) = f(t)$ where $\delta (t)$ is the unit impulse function

6.) Width Property:

The convolution of a signal of duration $T_1$ and a signal of duration $T_2$ will result in a signal of duration $T_3 = T_1 + T_2$
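The width property can be checked with a discrete convolution of two rectangular pulses (a plain-Python sketch; the durations are arbitrary):

```python
def convolution_duration(sig1, sig2, dt):
    """Discretely convolve two sampled signals and return the
    duration (in seconds) over which the result is nonzero."""
    n1, n2 = len(sig1), len(sig2)
    out = [sum(sig1[j] * sig2[i - j]
               for j in range(max(0, i - n2 + 1), min(i + 1, n1)))
           for i in range(n1 + n2 - 1)]
    nonzero = [i for i, v in enumerate(out) if abs(v) > 1e-12]
    return (nonzero[-1] - nonzero[0] + 1) * dt if nonzero else 0.0

dt = 0.01
pulse1 = [1.0] * int(1.0 / dt)   # T1 = 1 s
pulse2 = [1.0] * int(2.0 / dt)   # T2 = 2 s
print(convolution_duration(pulse1, pulse2, dt))  # ~3.0 s, i.e. T1 + T2
```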

Convolution Table

Finally, here is a Convolution Table that can greatly reduce the difficulty in solving convolution integrals.

Thank you so much to Safa Khamis @ Kansas State University for taking the time to write this tutorial for Engineersphere and the electrical engineering community.

Finding the Inverse of a Matrix

Matrix manipulations and properties

Finding the inverse of a matrix is much more complex than finding the inverse of a number. All nonzero real numbers have an inverse (i.e. $6^{-1}= \frac{1}{6}$). However, not all matrices have an inverse. There are several characteristics that allow us to visibly determine whether a matrix has an inverse, but we will only focus on one: a matrix must be square (i.e. 2×2, 3×3, etc.) to have an inverse. Performing the following manipulations will be a waste of time if a matrix is not square. It is also important to know the inverse matrix property. Using my example above, $6 \cdot 6^{-1} = 6 \cdot \frac{1}{6} = 1$, and similarly with matrices, $A \cdot A^{-1} = I_n$, where $I_n$ is the identity matrix (the diagonal from top left to bottom right contains all 1’s, and everything else is 0). We take advantage of this property when solving systems of matrices.

In words, the general algorithm for determining the existence of an inverse matrix is to manipulate the matrix into row reduced echelon form (rref). If the rref matrix is an identity matrix, then the inverse matrix exists. Hang on now – earlier I mentioned that there were other, visible characteristics that allow us to determine the existence of an inverse matrix, but now I’m asking you to perform a tedious process (without a calculator) with the same goal? Wouldn’t it be easier to first determine if finding the rref of the matrix is worthwhile? You’re right, except we are going to make a simple manipulation, and at the same time that we finish our rref process and determine that an inverse matrix exists, we will have found the inverse matrix! How do we do that? We will create an augmented matrix between our matrix in question, $A$, and the identity matrix $I_n$ of the same size as $A$. We then perform the same rref process on the augmented matrix $[A \mid I_n]$. If the portion of the augmented matrix previously belonging to $A$ reduces to an identity matrix (indicating the existence of $A^{-1}$), then the portion previously belonging to the identity matrix will equal $A^{-1}$.

Some matrix math

Now, for the math…

Suppose we are asked to find the inverse of the following matrix:

$\begin{bmatrix}1&3&3\\1&4&3\\1&3&4\end{bmatrix}$

First, we must set up the augmented matrix discussed above. Notice that I have simply placed the identity matrix (of the same size as $A$ ) on the right of matrix $A$.

$\begin{bmatrix}1&3&3&1&0&0\\1&4&3&0&1&0\\1&3&4&0&0&1\end{bmatrix}$

Finding the rref of an augmented matrix

Next, we will attempt to find the rref of the augmented matrix. If the portion of the augmented matrix previously belonging to $A$ yields an identity matrix, $A$ is invertible.

rref $\begin{bmatrix}1&3&3&1&0&0\\1&4&3&0&1&0\\1&3&4&0&0&1\end{bmatrix}$ = $\begin{bmatrix}1&0&0&7&-3&-3\\0&1&0&-1&1&0\\0&0&1&-1&0&1\end{bmatrix}$

Ok great! The left half of our augmented matrix reduced to an identity matrix. That means two things: the matrix has an inverse, and we’ve already found it. If you recall from above, $A^{-1}$ is the right half of the augmented matrix (after finding its rref, of course). So we can conclude:

$\begin{bmatrix}1&3&3\\1&4&3\\1&3&4\end{bmatrix}^{-1} = \begin{bmatrix}7&-3&-3\\-1&1&0\\-1&0&1\end{bmatrix}$

If the rref of the augmented matrix had yielded anything other than an identity matrix on the left, we would conclude that $A^{-1}$ does not exist. This method allows us to determine both whether $A^{-1}$ exists and, if so, its entries – for a square matrix of any size.
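The whole procedure can be automated. Here is a minimal Gauss–Jordan sketch in plain Python (using exact `Fraction` arithmetic so the rref is not spoiled by rounding), which reproduces the example above:

```python
from fractions import Fraction

def invert(a):
    """Gauss-Jordan elimination on the augmented matrix [A | I].
    Returns A^-1 as a list of rows, or None if A is singular."""
    n = len(a)
    # Build [A | I] with exact rational arithmetic
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Find a pivot row for this column and swap it into place
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return None                      # no pivot => not invertible
        m[col], m[pivot] = m[pivot], m[col]
        # Scale the pivot row, then eliminate the column elsewhere
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]            # right half is A^-1

inv = invert([[1, 3, 3], [1, 4, 3], [1, 3, 4]])
print(inv == [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]])  # True
```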