UMTS Reference Architecture

Continuing on with wireless-communications subjects: the architecture of a UMTS system is the first thing to understand if you want to undertake an education such as this.  It consists of three distinct components:

  1. The User Equipment (UE)
  2. The UMTS Terrestrial Radio Access Network (UTRAN)
  3. The Core Network (CN)

As with any wireless network out there, the main purpose is to provide access to services (data, voice, etc.).  The services network is divided into the Public Switched Telephone Network (PSTN), which provides voice and special telephone-related services (look it up on wiki), and the Internet, which provides a wide range of packet data services such as email or access to the World Wide Web.  These things probably all sound very familiar to you, and they should, because they are critical to today’s society; almost everyone living in the modern world uses at least one of these services at some point in their daily lives.

The UMTS mobile, also known as the User Equipment (UE), interfaces with the UTRAN via the UMTS physical layer radio interface.  In addition to radio access, the UE provides the subscriber with access to services and profile information.  For example, the cell phone you carry in your pocket is the UE (user equipment) that interfaces with the cell phone towers that companies like Verizon, AT&T, and Sprint provide for you.

In UMTS, there are two Core Network (CN) configurations, the Circuit Switched CN (CS-CN) and Packet Switched CN (PS-CN).  The CS-CN is based on the GSM Public Land Mobile Network (PLMN) and provides functions such as connectivity to the PSTN, circuit telephony services such as voice, and supplementary services such as call forwarding, call waiting, etc.  The PS-CN is based on the GSM General Packet Radio System (GPRS) PLMN, which provides access to the Internet and other packet data services.

Both core networks connect to the UMTS Terrestrial Radio Access Network.  The UTRAN has two options for its air interface operations.  One option is Time Division Duplex (TDD), which makes use of a single 5 MHz carrier for communication between the UE and the UTRAN.  The other option is the Frequency Division Duplex (FDD), which provides full duplex operation using 5 MHz of spectrum in each direction to and from the UTRAN.

How do all of these components fit together?  Check out the image below.


UTRAN Components

The UTRAN consists of one or more Radio Network Subsystems (RNS).  An RNS consists of one Radio Network Controller (RNC) and several Node Bs.  The RNC and the Node B are the two essential component types of UTRAN.  Apart from these, UTRAN requires Operation Maintenance Centers (OMC) to perform Operation Administration and Maintenance (OA&M) functions on the Node Bs and RNCs.  Yes, I know the acronyms are getting a little out of hand, but it is essential to learn them if you want to speak the language!  Engineers only speak to each other in acronyms, and it is very annoying indeed.

  • Radio Network Controller: The Radio Network Controller is the master of UTRAN.  It handles all aspects of radio resource management within the radio network subsystem.  UMTS uses the term RNC instead of base station controller in order to stress the independence of UTRAN from the Core Network (CN).  The RNC interfaces with core network components such as the Mobile Switching Center (MSC) and the Serving GPRS Support Node (SGSN) to route signaling and traffic from the User Equipment (UE).  The RNC also interfaces with other RNCs within UTRAN to provide wide-area mobility (very important!)
  • Node B: Within this network, a Node B is the radio transmission and reception unit of UTRAN (remember above, I explained that the UE is the cell phone you carry in your pocket).  It handles radio transmission and reception for multiple cells within a coverage area.  If you think about the area over which a given cell tower can transmit its signal with the proper quality of service (QoS), you can picture multiple cells falling within one coverage area, say if the towers are close together.  The Node B implements CDMA-specific functionality such as encoding, interleaving, spreading, scrambling, and modulation.  Node Bs correspond to what were known as Base Transceiver Subsystems (BTS) in second-generation systems.
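To make the spreading/despreading idea concrete, here is a toy sketch in Python.  The 8-chip code, the ±1 bit convention, and the function names are all illustrative assumptions: real Node Bs use OVSF channelization codes and long scrambling sequences, not this simplified code.

```python
# Toy illustration of CDMA spreading and despreading (the underlying
# idea only -- NOT the actual UMTS channelization/scrambling codes).

SPREADING_CODE = [1, -1, 1, -1, 1, -1, 1, -1]  # hypothetical 8-chip code

def spread(bits):
    """Multiply each data bit (+1/-1) by every chip of the code."""
    return [b * c for b in bits for c in SPREADING_CODE]

def despread(chips):
    """Correlate each chip group with the code to recover the bits."""
    n = len(SPREADING_CODE)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], SPREADING_CODE))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, -1, 1]
assert despread(spread(data)) == data  # round trip recovers the data
```

The correlation step is why CDMA works: a receiver using a different (orthogonal) code correlates to roughly zero instead.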

Digital Coding Techniques

Hey!  This article has a simple purpose: to teach you basic functionality and terminology related to modern UMTS-CDMA digital coding techniques.  One swift read of this information-packed kick in the face will leave you trembling at the knees with curiosity for more.  Well, maybe not.  But the first step in the UMTS-CDMA system is to apply error correction techniques so that errors at the receiver can be corrected.  Various techniques can be employed at the transmitter to protect against errors.  You have probably studied one or more of these in your days if you're an electrical or computer engineering student, professor, or professional.  One of these techniques is the use of Forward Error Correction (FEC) codes, which are applied to the data before it is transmitted via the physical layer.

Wireless is an inherently error-prone medium in which to operate our delicate signals, so many error correction techniques are employed.  In UMTS-CDMA systems, thanks to the large available bandwidth, a variety of coding techniques are used.  Three error-correcting methods in particular come to mind:

Convolutional Encoding

Convolutional encoding provides the ability to correct errors at the receiver: the redundancy added by the encoder allows the decoder to remove errors from the received signal.  As a result, lower transmission power can be used; the lower power does introduce more errors, but some amount of them can be tolerated since they are recoverable through convolutional decoding.  The convolutional encoder encodes input data bits into output symbols.  A data bit is entered into the first register at each clock cycle, and the data bit in the last register is dumped out.  Bits are tapped at various positions and XORed to produce the encoded symbols.  Convolutional encoding is typically used for voice and low-data-rate applications.  Here are the main points to keep in mind:

  • Provides the ability to detect and correct errors at the receiver.
  • 10^(-3)  BER, typically used for voice and low data rates.
  • Uses history of bits to recover from errors.
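The shift-register-and-XOR description above can be sketched in a few lines of Python.  This is a generic rate-1/2, constraint-length-3 textbook encoder; the tap positions (generators 111 and 101) and the function name are illustrative choices, not the specific UMTS configuration.

```python
# Sketch of a rate-1/2 convolutional encoder, constraint length 3
# (a common textbook example, not a specific UMTS polynomial set).

def conv_encode(bits):
    """For each input bit, emit two output symbols formed by XORing
    tapped positions of a 3-bit shift register."""
    s1 = s2 = 0  # shift register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111: current ^ s1 ^ s2
        out.append(b ^ s2)       # generator 101: current ^ s2
        s1, s2 = b, s1           # shift: the oldest bit is dumped out
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Note how each output symbol depends on the current bit *and* the register history; that history is exactly what the decoder exploits to recover from errors.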

Turbo Encoding

Turbo codes are a newer class of error correction codes used in digital communication systems.  Turbo codes have been shown to perform better for high-rate data services (which is what we crave) with stringent error rate requirements on the order of a 10^(-6) Bit Error Rate (BER).  The turbo encoder consists of two constituent convolutional encoders, both of which code the same data.  The first is fed the data in its original order.  The second uses a permuted form of the input data; the permuting is accomplished by an interleaver, which will be discussed in detail in the following post(s).  Again, the main points:

  • 10^(-6)  BER, suitable for high data rates.
  • Uses convolutional encoders in parallel to increase reliability.
  • Increased delays but better error correction capabilities.
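The parallel-encoder structure can be sketched as follows.  This is a deliberately simplified sketch: the parity function is a toy shift-register encoder and a random permutation stands in for the interleaver, whereas real turbo codes use recursive systematic constituent encoders and carefully designed interleavers.

```python
import random

def conv_parity(bits):
    """One parity stream from a simple shift-register encoder (toy)."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)
        s1, s2 = b, s1
    return out

def turbo_encode(bits, perm):
    """Systematic bits plus two parity streams: one over the data in
    its original order, one over a permuted (interleaved) copy."""
    interleaved = [bits[i] for i in perm]
    return bits, conv_parity(bits), conv_parity(interleaved)

data = [1, 0, 1, 1, 0, 0, 1, 0]
perm = list(range(len(data)))
random.seed(0)
random.shuffle(perm)           # stands in for the interleaver design
sys_bits, parity1, parity2 = turbo_encode(data, perm)
print(sys_bits, parity1, parity2)
```

Because the two parity streams describe the same data in different orders, the decoder can iterate between them, which is the source of both the improved error correction and the increased delay noted above.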

Block Interleaving

Block interleaving protects data against fading and bursty errors (imagine a sudden deep fade in the received signal wiping out a run of consecutive bits).  It accomplishes this by providing time diversity: adjacent bits are separated in time before transmission over the air.  Interleaving is typically used together with FEC codes, since FEC codes on their own are not well suited to handle bursty errors.

  • Method to shuffle bits to prevent errors during deep fade.
  • Provides time diversity.
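The classic block interleaver is simply "write the symbols into a matrix row by row, read them out column by column."  A quick sketch (the matrix dimensions and function names are arbitrary illustration choices):

```python
# Minimal block interleaver: write row-wise, read column-wise.
# A burst of consecutive air-interface errors lands on symbols that
# end up far apart after deinterleaving, where FEC can handle them.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, 3, 4)
print(tx)                                  # neighbors are now spread out
assert deinterleave(tx, 3, 4) == data      # perfect round trip
```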

All of these techniques will be described in detail, separately, in the following three articles.

The Evolution of 3G Wireless Technologies

Several different types of 3G wireless technologies are defined and deployed or planned today, with more on the way.  These are, of course, successors to the 2G technologies that previously dominated the airwaves.


CDMA2000 is the successor to IS-95 systems.  CDMA2000 defines two different options for 3G technology, which differ in the amount of frequency spectrum used.  Spreading Rate 1 (SR1) operates in a 1.25 MHz band and is known as a 1x system.  Another proposal, referred to as 1xEV-DO (1x Evolution for Data Optimized), is a data-only solution that enables a bandwidth of 2 Mbps without any mechanism for voice.  This is the data rate we are all familiar with: the 3G 2 Mbps data connection.

The Universal Mobile Telecommunications System (UMTS)

The Universal Mobile Telecommunications System (UMTS) is a successor to GSM/GPRS systems.  There are also two options for the UMTS networks.  The Frequency Division Duplex (FDD) option uses spectrum bands which are paired together.  For example, two different 5 MHz bands are used for uplink and downlink.  The Time Division Duplex (TDD) option uses an unpaired band.  In other words, the same 5 MHz band is shared between uplink and downlink for TDD.

Universal Wireless Consortium for IS-136 systems

The UWC-136 (Universal Wireless Consortium for IS-136 systems) was originally considered to be the evolution for IS-136 systems.  However, the IS-136 system operators eventually decided to follow the path of CDMA2000 or UMTS.

Why did we need 3G Technology?

Back in the late 1990s, when most readers out there were still playing in the sandbox, the International Telecommunication Union (ITU) set the requirements for the next generation of wireless networks (which is why they are called Third Generation, or 3G).  One of the many requirements is to reach peak data rates of at least 2 Mbps.  This is more relevant to the downlink, since in the Internet world the majority of traffic flows from server to client.

To meet this new high-speed requirement, the second-generation wireless networks went through several evolutions before eventually being replaced.  The GSM evolution includes GPRS and EDGE, which provide packet data services and represent intermediate solutions until a UMTS Release 99 system is deployed.  1xEV-DO is one possible evolution path from 1xRTT, and HSDPA is a Release 5 feature of UMTS.

So how did UMTS Evolve?

UMTS is the network of choice these days.  Yes, UMTS is 3G… if you haven’t caught that yet.  For the nerds out there who are curious, the evolution of UMTS has progressed over the years in the following fashion:

UMTS Release 99

  • 2 Mbps theoretical peak packet data rates
  • 384 kbps (practical)

UMTS Release 5

  • HSDPA (14 Mbps downlink theoretical)
  • IMS (IP Multimedia Subsystem for multimedia)
  • IP UTRAN (IP-based transport for scalability and lower cost)

UMTS Release 6

  • HSUPA (up to 5.76 Mbps uplink)
  • MBMS (Multimedia Broadcast Multicast Service)

UMTS Release 7

  • Multiple Input Multiple Output (MIMO) Antenna Systems

The Fourier Integral / Transform Explained

Magic is real, and it all comes from the Fourier Integral.  But one doesn’t become a wizard without a little reading first – so, the purpose of this article is to explain the Fourier Integral theoretically and mathematically.

Before reading any further, it is important to first understand this: in mathematics, there is a theorem which states that any well-behaved periodic function of time may be “reconstructed” exactly from the summation of an infinite series of harmonically related sine waves.  The generalized theory itself is referred to as a “Fourier Series.”  For use with arbitrary electronic time-domain signals of period T_0, it may be expressed as:

f(t) = a_0 + \displaystyle\sum_{n=1}^{\infty}\left[a_n\cos(nw_ot) + b_n\sin(nw_ot)\right]

over the range:

t_0 \le t \le t_0 + T_0


  • a_0 is the magnitude of the 0th harmonic (the DC component)
  • a_n is the magnitude of the nth cosine-wave harmonic
  • b_n is the magnitude of the nth sine-wave harmonic
  • w_o is the fundamental frequency
  • t is the variable representing instants in time
  • n is the variable representing the specific harmonic, and is always an integer
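As a quick numeric sanity check of the series, here is a partial-sum reconstruction of a ±1 square wave of period 2π.  Its Fourier Series is a standard textbook result: a_0 = 0, all cosine terms vanish, and only odd sine harmonics appear, with b_n = 4/(nπ).

```python
import math

# Partial Fourier-series reconstruction of a +/-1 square wave of
# period 2*pi: only odd sine harmonics, with b_n = 4/(n*pi).

def square_partial(t, n_terms):
    return sum(4 / (math.pi * n) * math.sin(n * t)
               for n in range(1, 2 * n_terms, 2))

# At t = pi/2 the square wave equals 1; the partial sums close in on it.
for terms in (1, 10, 100):
    print(terms, round(square_partial(math.pi / 2, terms), 4))
```

Adding more harmonics drives the sum toward the true value, which is exactly the reconstruction the theorem promises.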

This monumental discovery was first announced on December 21, 1807 by historic gentleman Baron Jean-Baptiste-Joseph Fourier.

Joseph Fourier

In order to go from the Fourier Series to the Fourier Transform, it is necessary to express the previous Fourier Series as a series of everlasting exponential functions.  Using an orthogonal basis set of signals described by e^{jnw_ot}, with magnitudes D_n, we now write the Fourier Series as:

f(t) = \displaystyle\sum_{n=- \infty}^{\infty}D_ne^{jnw_ot}

where j is \sqrt{-1}.

What is the Fourier Integral?

The Fourier Integral, also referred to as Fourier Transform for electronic signals, is a mathematical method of turning any arbitrary function of time into a corresponding function of frequency.  A signal, when transformed into a “function of frequency”, essentially becomes a function that expresses the relative magnitudes of each harmonic of a Fourier Series that would be summed to recreate the original time-domain signal. To see this, observe the following figures:

Figure 1. A Square Wave Pulse, in time

In order to rebuild a square wave with sines and cosines only, it is necessary to determine the magnitudes of each harmonic used in the Fourier Series, or rather, the Fourier Integral (for continuous time-domain signals).  The relative magnitudes of these needed harmonics can be displayed graphically as a function of frequency (widely known as a signal’s frequency spectrum):

Figure 2. The Fourier Integral, aka Fourier Transform, of a square pulse is a Sinc function.  The Sinc function is also known as the Frequency Spectrum of a Square Pulse.

Though the recreation of a signal using an infinite series of sines and cosines is impossible to achieve in the lab, one may get very close: close enough that the most advanced lab equipment wouldn’t be able to measure the error within its tolerance specifications.  This allows engineers to use Fourier analysis to work with time-domain signals such as radio signals, television signals, satellite signals, and just about any signal you can think of.  By viewing a signal according to the frequency components contained within it, electrical engineers may concern themselves with magnitude changes in frequency only, and no longer need to worry about the signal’s magnitude changes through time.  Not only is this a very practical concept when working in the lab, it also greatly simplifies the mathematics behind signal conditioning in general.  In fact, much of the communications industry owes its success to the Fourier Transform, in antenna design and a plethora of other applications.
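The square-pulse/sinc pair can be seen numerically.  The sketch below approximates the Fourier Integral of a unit-height pulse of width T by a Riemann sum and compares it against the closed form 2·sin(wT/2)/w; the pulse width and the test frequency are arbitrary choices for illustration.

```python
import cmath, math

# Numerically approximate the Fourier Integral of a unit-height
# square pulse of width T, then compare it against the closed-form
# sinc envelope 2*sin(w*T/2)/w.

def pulse_spectrum(w, T=2.0, steps=20000):
    dt = T / steps
    # integrate e^{-jwt} over -T/2..T/2 (the pulse is 1 there, 0 elsewhere)
    return sum(cmath.exp(-1j * w * (-T / 2 + (k + 0.5) * dt)) * dt
               for k in range(steps))

w = 1.7
numeric = pulse_spectrum(w).real
closed_form = 2 * math.sin(w * 2.0 / 2) / w
print(numeric, closed_form)  # the two agree closely
```

Sweeping w traces out the sinc-shaped frequency spectrum of the pulse.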

The math behind the Fourier Transform

The derivations that follow have been summarized from Chapter 4 of the textbook “Signal Processing and Linear Systems” by B.P. Lathi, a fine book for students of Communication Systems.

We begin by considering some arbitrary, aperiodic time-domain signal.  An example of this kind of wave would be the output of a microphone after a man speaks a few words into it.  For the actual signal generated by the changes in voltage as the man spoke, we can use Fourier Analysis to describe it as a summation of exponential functions if we instead desire to reconstruct a periodic signal composed of the same voice signal repeating every T_0 seconds.  For an accurate description, it is important that T_0 is long enough such that the repeating arbitrary signals do not overlap.  However, if we let T_0 approach \infty, then this “periodic” signal is simply just the voice signal (or, any general arbitrary function) in time we wanted to describe initially.  Mathematically, we express:

\displaystyle\lim_{T_0\to\infty}f_{T_0}(t) = f(t)

where f(t) is the time-domain function we wish to apply the Fourier Transform on (here, the arbitrary “voice” signal).  For the above equation to be true, f_{T_0} is equal to:

f_{T_0}(t) = \displaystyle\sum_{n=- \infty}^{\infty}D_ne^{jnw_ot}


D_n = \frac{1}{T_0} \int^\frac{T_0}{2}_\frac{-T_0}{2} f_{T_0}(t)e^{-jnw_ot}dt

w_o = \frac{2\pi}{T_o}

It is important to note here that in practice, the shape (aka “envelope”) of a signal’s frequency spectrum is what is of main interest, and the magnitude of the components within the spectrum comes secondary.  This is because amplifiers and other signal-conditioning circuits may be built to alter the magnitude in any way one wishes, and will not affect signal frequencies (so long as the circuits are LTI systems).  Analyzing the envelope of a signal’s Fourier Transform allows one to use intuitive and mathematically-simplified approaches to signal-processing in general, which we shall see later.  For this reason (and also as T_0 approaches \infty) let:

F(w) = \int^{\infty}_{-\infty} f(t)e^{-jwt}dt

Notice that F(w) is simply D_n without the constant multiplier \frac{1}{T_0}, such that:

D_n = \frac{1}{T_0}F(nw_o)

which implies that f_{T_0}(t) may be written:

f_{T_0}(t) = \displaystyle\sum_{n=- \infty}^{\infty}\frac{F(nw_o)}{T_0}e^{jnw_ot}

Observation of this fact reveals insight: the shorter the period T_0, the larger the magnitude of the coefficients.  On the other hand, as T_0 \rightarrow \infty, the magnitude of every frequency component approaches 0, which is why engineers choose to analyze spectrum envelopes.  So, instead of visualizing absolute frequency magnitudes, consider that the frequency spectrum expresses the magnitude density per unit of bandwidth (per Hz).  And since:

T_0 = \frac{2\pi}{w_o}


w_o = \frac{2\pi}{T_0}


\Delta w_o = \frac{2\pi}{T_0}


f_{T_0}(t) = \displaystyle\sum_{n=- \infty}^{\infty}\frac{F(n\Delta w_o)\Delta w_o}{2\pi}e^{jn\Delta w_ot}

In the limit as T_0 \rightarrow \infty we see:

f(t) = \displaystyle\lim_{T_0\to\infty}f_{T_0}(t) = \frac{1}{2\pi} \int^{\infty}_{-\infty}F(w)e^{jwt}dw

which is referred to as the Fourier Integral.  F(w) is referred to as the Fourier Transform of the original aperiodic function f(t), and we express this relationship as:

f(t) \Leftrightarrow F(w)

A Fourier Transform Example

This example is from the same textbook as the previous derivation, and can be found on page 239.

Find the Fourier Transform of: e^{-at}u(t) where a is an arbitrary constant.

To do this, we apply the Fourier Integral to the function e^{-at}u(t) as follows:

F(w) = \int^{\infty}_{-\infty}e^{-at}u(t)e^{-jwt}dt

Because of the u(t) factor, we only integrate from 0 \rightarrow \infty.  We simplify to:

F(w) = \int^{\infty}_{0} e^{-(a+jw)t}dt = \frac{-1}{a+jw}e^{-(a+jw)t}\mid^{\infty}_{0}

Also, we know that |e^{-jwt}| = 1.  So, for a > 0, as t \rightarrow \infty:

e^{-(a+jw)t} = e^{-at}e^{-jwt} \rightarrow 0


F(w) = \frac{-1}{a+jw}e^{-(a+jw)t}\mid^{\infty}_{0} = 0 - \frac{-1}{a+jw}e^{-(a+jw)0} = \frac{1}{a+jw}

for a > 0.
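The closed-form answer is easy to sanity-check numerically: approximate the integral with a Riemann sum over a long-but-finite interval and compare it with 1/(a + jw).  The values of a and w and the integration limits below are arbitrary choices.

```python
import cmath

# Numerical sanity check of the worked example: integrate
# e^{-at} * e^{-jwt} from 0 up to a large t_max (the tail e^{-a*t_max}
# is negligible) and compare against the closed form 1/(a + jw).

def transform_numeric(a, w, t_max=40.0, steps=200000):
    dt = t_max / steps
    return sum(cmath.exp(-(a + 1j * w) * (k + 0.5) * dt) * dt
               for k in range(steps))

a, w = 1.0, 2.5
print(transform_numeric(a, w), 1 / (a + 1j * w))  # nearly identical
```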

Useful Fourier Transform Properties

The relationship between f(t) and F(w) exhibits beautiful symmetries that help one develop an intuitive approach to signal analysis.  Among all the concepts within electrical engineering, the properties relating a time-domain function and its Fourier Transform are among the most important to understand.  Observe the following properties, which apply for all f(t) \Leftrightarrow F(w):

1.) Inverse Fourier Transform: Gives an equation to recover the time-domain function f(t) from F(w).

f(t) = \frac{1}{2\pi} \int^{\infty}_{-\infty}F(w)e^{jwt}dw

2.) Fourier Transform: Gives an equation to obtain the frequency-domain function F(w) from f(t).

F(w) = \int^{\infty}_{-\infty}f(t)e^{-jwt}dt

3.) Symmetry Property: For a given pair of a time-domain signal and its Fourier transform, we note that the time-domain envelope is different in shape when compared to the frequency-domain envelope.  However, switching the shape of the two functions with respect to domain (time or frequency), will result in the same envelopes except with different scaling coefficients.  For example, a square pulse through time has a frequency spectrum described by a sinc function, and a sinc function through time results in a frequency spectrum described by a square pulse.

F(t) \Leftrightarrow 2\pi f(-w)

4.) Scaling Property: Time-scaling a time-domain signal (by a constant a) results in a magnitude-and-frequency scaling of the signal’s corresponding frequency spectrum.  This also signifies that the longer a signal exists through time, the narrower the bandwidth (the collection of frequency components needed to rebuild the signal) of its frequency spectrum.

f(at) \Leftrightarrow \frac{1}{|a|}F(\frac{w}{a})

5.) Time-Shifting Property: Time-shifting (delaying or advancing) a time-domain signal results in a phase shift in each of the everlasting frequency components needed to rebuild it.  The frequency spectrum is otherwise unchanged: only the phase of each component is shifted.

f(t-t_0) \Leftrightarrow F(w)e^{-jwt_0}

6.) Frequency-Shifting Property: Multiplying a time-domain signal by a sinusoid of some frequency w_0, a method which begets amplitude and frequency modulation (AM/FM), leaves the frequency spectrum unchanged except for a shift of each individual frequency component by w_0.

f(t)e^{jw_0t} \Leftrightarrow F(w-w_0)
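As a quick check of the time-shifting property (5), the sketch below computes the Fourier Integral of the shifted decaying exponential f(t - t_0) = e^{-a(t-t_0)}u(t - t_0) numerically and compares it with F(w)e^{-jwt_0} = e^{-jwt_0}/(a + jw); the values of a, w, and t_0 are arbitrary.

```python
import cmath

# Numeric check of the time-shifting property, using the decaying
# exponential e^{-at}u(t) from the example above.

def ft_shifted(a, w, t0, t_max=40.0, steps=200000):
    """Fourier Integral of f(t - t0), where f(t) = e^{-at}u(t).
    Substituting tau = t - t0 gives the integral coded below."""
    dt = t_max / steps
    return sum(cmath.exp(-a * ((k + 0.5) * dt)) *
               cmath.exp(-1j * w * ((k + 0.5) * dt + t0)) * dt
               for k in range(steps))

a, w, t0 = 1.0, 2.0, 0.75
lhs = ft_shifted(a, w, t0)
rhs = cmath.exp(-1j * w * t0) / (a + 1j * w)  # F(w) * e^{-j w t0}
print(abs(lhs - rhs))  # close to zero
```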

Lastly, these tables (table 1, table 2) can greatly simplify Fourier analysis when used in signal processing.


The Evolution of Wireless Technologies

Cellular systems have come a long way since their introduction in the 1980s.  The evolution progressed from First Generation (1G) systems to Second Generation (2G) systems.  Now, Third Generation (3G) systems are being deployed.

1G systems introduced the cellular concept, in which multiple antenna sites are used to serve an area.  The coverage of a single antenna site is called a cell.  A cell can serve a certain number of users, and higher-system capacity can be achieved by creating more cells with smaller coverage areas.  One distinguishing factor of 1G systems is that they make use of analog radio transmissions, so user information, such as voice, is never digitized.  As such, they are best suited for voice communications, since data communications can be cumbersome.

The migration of 1G analog technologies toward 2G technologies began in the late 1980s and early 1990s.  The primary motivation was increased system capacity, which was achieved by using more efficient digital radio techniques that enabled the transmission of digitized, compressed speech signals.  These digital radio techniques also supported data services with data rates as high as 14,400 bits per second (14.4 kbps) in some systems.  2G data communication is typically done using circuit-switched techniques, which are not very efficient for sending packet data such as that sent on the Internet.  This inefficiency makes the use of wireless data more expensive for the end user.

The next step in the evolution is from 2G to 3G, which started in the year 2000.  The new key feature of 3G systems is the support of high-speed data services with data rates as high as 2 million bits per second (2 Mbps).  Data can be transferred using packet-switching techniques rather than the circuit-switching approach.  Therefore, it is more efficient and less expensive.  This opens up the possibility of cost-effective Internet access, access to corporate intranets, and a host of multimedia services.

If you want to read more about the evolution of wireless networks and WCDMA radio networks in general, please stay tuned for the next several editions where I will go into details.

Upcoming topics include, but are not limited to:

  • Physical layer functions
  • W-CDMA Channels
  • Basic call setups
  • Data session setups
  • Service reconfigurations
  • UTRAN mobility management
  • Inter-system procedures
  • RF design & analysis of UMTS radio networks
  • The evolution of UMTS
  • Architectures