UMTS Reference Architecture

Continuing on with wireless-communications subjects, the first thing to understand about a UMTS system is its architecture.  It consists of three distinct components:

  1. The User Equipment (UE)
  2. The UMTS Terrestrial Radio Access Network (UTRAN)
  3. The Core Network (CN)

As with any wireless network out there, the main purpose is to provide access to services (data, voice, etc.).  The services network is divided into the Public Switched Telephone Network (PSTN), which provides voice and special telephone-related services, and the Internet, which provides a wide range of packet data services such as email and access to the World Wide Web.  These things probably all sound very familiar to you, and they should, because they are critical to today's society, and almost everyone living in the modern world uses at least one of these services in their daily life.

The UMTS mobile, also known as the User Equipment (UE), interfaces with the UTRAN via the UMTS physical layer radio interface.  In addition to radio access, the UE provides the subscriber with access to services and profile information.  For example, the cell phone you carry in your pocket is the UE that interfaces with the cell towers that carriers like Verizon, AT&T, and Sprint provide for you.

In UMTS, there are two Core Network (CN) configurations, the Circuit Switched CN (CS-CN) and the Packet Switched CN (PS-CN).  The CS-CN is based on the GSM Public Land Mobile Network (PLMN) and provides functions such as connectivity to the PSTN, circuit telephony services such as voice, and supplementary services such as call forwarding, call waiting, etc.  The PS-CN is based on the GSM General Packet Radio Service (GPRS) PLMN, which provides access to the Internet and other packet data services.

Both core networks connect to the UMTS Terrestrial Radio Access Network.  The UTRAN has two options for its air interface operation.  One option is Time Division Duplex (TDD), which makes use of a single 5 MHz carrier for communication between the UE and the UTRAN.  The other option is Frequency Division Duplex (FDD), which provides full duplex operation using 5 MHz of spectrum in each direction to and from the UTRAN.  The following articles in the wireless communication systems section focus on the operations and design aspects of the UTRAN in FDD mode.

How do all of these components fit together?  Check out the image below.

[Figure: UMTS reference architecture]

UTRAN Components

The UTRAN consists of one or more Radio Network Subsystems (RNS).  An RNS consists of one Radio Network Controller (RNC) and several Node Bs.  These are the two essential component types of the UTRAN.  Apart from them, the UTRAN requires Operation and Maintenance Centers (OMC) to perform Operation, Administration, and Maintenance (OA&M) functions on the Node Bs and RNCs.  Yes, I know the acronyms are getting a little out of hand, but it is essential to learn them if you want to speak the language!  Engineers speak to each other almost entirely in acronyms, and it is very annoying indeed.

  • Radio Network Controller: The Radio Network Controller is the master of the UTRAN.  It handles all aspects of radio resource management within the radio network subsystem.  UMTS chose this term instead of the GSM base station controller in order to stress the independence of the UTRAN from the Core Network (CN).  The RNC interfaces with core network components such as the Mobile Switching Center (MSC) and the Serving GPRS Support Node (SGSN) to route signaling and traffic to and from the User Equipment (UE).  The RNC also interfaces with other RNCs within the UTRAN to provide wide-area mobility (very important!).
  • Node B: A Node B is the radio transmission and reception unit within the UTRAN (remember from above that the UE is the cell phone you carry in your pocket).  It handles radio transmission and reception for multiple cells within a coverage area.  If you think about the area over which a single cell tower can transmit its signal with the proper quality of service (QoS), you can picture multiple cells sharing one coverage area when the towers are close together.  The Node B implements CDMA-specific functionality such as encoding, interleaving, spreading, scrambling, and modulation.  Node Bs are what used to be known as Base Transceiver Stations (BTS) in second-generation systems.
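To make the spreading step listed above a bit more concrete, here is a minimal Python sketch of CDMA spreading and despreading.  The length-4 code, the function names, and the 0/1-to-±1 mapping are illustrative assumptions for this post, not the actual 3GPP channelization code tables or chip rates:

```python
def spread(bits, code):
    # Map each bit to a +1/-1 symbol, then multiply it by every chip of
    # the spreading code (spreading factor = len(code)).
    chips = []
    for b in bits:
        symbol = 1 if b == 0 else -1
        chips.extend(symbol * c for c in code)
    return chips

def despread(chips, code):
    # Correlate each spreading-factor-sized block against the same code;
    # the sign of the correlation recovers the transmitted bit.
    sf = len(code)
    bits = []
    for i in range(0, len(chips), sf):
        corr = sum(ch * c for ch, c in zip(chips[i:i + sf], code))
        bits.append(0 if corr > 0 else 1)
    return bits

code = [1, 1, -1, -1]         # a length-4 OVSF-style sequence (illustrative)
tx = spread([0, 1, 1], code)
print(despread(tx, code))     # [0, 1, 1] -- the round trip recovers the data
```

The key property is that each data bit is smeared across several chips, which is what lets many users share the same 5 MHz carrier under different codes.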

 

Digital Coding Techniques

Hey!  This article has a simple purpose: to teach you the basic functionality and terminology of modern UMTS-CDMA digital coding techniques.  One swift read of this information-packed kick in the face will leave you trembling at the knees with curiosity for more.  Well, maybe not.  But the first step in the UMTS-CDMA system applies error correction techniques so that errors at the receiver can be corrected.  Various techniques can be employed at the transmitter to protect against errors, and you have probably studied one or more of these in your days if you're an electrical or computer engineering student, professor, or professional.  One of these techniques is the use of Forward Error Correction (FEC) codes, which are applied to the data before it is transmitted via the physical layer.

Wireless is an inherently error-prone medium in which to operate our delicate signals, so many error correction techniques are employed.  In UMTS-CDMA systems, thanks to the large bandwidth available, a variety of coding techniques are used.  The following three error-correcting methods come to mind:

Convolutional Encoding

Convolutional encoding provides the ability to correct errors at the receiver.  Because some amount of errors can be tolerated and recovered through the code, lower transmission power can be used, even though lower power produces more raw errors.  The convolutional encoder encodes input data bits into output symbols.  At each clock cycle, a data bit enters the first register and the bit in the last register is pushed out.  Bits are tapped at various positions and XORed to produce the encoded output.  Convolutional coding is typically used for voice and low-data-rate applications.  Here are the main points to keep in mind:

  • Provides the ability to detect and correct errors at the receiver.
  • 10^(-3)  BER, typically used for voice and low data rates.
  • Uses history of bits to recover from errors.
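The shift-register-and-XOR idea above can be sketched in a few lines of Python.  The constraint length (3) and generator polynomials (7 and 5 octal) are the classic textbook pair, chosen here for brevity; the actual UMTS convolutional codes use constraint length 9:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    # Rate-1/2 convolutional encoder, constraint length 3.  Each input
    # bit produces two output symbols, each the XOR of tapped positions.
    state = 0                     # two-bit shift register, starts zeroed
    out = []
    for b in bits:
        reg = (b << 2) | state    # newest bit plus the register contents
        out.append(bin(reg & g1).count("1") % 2)  # taps 1 + D + D^2
        out.append(bin(reg & g2).count("1") % 2)  # taps 1 + D^2
        state = reg >> 1          # shift: the oldest bit falls out
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Note how each output symbol depends on the current bit and the two before it; it is exactly this history that a Viterbi decoder exploits at the receiver to recover from errors.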


Turbo Encoding

Turbo codes are a newer class of error correction codes used in digital communication systems.  Turbo codes have been shown to perform better for high-rate data services (which are what we crave) with stringent error-rate requirements on the order of a 10^(-6) Bit Error Rate (BER).  The turbo encoder consists of two constituent convolutional encoders, both of which encode the same data.  The first is fed the data in its original order.  The second encodes a permuted form of the input, and the permuting is accomplished by an interleaver, which will be discussed in detail in the following post(s).  Again, the main points:

  • 10^(-6)  BER, suitable for high data rates.
  • Uses convolutional encoders in parallel to increase reliability.
  • Increased delays but better error correction capabilities.
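The two-encoder structure can be sketched as follows.  The toy recursive systematic convolutional (RSC) encoder and the hard-coded permutation are illustrative assumptions only; the real UMTS turbo code uses two specific 8-state RSC encoders and a standardized internal interleaver:

```python
def rsc_parity(bits):
    # Toy recursive systematic convolutional (RSC) encoder:
    # feedback polynomial 1 + D + D^2, feedforward 1 + D^2.
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2        # input combined with the feedback taps
        parity.append(fb ^ s2)  # feedforward taps produce the parity bit
        s1, s2 = fb, s1         # shift the register
    return parity

def turbo_encode(bits, perm):
    # Rate-1/3 parallel concatenation: the systematic bits, one parity
    # stream from the data in order, and one from an interleaved copy.
    interleaved = [bits[i] for i in perm]
    p1 = rsc_parity(bits)
    p2 = rsc_parity(interleaved)
    return [x for trio in zip(bits, p1, p2) for x in trio]

symbols = turbo_encode([1, 0, 1, 1], perm=[2, 0, 3, 1])
print(len(symbols))  # 12 symbols: 3 per input bit, hence rate 1/3
```

The payoff of the parallel structure is that the decoder can iterate between the two constituent codes, exchanging soft information, which is where the large BER gains come from.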


Block Interleaving

Block interleaving protects data against fading and the bursty errors it causes (imagine a deep fade suddenly wiping out a long run of consecutive bits: that is a burst error).  This is accomplished by providing time diversity, where adjacent bits are separated in time before transmission over the air.  Interleaving is typically used together with FEC codes, since FEC codes on their own are not well suited to handling bursty errors.

  • Method to shuffle bits to prevent errors during deep fade.
  • Provides time diversity.
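The write-by-rows, read-by-columns scheme can be sketched in a few lines of Python (the 3x4 dimensions here are arbitrary; real systems pick interleaver sizes matched to the radio frame):

```python
def block_interleave(bits, rows, cols):
    # Write the bits row by row into a rows x cols matrix, then read
    # them out column by column.  Adjacent input bits end up `rows`
    # symbols apart on the air, so a deep fade hits bits from many
    # different code words instead of one long run in a single word.
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

data = list(range(12))                      # stand-in for coded bits
tx = block_interleave(data, rows=3, cols=4)
print(tx)  # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
# De-interleaving is the same operation with the dimensions swapped:
assert block_interleave(tx, rows=4, cols=3) == data
```

After de-interleaving at the receiver, a burst of consecutive channel errors is spread out into isolated errors that the FEC code can correct.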

All of these techniques will be described in detail, separately, in the following three articles.

The Evolution of 3G Wireless Technologies

Several different types of 3G wireless technologies are defined and in operation today, with several more on the way.  These are, of course, the successors to the 2G technologies that previously dominated the airwaves.

CDMA2000

CDMA2000 is the successor to IS-95 systems.  CDMA2000 defines two different options for 3G technology, which differ in the amount of frequency spectrum used.  Spreading Rate 1 (SR1) operates in a 1.25 MHz band and is known as a 1x system.  Another proposal, 1xEV-DO (1x Evolution for Data Optimized), is a data-only solution that enables a bandwidth of 2 Mbps without any mechanism for voice.  This is the data rate we are all familiar with as the 3G 2 Mbps connection.

The Universal Mobile Telecommunications System (UMTS)

The Universal Mobile Telecommunications System (UMTS) is a successor to GSM/GPRS systems.  There are also two options for the UMTS networks.  The Frequency Division Duplex (FDD) option uses spectrum bands which are paired together.  For example, two different 5 MHz bands are used for uplink and downlink.  The Time Division Duplex (TDD) option uses an unpaired band.  In other words, the same 5 MHz band is shared between uplink and downlink for TDD.

Universal Wireless Consortium for IS-136 systems

The UWC-136 (Universal Wireless Consortium for IS-136 systems) was originally considered to be the evolution for IS-136 systems.  However, the IS-136 system operators eventually decided to follow the path of CDMA2000 or UMTS.

Why did we need 3G Technology?

Back in the late 1990s, when most of the readers out there were still playing in the sandbox, the International Telecommunication Union (ITU) set the requirements for the next generation of wireless networks (which is why they are called Third Generation, or 3G).  One of the many requirements is to reach peak data rates of at least 2 Mbps.  This is most relevant to the downlink, since the majority of Internet traffic flows from the server to the client.

To meet this new high-speed requirement, the 2G wireless networks went through several evolutions before eventually being replaced.  The GSM evolution includes GPRS and EDGE, which provide packet data services and represent intermediate solutions until a UMTS Release 99 system is deployed.  1xEV-DO is one possible evolution path from 1xRTT, and HSDPA is a Release 5 feature of UMTS.

So how did UMTS Evolve?

UMTS is the network of choice these days.  Yes, UMTS is 3G, if you haven't caught that yet.  For those nerds out there who are curious, the evolution of UMTS has progressed over the years in the following fashion:

UMTS Release 99

  • 2 Mbps theoretical peak packet data rates
  • 384 kbps (practical)

UMTS Release 5

  • HSDPA (14 Mbps downlink theoretical)
  • IMS (IP Multimedia Subsystem for multimedia)
  • IP UTRAN (IP-based transport for scalability and lower cost)

UMTS Release 6

  • HSUPA (up to 5.76 Mbps uplink)
  • MBMS (Multimedia Broadcast Multicast Service)

UMTS Release 7

  • Multiple Input Multiple Output (MIMO) Antenna Systems


The Evolution of Wireless Technologies

Cellular systems have come a long way since their introduction in the 1980s.  The evolution progressed from First Generation (1G) systems to Second Generation (2G) systems.  Now, Third Generation (3G) systems are being deployed.

1G systems introduced the cellular concept, in which multiple antenna sites are used to serve an area.  The coverage of a single antenna site is called a cell.  A cell can serve a certain number of users, and higher-system capacity can be achieved by creating more cells with smaller coverage areas.  One distinguishing factor of 1G systems is that they make use of analog radio transmissions, so user information, such as voice, is never digitized.  As such, they are best suited for voice communications, since data communications can be cumbersome.

The migration of 1G analog technologies toward 2G technologies began in the late 1980s and early 1990s.  The primary motivation was increased system capacity.  This was achieved by using more efficient digital radio techniques that enabled the transmission of digitized, compressed speech signals.  These digital radio techniques also supported data services with data rates as high as 14,400 bits per second (14.4 kbps) in some systems.  2G data communication is typically done using circuit-switched techniques, which are not very efficient for sending packet data such as that sent on the Internet.  This inefficiency makes the use of wireless data more expensive for the end user.

The next step in the evolution is from 2G to 3G, which started in the year 2000.  The new key feature of 3G systems is the support of high-speed data services with data rates as high as 2 million bits per second (2 Mbps).  Data can be transferred using packet-switching techniques rather than the circuit-switching approach.  Therefore, it is more efficient and less expensive.  This opens up the possibility of cost-effective Internet access, access to corporate intranets, and a host of multimedia services.

If you want to read more about the evolution of wireless networks and WCDMA radio networks in general, please stay tuned for the next several editions where I will go into details.

Upcoming topics include, but are not limited to:

  • Physical layer functions
  • W-CDMA Channels
  • Basic call setups
  • Data session setups
  • Service reconfigurations
  • UTRAN mobility management
  • Inter-system procedures
  • RF design & analysis of UMTS radio networks
  • The evolution of UMTS
  • Architectures