Printable World Maps

Here are some blank maps of different countries and continents around the world.  I’ve found them useful on several different occasions.  Enjoy!

Blank Map of The World

Blank Map of The United States

Blank Map of South America

Blank Map of Russia

Blank Map of North America

Blank Map of Middle America

Blank Map of Japan

Blank Map of Europe

Blank Map of China

Blank Map of Australia

Blank Map of Asia

Blank Map of Africa

History of Computer Technology

So this post is a little out of the ordinary, but I figured this nice rant about the history of computer technology may be useful to some readers out there.

The History of Computer Technology

One of the first stored-program computers built in America was the IAS computer. It was developed at the Institute for Advanced Study at Princeton under the direction of John von Neumann between 1946 and 1950.

This obsolete machine’s architecture is presented not just for its historical significance, though that would be reason enough. It is presented because many (most) modern computers, including the HCS08, trace their ancestry to the IAS machine. Those that do are referred to as von Neumann (or Princeton) architecture machines. The IAS architecture is relatively simple and is a convenient starting point for assembly language programming. An assembly language program for the IAS (though programs for it were actually written in machine language) would not look very different from one for a modern load/store style machine. Learning the IAS architecture also gives a sense of the relative importance of concepts. Programs written in a carefully selected subset of the HCS08 instruction set would look almost identical to an IAS assembly program (were we to create an assembler for that machine). We will then see how the complete HCS08 instruction set improves on the IAS instruction set and addressing scheme. The general design of the IAS machine is given below; the HCS08 is almost identical to it.

[Figure: ias-machine-architecture]

As originally designed (to cut costs the machine was built with less memory), the memory consisted of 4096 40-bit words. The number of words in computer memory systems (at least the maximum memory) is always a power of two. This is because the specific word in memory to be accessed at any given time is specified by a set of binary-valued wires or lines called the address bus. The number of different locations in memory such a bus can specify is simply all the possible combinations of the two values possible on each of the lines. For an n-bit bus, that value is 2^n. The address bus size for the IAS memory was therefore 12, providing for 2^12 = 4096 unique memory locations.

The number of bits per word is up to the computer designer. Memory word sizes of 4, 8, 12, 16, 18, 24, 32, 36, 40, 48, 60, and 64 have all existed for some computer. Almost all computers made today use 8-bit memory words, although they all support data types that use several consecutive memory words together as a single value. Most desktop (or larger) computer systems support 8-bit, 16-bit, 32-bit, and 64-bit values. Some even support 128-bit values. Note that all of these sizes are multiples (in fact, power-of-2 multiples) of the basic 8-bit word size, so these values use 1, 2, 4, and 8 (some even 16) words, respectively.

Whatever the memory size, anyone with some programming experience taking a first look at the memory system in a computer design should think of the memory as an array with 2^n elements. The address bus carries the value of an n-bit unsigned index into the memory array. The data bus returns (or supplies, if a write instead of a read is issued) the m-bit memory word value of the specified element, where m is the word size (in bits) of the memory system.
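The memory-as-array picture can be sketched in modern code. Here is a minimal Python model; the function names are my own, but the sizes follow the IAS description above:

```python
# Model a memory system as an array indexed by an n-bit address bus.
ADDRESS_BITS = 12               # IAS address bus width
WORD_COUNT = 2 ** ADDRESS_BITS  # 4096 addressable words

memory = [0] * WORD_COUNT       # each element is one memory word

def read(address):
    """Return the word at the given address (a bus 'read' cycle)."""
    assert 0 <= address < WORD_COUNT, "address exceeds bus width"
    return memory[address]

def write(address, value):
    """Store a word at the given address (a bus 'write' cycle)."""
    assert 0 <= address < WORD_COUNT, "address exceeds bus width"
    memory[address] = value

write(0x0FF, 42)
print(read(0x0FF))  # -> 42
print(WORD_COUNT)   # -> 4096
```

The assertions play the role of the bus width: an address outside 0..2^n - 1 simply cannot be expressed on n lines.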

The REGS component of the general design represents the register set used by the CPU (Central Processor Unit). Registers are fast special-purpose memory separate from the main memory. They support the fetching and execution of instructions stored in memory. The registers in the IAS design are given below:

[Figure: ias-design — the IAS register set]

IAS Instructions and Data

The IAS computer embodied the concept of a stored-program computer. The main memory contained two main categories of information – instructions and data. It is the ability to place different sequences of instructions in the memory that makes the computer so useful – it is what makes it a general-purpose computing device rather than a special-purpose or single-function computer. Instead of being constrained to building computers that perform only predefined single tasks (or a small set of tasks), we can build a computer that does different tasks at different times. Such a computer can be reconfigured (reprogrammed) at any time to perform a new or different task. Often an important task is not even conceived of at the time the computer is designed. For the IAS machine, it was important that the programs and data shared the same memory. That way the instructions themselves could be processed as other data values. Self-modifying programs were the only scheme that the IAS designers thought of for performing loops that worked on different array elements each time through the loop. Between iterations of the loop, the instruction that accessed the array element was loaded into the AC and a value added to it to change the address portion of the instruction to the next array element to be processed. The modified instruction was then stored back in its memory location for execution the next time through the loop. Computer designers have since come up with several methods of accomplishing this type of operation without modifying the instructions themselves. Today, self-modifying code is considered taboo, as it is prone to logic errors that are hard to discover, cannot be executed out of Read-Only Memory (ROM), and has other deficiencies that are beyond the scope of this article.
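The self-modifying loop idea can be illustrated with a toy model. The encoding below (an opcode/address pair) is invented for illustration and is not the real IAS instruction format:

```python
# Toy illustration of the IAS self-modifying loop: an "instruction" is an
# (opcode, address) pair stored in memory, and the loop rewrites its own
# address field to step through an array.
memory = {
    0x100: 10, 0x101: 20, 0x102: 30,       # a data array
    0x200: ("ADD", 0x100),                 # the instruction to be modified
}

total = 0                                  # plays the role of the AC
for _ in range(3):
    opcode, address = memory[0x200]        # fetch the instruction
    total += memory[address]               # execute: add the operand
    memory[0x200] = (opcode, address + 1)  # modify the instruction itself

print(total)  # -> 60
```

Note that the loop body never changes; only the stored instruction's address field does, which is exactly why such code cannot live in ROM.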

IAS Instruction Sets

The IAS instructions consisted of 20 bits – a 12-bit address and an 8-bit operation code (opcode). The operation code part specified both an operation and a register. The instruction’s 12-bit address part usually specified an operand by giving the memory address of the memory word involved in the operation. Basic operations like load (copy a value from a memory word to the implied register), store (copy a value from the implied register to a memory word), add (add the value in a memory word to the value in the AC replacing the AC value with the sum) are referred to as memory-reference instructions and form the core instructions for both the IAS machine and the HCS08 microcontroller. When an instruction contains the actual full address of the operand, as the IAS memory-reference instructions did, the value is usually referred to as a direct address and those instructions are said to use the direct addressing mode. Unfortunately, Freescale’s terminology for the HCS08 refers to such instructions as extended addressing mode instructions, and uses direct addressing mode to mean something else. The IAS instruction set had only one other class of instructions which were called register-reference instructions. Such instructions were said to use the inherent addressing mode. The HCS08 also has many such instructions and also refers to them as inherent addressing mode instructions. It is almost an oxymoron to say these instructions have an addressing mode as they use no memory address at all. Such operations involved only the registers. An example of such an instruction for the IAS machine was to add the value in the MQ to the value in the AC replacing the value in the AC with the sum. At this point we should note that descriptions of computer instructions usually shorten the phrase “the value in the Register-name” to just “Register-name”. Thus we describe the above addition as adding MQ to AC. 
It is implied that we mean the value in or content of the registers, and that by adding “to” we are replacing the original value after it is used in the operation with the result of the operation. An example of a similar instruction in the HCS08 instruction set is the “mul” instruction. It means multiply X times A, where X and A are HCS08 registers, replacing the value in the X:A pair with the 16-bit product.

Normally, instructions are executed in the sequence they appear in memory, with the instruction in Memory[i+1] being executed immediately following the execution of the instruction in Memory[i]. The CPU register that keeps track of where the next instruction is located is the PC. Its name comes from the fact that it is an index into the Program and that its normal mode of being modified is to be incremented (so it Counts) after each instruction. On the IAS machine each instruction occupied only half a word, so the PC was incremented after every other instruction. There was an additional register bit that told whether the current instruction was in the first or last half of the word referenced by the PC. On the HCS08 instructions take from one to four memory words, so its PC may be incremented several times per instruction.

Branch (jump) Instruction

A different type of instruction from those that manipulate data is the branch (or jump) instruction. They are so named because they cause the CPU to deviate from the normal sequence of instructions. Instructions in this class change the value of the PC from its normal incrementing sequence by loading a new value into it in much the same way that the loading of other registers works. A big difference, however, is that branch and jump instructions usually come in both unconditional and conditional varieties. The unconditional ones always change the course of execution from the normal sequential order; the conditional ones, however, may change it or they may let the normal order continue. Which program path occurs usually depends on the outcome of some previous operation, such as subtracting one value from another. The IAS’s conditional branch would change the normal sequence if the result of the most recent arithmetic operation were positive; otherwise it would let the normal sequence continue. Conditional branching is the feature that gives computers the power to perform complex algorithms. Without it they would only be able to perform simple formula-based calculations! The HCS08 has a rich assortment of conditional branch instructions.
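A toy interpreter makes the conditional-branch idea concrete. The three opcodes and the program below are invented for illustration, not actual IAS or HCS08 instructions:

```python
# Minimal interpreter with an IAS-style "branch if last result positive".
def run(program):
    ac, pc = 0, 0                # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]
        pc += 1                  # normal sequencing: increment the PC
        if op == "LOAD":
            ac = arg
        elif op == "SUB":
            ac -= arg
        elif op == "BRP":        # conditional: branch only if AC is positive
            if ac > 0:
                pc = arg         # load a new value into the PC
    return ac

# Count down from 3 by subtracting 1 until AC is no longer positive.
program = [("LOAD", 3), ("SUB", 1), ("BRP", 1)]
print(run(program))  # -> 0
```

Removing the BRP opcode leaves a machine that can only run straight-line formulas, which is exactly the point made above.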

Arithmetic Instructions

Data values in the IAS machine consisted of signed binary fixed-point numbers 40 bits in size. The sign used one bit, so 39 bits remained to hold the magnitude of the value. The 39 bits are equivalent to about 12 decimal digits. As is true today, these fixed-point numbers could be considered to be integers or to contain fractional parts depending on where the binary point was considered to be. The hardware that performs calculations on the data works for a wide variety of conventions. Addition and subtraction require only that both operands are considered to have the binary point in the same position; there is no restriction on where it is considered to be. Multiply and divide don’t have even that restriction, but one must understand where the binary point of the results must be considered to be. Experimentation with decimal numbers with varying numbers of digits following the decimal point will allow one to understand the outcomes and formulate the rules for properly placing binary points for the various computer operations. It should be made perfectly clear that no bits are used to indicate where the binary point is located; its position exists only in the way the programmer intends to interpret the value represented by the bits in the data word.

Addition and subtraction instructions in the HCS08 support 8-bit unsigned and 2’s complement (signed) values. The multiply supports only 8-bit by 8-bit unsigned values, producing a 16-bit unsigned result. The divide instruction supports only a 16-bit by 8-bit unsigned divide that produces an 8-bit quotient and an 8-bit remainder. By shifting the remainder left 8 bits and dividing it by the original divisor, the quotient’s precision can be extended another 8 bits. This process can be repeated to any desired precision.
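The remainder-extension trick can be demonstrated with ordinary integer arithmetic; the dividend and divisor below are arbitrary:

```python
# Extending division precision one byte at a time: after an 8-bit quotient
# and remainder, shift the remainder left 8 bits and divide by the same
# divisor to obtain the next 8 quotient bits.
dividend, divisor = 10000, 77

q = dividend // divisor        # integer quotient: 129 (fits in 8 bits)
r = dividend % divisor         # remainder: 67
q_next = (r << 8) // divisor   # next 8 bits of precision: 222

print(q, q_next)  # -> 129 222
# Check: 129 + 222/256 = 129.867..., close to the exact 10000/77 = 129.870...
```

Repeating the shift-and-divide step on each new remainder extends the fractional part by another byte per iteration.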

Indirect Addressing

The first architectural improvement in computers that eliminated the need to have self-modifying code was indirect addressing. This addressing mode makes use of a pointer data type. A pointer is a word (or multiple words if a word is too small) whose value is used as an address to another word in the memory. To specify an address indirectly, the indirect addressing mode instruction contains the address of a pointer instead of the data. The CPU knows that for such instructions there must be two memory accesses. The first reads the pointer, and then its value is used as the address to access (read or write) the actual operand. To change the memory location accessed by such an instruction, the instruction itself need not be modified, only the value of the pointer variable. Few computers still use this form of indirect addressing, but rather its cousin, register addressing, or a variation of it. In register addressing, the pointer value must be loaded into an address register with one instruction, and then that address used with another instruction. Again, to modify the actual data location a register addressing mode instruction accesses, one need only modify the value in the address register. The HCS08 has two such address registers, H:X and SP, although the SP’s main use is as a stack pointer. A variety of addressing modes based on those two registers are available.
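Indirect addressing can be sketched as two successive memory accesses; a minimal Python illustration (addresses chosen arbitrarily):

```python
# Indirect addressing: the instruction holds the address of a pointer;
# the CPU first reads the pointer, then uses its value as the operand address.
memory = [0] * 16
memory[7] = 12           # location 7 is a pointer variable holding address 12
memory[12] = 99          # the actual data

def load_indirect(pointer_address):
    operand_address = memory[pointer_address]  # first memory access
    return memory[operand_address]             # second memory access

print(load_indirect(7))  # -> 99

memory[7] = 13           # re-aim the pointer; the "instruction" is unchanged
memory[13] = 55
print(load_indirect(7))  # -> 55
```

Changing only the pointer variable redirects the access, which is exactly why the instruction itself no longer needs to be modified.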

Modern desktop computers now have many registers that can be used as accumulators and/or address registers. It is not uncommon for a computer to have 32 32-bit general-purpose registers that can be used for either. The HCS08 has only a microprocessor as its computing core, so it has only a single 8-bit accumulator and a pair of 16-bit index (address) registers – no general-purpose registers. In order to program in machine language or assembly language, you must first learn the architecture of the computer you plan to program. The architecture of a computer specifies the registers in the CPU, the main memory size, and the instruction set (including not only the operations it can perform, but also all the addressing modes supported).

We are ready to study the architecture of the HCS08 microcontroller’s microprocessor core. The general design is given below. It is only a slight modification of the IAS one shown earlier. The only difference is that the Input/Output (I/O) devices are memory-mapped devices. That is, you access them as though they were normal memory, with load and store instructions within the main memory address space.

[Figure: HCS08-architecture]

There are big differences in the details of the design. The register set is different, the memory size and shape are different, and the instruction set contains many more instructions. However, the basic functioning of the machines is very much the same. They both have the same load/store nature where most ALU operations must be performed on values in the CPU’s registers. They are both single-address machines in that at most one of the operands of binary (two-operand) operations (such as add) can come from main memory. (Actually the HCS08 has a very limited set of “move” instructions that transfer data from one memory location to another without going through a register.) The main memory can contain both instructions and data at virtually any location (even intermixed – although we will try not to do that). The CPU can operate on instructions as though they were data.

Memory Comparisons

Let’s check out the differences. The main memory of the IAS held 4096 40-bit values. The memory space of the HCS08 provides for 65536 8-bit values. Thus the HCS08 memory system uses a 16-bit address bus (2^16 = 65536) and an 8-bit data bus. The address bus carries information in only one direction, to the memory system, but the data bus can carry information either to or from the memory (thus the double arrowheads on the data bus above). Not shown in the above diagram are control signals that tell the memory such things as whether a read or write is being performed. Each of the 65536 memory locations should be thought of as having an address and a content (or value). The address is a 16-bit (4-hexadecimal-digit) number, while the content is an 8-bit (2-hexadecimal-digit) number. When we look at the instruction format we will find it is convenient that the value of an address can be stored exactly in two words of memory. In truth, that ability is the cause of the memory space being the size that it is.

At the risk of being too remedial, I will emphasize a possibly painfully obvious, but extremely important, concept. We often show the content of a section of memory by listing the memory address in one column and its content in another. Consider the example below. The left pair are in binary and the right in hexadecimal.

[Figure: memory-addressing — addresses 020D-0211 and their contents 5B D8 74 06 31, shown as address/content columns in binary (left pair) and hexadecimal (right pair)]

First, note how much more compact the hexadecimal (we’ll shorten that name as well, and start calling it hex) notation is. From this point on we will use mostly hex notation for address and values.

The key point I want to make is that only the value part is stored anywhere. The address is a fleeting number generated one at a time as needed and then its value disappears. When a program is being executed the Program Counter (PC) provides that value for instruction fetch. After each use of the PC it is incremented so the previous address value does not exist anywhere any more. The new value will exist only while it is being used to read the next word of the program.
Because the addresses are consecutive over an area of interest, we often show only the lowest address with a set of data values. By convention, each successive content value is at an address one greater than the previous one. Thus the above set of memory values could be given as:
020D: 5B D8 74 06 31
A colon is often used to separate the address from a sequence of data values for added clarity. Here is my last restatement of the (hopefully blatantly obvious) address/data relationship – whenever the contents of locations 020D-0211 are given, the address part can never change, only the content can.
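The address-colon-data convention is easy to generate programmatically; a small Python sketch (the function name is my own):

```python
# Format memory contents in the "address: data data ..." convention above:
# a 4-hex-digit address, a colon, then 2-hex-digit contents.
def dump(start, values):
    return f"{start:04X}: " + " ".join(f"{v:02X}" for v in values)

print(dump(0x020D, [0x5B, 0xD8, 0x74, 0x06, 0x31]))
# -> 020D: 5B D8 74 06 31
```

Only the values are stored anywhere; the starting address is supplied by the caller, echoing the point that addresses are generated as needed, not kept in memory.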

Byte Conventions

The HCS08 instruction set supports one 16-bit load operation. As 16-bit values require 2 words of memory, and the instructions specify only a single address, a convention must be established as to which of the two memory addresses that contain the 16-bit value will be specified in the instruction. It must also be decided whether the most-significant byte (MSB) or the least-significant byte (LSB) should be stored in the lower-numbered address of the pair. For me, this last question has an obviously best answer. Place the MSB at lower-numbered addresses. This convention is known as big-endian memory organization. When consecutive address values are written horizontally, the MSB will be to the left. This follows the standard convention for decimal numbers, where more-significant digits appear to the left of less-significant digits. This convention, that more-significant bytes of a multi-byte value be stored at lower address values, will never be violated in this course. In every situation where the HCS08 hardware dictates how they will appear, that convention is adhered to. In every situation where the programmer must decide, I will require that this rule be followed as well. Some machine designs even specify that two-byte values must start on an even address. Such requirements are referred to as memory alignment restrictions, but the HCS08 has no such restriction. The HCS08 always uses the address of the MSB, which is the lower of the pair. We will adopt that same convention, using the lowest address as the “handle”, for any program-defined groups of bytes of any size that we create.
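Big-endian storage can be illustrated in a few lines of Python; the addresses chosen are arbitrary:

```python
# Big-endian storage of a 16-bit value: the MSB goes at the lower address.
value = 0x1234
msb, lsb = value.to_bytes(2, "big")          # -> 0x12, 0x34

memory = {0x0080: msb, 0x0081: lsb}          # MSB at the lower-numbered address
print(hex(memory[0x0080]), hex(memory[0x0081]))  # -> 0x12 0x34

# Reassembling uses the lower address as the "handle" for the pair.
restored = int.from_bytes(bytes([memory[0x0080], memory[0x0081]]), "big")
assert restored == value
```

Swapping `"big"` for `"little"` shows the opposite (little-endian) layout, which the HCS08 does not use.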

Registers to know

The registers you most need to know about are the ones in the Programmer’s Model as shown on page 86 of the MC9S08QG Data Sheet for HCS08 Microcontrollers, which is available in PDF form on the web and distributed in booklet form in class courtesy of Freescale. The A register is an 8-bit accumulator used primarily for 8-bit arithmetic and load/store style moving of data. The H:X and SP registers are used primarily for addressing. They are 16-bit registers, so they can point to any location in the memory space. The stack pointer (SP) exists primarily to support stack operations built into the hardware – we will learn all about these operations later. It is a 16-bit register, as the stack utilizes main memory, and a 16-bit value allows the stack data to be placed anywhere in the memory space. The program counter (PC) tells where in the memory the next instruction to be executed is located. It is a 16-bit register so that instructions can exist and be executed out of any location in the memory space. The condition code register (CCR) keeps machine state information – our main programming use of it will be to guide conditional branch execution along the two paths that such instructions can direct a program. It is an 8-bit register.

This was wordy and didn’t wrap everything up completely, but we have learned a lot of new information!  I hope you enjoyed this article about the history of computer technology and how it introduces you to the workings of a simple computer called a microcontroller.

Frequency Response for MOSFET/BJT

The frequency response of a BJT or MOSFET can be found using nearly the same process, with the only variations being a single resistor and the different naming conventions of the two devices.

Before we start let’s think a little bit about what we’re doing:
Our goal is going to be to find the pole(s) of the circuit.
Okay? What is a pole and why do I care where it is?
A pole is a frequency at which the gain of the device rolls off. (Remember that at the pole the gain is down 3 dB, and beyond it the gain falls with a slope of -20 dB/decade.)

We care because if the gain of a device rolls off at a certain frequency, then we won’t be able to amplify a signal above that frequency very well because the gain will be decreasing by 20dB/decade.
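A one-pole gain curve makes the -3 dB point and the 20 dB/decade rolloff concrete; here is a small Python sketch (the 1 MHz pole is an arbitrary assumed value, not from the circuit below):

```python
# Gain magnitude of a single-pole response |A| = A0 / sqrt(1 + (f/fp)^2),
# expressed in dB, showing the -3 dB point at the pole frequency and the
# 20 dB/decade rolloff well past it.
import math

def gain_db(f, fp, a0=1.0):
    return 20 * math.log10(a0 / math.sqrt(1 + (f / fp) ** 2))

fp = 1e6                                   # assumed 1 MHz pole
print(round(gain_db(fp, fp), 2))           # -> -3.01  (the -3 dB frequency)

drop = gain_db(10 * fp, fp) - gain_db(100 * fp, fp)
print(round(drop, 1))                      # -> 20.0 dB lost per decade
```

Below the pole the curve is nearly flat; the 20 dB/decade slope only settles in a decade or so above it.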

The procedure is nearly identical whether we are using a BJT or a MOSFET, but we will work each of them side by side just in case there might be any confusion, and we’ll follow these steps as we go through.  (We will also use some values that came from the output file when running a simulation of this circuit in Cadence or PSPICE.)

[Figure: mosfet-amplifier]

[Figure: bjt-amplifier]
1. Take a look at one of the circuits and see what you notice; how about the MOSFET.  This step is just to help build our understanding of the circuit.
– At a glance it looks just like another MOSFET circuit, right? Sure is, but let’s take a look at a few things just for kicks. Notice that it uses a bypass capacitor at the source, so we don’t have to worry about the source resistor when working at high frequency.  Since the capacitor bypasses the source to ground, you should notice that this is a common-source amplifier.  You could also note the values of the bias resistors R1 and R2 and start to think about what the gate voltage is and how that may affect the circuit.
2. We are talking about frequency response so that means we are probably going to want to draw the small signal equivalent circuit.
Remember that the coupling and bypass capacitors will act like short circuits at high frequencies, so we will ignore them, but we will have to account for some of the capacitance internal to the device.

Both devices have internal capacitances that are very similar.  As you can see from the small signal models for a MOSFET (above) and BJT (below), the only significant difference is that the BJT has an additional resistance Rpi between the Base and Emitter.

Most of the analysis we will do is based on the small signal model. Note that small signal models are not typically used in PSPICE, so this picture may look a bit odd, especially the controlled source, but for our purposes it is good to have a visual reference. To start we will point out what everything is.

[Figure: mosfet-small-signal-model]

Cgs is an internal capacitance between the gate and source. The value for Cgs was similar to one a PSPICE simulation may give.  CM1 and CM2 are Miller capacitances, which we will find values for later.  ro is a Norton equivalent resistance that makes the model more ideal.  And just pretend that G2 looks more like a voltage-controlled current source, with gain gm*Vgs (gm*Vpi for the BJT).

[Figure: bjt-small-signal-model]

For the BJT, CM1 and CM2 are both Miller capacitances, Cpi is similar to Cgs, and Rpi is the additional component used for BJTs but not MOSFETs. The other parts should look familiar from the other figures.

ON TO THE ANALYSIS!!!

We will find the device gain, overall gain, equivalent input and output capacitances, and the input and output poles. The process for both is essentially the same.

Device Gain: This is the gain from the controlled source to the output, so we are looking for Vout/Vgs (or Vout/Vpi for a BJT). We will ignore CM2 for this process. Notice the resistances ro, RD, and RL are in parallel. Vout is given by that equivalent resistance times the current through it, which is gm*Vgs from the controlled source. So the equation for device gain is,

   Vout/Vgs = -gm * (ro // RD // RL)   (MOSFET)

   Vout/Vpi = -gm * (ro // RC // RL)   (BJT)

Overall Gain: This will be the gain from the source (Vs) to the output (Vout). We already know what Vout/Vgs is, so if we find Vgs/Vs, we can multiply them to get Vout/Vs = (Vout/Vgs) * (Vgs/Vs).  Vgs/Vs is a simple voltage divider. Hopefully you can see this from the small signal model (remember that we are ignoring the capacitors for now, but they will play a part later).  The equations we will get for Vgs/Vs and the overall gain are:

   Vgs/Vs = (R1 // R2) / ((R1 // R2) + Rs)   (MOSFET)

Overall Gain:   Vout/Vs = -gm * (ro // RD // RL) * (R1 // R2) / ((R1 // R2) + Rs)   (MOSFET)

   Vpi/Vs = (R1 // R2 // Rpi) / ((R1 // R2 // Rpi) + Rs)   (BJT)

Overall Gain:   Vout/Vs = -gm * (ro // RC // RL) * (R1 // R2 // Rpi) / ((R1 // R2 // Rpi) + Rs)   (BJT)
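As a sanity check on the gain expressions, here is a Python sketch for the MOSFET case. The original simulation numbers are not available, so every component value below is an illustrative assumption:

```python
# Device gain and input divider for the common-source stage, with assumed
# values (gm, ro, RD, RL, R1, R2, Rs are NOT the original simulation values).
def par(*rs):
    """Parallel combination of resistances."""
    return 1 / sum(1 / r for r in rs)

gm, ro = 2e-3, 50e3            # assumed transconductance and output resistance
RD, RL = 10e3, 10e3            # assumed drain and load resistors
R1, R2, Rs = 200e3, 100e3, 1e3 # assumed bias divider and source resistance

device_gain = -gm * par(ro, RD, RL)          # Vout/Vgs
divider = par(R1, R2) / (par(R1, R2) + Rs)   # Vgs/Vs
overall_gain = device_gain * divider         # Vout/Vs

print(round(device_gain, 2), round(divider, 3))  # -> -9.09 0.985
```

The BJT version only changes names (RC for RD) and adds Rpi into the input divider's parallel combination.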

Now we will find the input and output poles.  For this we will need to look at the capacitances and use a formula to find the Miller capacitances, CM1 and CM2.  Any explanation of the Miller capacitance will have to wait for another post (or check out your electronics book, Wikipedia, Google, etc.), but we will need to use a couple of special equations.  Overall we will need to find the input resistance and input capacitance for the input pole, and the output resistance and output capacitance for the output pole.

Each pole will be at a frequency w = 1/RC, where the R and C are the equivalent R and C at that point, so to find the input pole, we will need to find the input resistance and the input capacitance.  These are found by looking into the input (the left side of the small signal model).  The voltage source will act like a short, so we see Rs in parallel with R1//R2 for the MOSFET (the BJT will have Rpi in parallel also).  The input capacitance will be Cgs in parallel with CM1 (the BJT will be the same, with Cpi).  The output resistance and capacitance are found the same way, only looking in from the output (the right side of the small signal model).

   CM1 = C * (1 - Av),   CM2 = C * (1 - 1/Av)   (MOSFET or BJT)

where C is the internal capacitance bridging the input and output of the device and Av is the device gain found above.

So the input pole will be:

   w_in = 1 / (R_in * C_in),   R_in = Rs // R1 // R2,   C_in = Cgs + CM1   (MOSFET)

   w_in = 1 / (R_in * C_in),   R_in = Rs // R1 // R2 // Rpi,   C_in = Cpi + CM1   (BJT)

and the output pole will be:

   w_out = 1 / (R_out * C_out),   R_out = ro // RD // RL,   C_out = CM2   (MOSFET)

   w_out = 1 / (R_out * C_out),   R_out = ro // RC // RL,   C_out = CM2   (BJT)
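The pole procedure can be sketched numerically for the MOSFET case. Since the simulation values did not survive, every number below is an illustrative assumption:

```python
# Input and output pole frequencies w = 1/(RC), computed with assumed
# small-signal values (none of these are the original simulation numbers).
import math

def par(*rs):
    """Parallel combination of resistances."""
    return 1 / sum(1 / r for r in rs)

Rs, R1, R2 = 1e3, 200e3, 100e3        # assumed source and bias resistances
ro, RD, RL = 50e3, 10e3, 10e3         # assumed output-side resistances
Cgs, CM1, CM2 = 5e-12, 30e-12, 4e-12  # assumed capacitances (farads)

R_in = par(Rs, R1, R2)                # source shorted: Rs // R1 // R2
C_in = Cgs + CM1                      # Cgs in parallel with CM1
R_out = par(ro, RD, RL)
C_out = CM2

f_in = 1 / (2 * math.pi * R_in * C_in)    # poles in Hz rather than rad/s
f_out = 1 / (2 * math.pi * R_out * C_out)
print(f"{f_in / 1e6:.1f} MHz, {f_out / 1e6:.1f} MHz")  # -> 4.6 MHz, 8.8 MHz
```

For the BJT, Rpi joins the input parallel combination and Cpi replaces Cgs; otherwise the computation is identical.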

To Do:

finish input & output R, input C, pole (& calculate answers)

BJT Circuit and Symbol Conventions

The following is an explanation of symbol conventions, voltage polarities, and current directions for npn and pnp BJTs. The goal is to help understand these characteristics, but not at the physical level of electrons and holes. The following figure shows practical operation of each BJT in the active mode.

pnp-and-npn-bjts

npn or pnp

When looking at a BJT, the easiest way to decide whether it is npn or pnp is to look at the emitter, which is always drawn as the arrow. If you remember that the arrow tail is always at a ‘p’ node and the tip is at an ‘n’ node, you can easily decide whether the BJT is npn or pnp. Remember that the collector and emitter are always either both ‘n’ or both ‘p’.

Determining Voltage Polarities

It is important to know in which direction the voltages will appear positive when we begin using nodal analysis to solve BJT circuits. Typically, there will be a voltage drop of 0.7 V between the base and emitter nodes that will be used in these calculations. Whether vBE or vEB is positive is decided by the type of BJT. The voltage polarities are flipped between pnp and npn BJTs. Obviously, the only difference in the symbols between the two types of BJTs is the arrow, which is the emitter. If we remember the tip of the arrow is at the lower voltage, we are able to deduce that vBE is the positive 0.7 V drop for an npn BJT and vEB for a pnp.

To be in the active mode, a BJT’s collector-emitter voltage must be above approximately 0.3 V. As above, this voltage polarity is reversed between npn and pnp BJTs. To determine whether vCE or vEC should be positive, we can use our deduction of the base-emitter voltage polarity. The voltages, in active mode, drop from collector to base to emitter in npn BJTs and from emitter to base to collector in pnp BJTs. So, if we have figured out that we are using an npn BJT because vBE was a positive 0.7 V, we know that the base voltage is higher than the emitter voltage. From here we know the collector must be higher than the base, and therefore higher than the emitter. We have just figured out that vCE must be greater than 0.3 V to be working in active mode. Using the same logic, vEC must be greater than 0.3 V for a pnp BJT to remain in active mode.
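The polarity rules above can be collected into a small check. This is a sketch using the 0.7 V and 0.3 V thresholds from the text; the function names and example node voltages are my own:

```python
# Active-mode checks from the polarity rules: for npn, vBE >= 0.7 and
# vCE >= 0.3; for pnp, both polarities flip (vEB >= 0.7 and vEC >= 0.3).
def npn_active(vb, vc, ve, vbe_on=0.7, vce_min=0.3):
    """True if the node voltages satisfy npn active-mode conditions."""
    return (vb - ve) >= vbe_on and (vc - ve) >= vce_min

def pnp_active(vb, vc, ve, veb_on=0.7, vec_min=0.3):
    """True if the node voltages satisfy pnp active-mode conditions."""
    return (ve - vb) >= veb_on and (ve - vc) >= vec_min

print(npn_active(vb=0.7, vc=5.0, ve=0.0))  # -> True
print(pnp_active(vb=4.3, vc=0.0, ve=5.0))  # -> True
print(npn_active(vb=0.7, vc=0.1, ve=0.0))  # -> False (vCE below 0.3 V)
```

Note how the pnp check is just the npn check with every voltage difference reversed, mirroring the flipped arrow.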

Current Flow Directions

Current directions are very simple to figure out. Just use the arrow. The collector and emitter currents always go in the direction of the arrow in active mode. The base current is a little trickier to figure out, but is also fairly obvious when using the arrow as a reference. As you can see in the above npn circuit, where the arrow is ‘pointing’ away from the base, the base current flows towards the BJT, in the direction the arrow is pointing. Conversely, in the pnp circuit, the base current flows away from the BJT, in the direction the arrow is pointing. There is a table of basic equations listed in my post titled “BJT Transistor Nodal Analysis” which would allow us to calculate each current using a different current, but using Kirchhoff’s Current Law, knowing two currents, we could calculate the third. For an npn BJT the base and collector currents flow into the device and the emitter current flows out; for a pnp BJT the emitter current flows in and the base and collector currents flow out. Note that both of these cases evaluate to iE = iB + iC.

BJT Transistor Nodal Analysis

Basic BJT Equations:

   iC = β * iB          iE = (β + 1) * iB
   iC = α * iE          iE = iB + iC

It is also important to know that the base-emitter junction can be modeled as a diode, with about a 0.7 V drop when it is conducting.

These equations are not very informative by themselves, so a few examples are demonstrated below. In both examples we will assume β is very large. What this means for our calculations is iB ≈ 0. Since α = β / (β + 1), we also assume that iC ≈ iE.
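The basic relations can be sketched in Python under the same assumptions; the helper name and the β value here are illustrative, not from the original post:

```python
# The basic BJT current relations: iC = beta*iB, iE = (beta+1)*iB,
# alpha = beta/(beta+1), and KCL iE = iB + iC.
def currents_from_base(ib, beta):
    ic = beta * ib          # collector current
    ie = (beta + 1) * ib    # emitter current
    return ic, ie

beta = 100                  # assumed, typical order of magnitude
alpha = beta / (beta + 1)   # very close to 1, which is why iC ~ iE
ic, ie = currents_from_base(10e-6, beta)

print(round(ic, 6), round(ie, 6))      # -> 0.001 0.00101
assert abs(ic + 10e-6 - ie) < 1e-12    # KCL: iE = iB + iC
```

With β = 100, α is about 0.99, so treating iC and iE as equal introduces only about a 1 % error.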

Finding missing voltages in a BJT circuit

Example 1. Solve for V3:

[Figure: bjt-voltages]

There are several ways to find V3. The more “difficult” way is to first find the emitter current, iE, and then use Ohm’s Law. Since we know iC ≈ iE, we can find the collector current and then solve for V3.

The easier way to find V3 is to recall that the emitter-base junction behaves like a diode. For this pnp BJT, vEB ≈ 0.7 V.

We know the voltage on one side of that junction, so V3 follows directly from the 0.7 V drop.

β may not always be a very large number. Had that been the case here, we would have started by finding the collector current (since its voltage drop and resistance are given) and, since iC ≈ iE no longer holds, we would use the formulas above to find the base and emitter currents.

Finding BJT Bias Voltages and Currents

Example 2. Solve for V2 and I1:

[Figure: bjt-bias-current]

Here we will want to start by finding I1. I1 also equals the emitter current, which approximately equals the collector current, and this collector current will allow us to find V2.

Notice that the emitter voltage was given.  If this had not been given, we would have been able to find it because the base voltage is known and vEB ≈ 0.7 V.

Similar to the previous example, if β were not a very large number, we would first find the emitter current and then use the equations in the table to find the other branch currents.

Note that both of these examples used pnp BJTs. The difference with an npn BJT is that the base-emitter voltage is reversed: you would use vBE ≈ 0.7 V.

General Rule of Thumb

Most of these problems are very simple to solve. Typically one node voltage is given and you will need to use Ohm’s Law to identify one of the currents. After one of the currents is found you will be able to solve for the other currents using the basic equations listed above. If one of the currents is not immediately obvious, the base-emitter voltage is likely needed. Most problems have you deduce the emitter voltage from the base, but it is easily possible to find the base voltage from the emitter voltage and then use that to find the base current.
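The rule of thumb can be walked through end to end in code. Every circuit value below is an assumption of mine (an npn stage with a base divider and emitter resistor), chosen only to exercise the steps above:

```python
# Worked rule-of-thumb example with assumed values: npn BJT, base held at
# 2.0 V, emitter resistor RE to ground, collector resistor RC to VCC.
VB = 2.0        # assumed base voltage (V)
RE = 1e3        # assumed emitter resistor (ohms)
RC = 2e3        # assumed collector resistor (ohms)
VCC = 10.0      # assumed supply (V)
beta = 100      # assumed

VE = VB - 0.7                 # deduce the emitter voltage from the base
IE = VE / RE                  # Ohm's Law gives one current...
IC = IE * beta / (beta + 1)   # ...and the basic equations give the rest
IB = IE - IC                  # KCL
VC = VCC - IC * RC            # collector node voltage

print(round(IE * 1e3, 2), round(VC, 2))  # -> 1.3 7.43  (mA and V)
```

As a final sanity check, VC - VE comes out well above 0.3 V here, so the assumed operating point really is in the active mode.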