Digital Logic Families

Families of logic gates

There are several different families of logic gates. Each family has its capabilities and limitations, its advantages and disadvantages. The following list describes the main logic families and their characteristics. You can follow the links to see the circuit construction of gates of each family.

·        Diode Logic (DL)

Diode logic gates use diodes to perform AND and OR logic functions. Diodes have the property of easily passing an electrical current in one direction, but not the other, so they can act as a logical switch. Diode logic gates are very simple and inexpensive, and can be used effectively in specific situations. However, they cannot be used extensively, as they tend to degrade digital signals rapidly. In addition, they cannot perform a NOT function, so their usefulness is quite limited.

·        Resistor-Transistor Logic (RTL)

Resistor-transistor logic gates use transistors to combine multiple input signals, and also amplify and invert the resulting combined signal. Often an additional transistor is included to re-invert the output signal. This combination provides clean output signals and either inversion or non-inversion as needed.

RTL gates are almost as simple as DL gates, and remain inexpensive. They are also handy because both normal and inverted signals are often available. However, they draw a significant amount of current from the power supply for each gate. Another limitation is that RTL gates cannot switch at the high speeds used by today's computers, although they are still useful in slower applications. Although they are not designed for linear operation, RTL integrated circuits are sometimes used as inexpensive small-signal amplifiers, or as interface devices between linear and digital circuits.

·        Diode-Transistor Logic (DTL)

By letting diodes perform the logical AND or OR function and then amplifying the result with a transistor, we can avoid some of the limitations of RTL. DTL takes diode logic gates and adds a transistor to the output, in order to provide logic inversion and to restore the signal to full logic levels.

·        Transistor-Transistor Logic (TTL)

The physical construction of integrated circuits made it more effective to replace all the input diodes in a DTL gate with a transistor, built with multiple emitters. The result is transistor-transistor logic, which became the standard logic circuit in most applications for a number of years. As the state of the art improved, TTL integrated circuits were adapted slightly to handle a wider range of requirements, but their basic functions remained the same. These devices comprise the 7400 family of digital ICs.

·        Emitter-Coupled Logic (ECL)

Also known as Current Mode Logic (CML), ECL gates are specifically designed to operate at extremely high speeds, by avoiding the “lag” inherent when transistors are allowed to become saturated. Because of this, however, these gates demand substantial amounts of electrical current to operate correctly.

·        CMOS Logic

One factor is common to all of the logic families we have listed above: they use significant amounts of electrical power. Many applications, especially portable, battery-powered ones, require that the use of power be absolutely minimized. To accomplish this, the CMOS (Complementary Metal-Oxide-Semiconductor) logic family was developed. This family uses enhancement-mode MOSFETs as its transistors, and is so designed that it requires almost no current to operate.

CMOS gates are, however, severely limited in their speed of operation. Nevertheless, they are highly useful and effective in a wide range of battery-powered applications.

Most logic families share a common characteristic: their inputs require a certain amount of current in order to operate correctly. CMOS gates work a bit differently, but still represent a capacitance that must be charged or discharged when the input changes state. The current required to drive any input must come from the output supplying the logic signal. Therefore, we need to know how much current an input requires, and how much current an output can reliably supply, in order to determine how many inputs may be connected to a single output.

However, making such calculations can be tedious, and can bog down logic circuit design. Therefore, we use a different technique. Rather than working constantly with actual currents, we determine the amount of current required to drive one standard input, and designate that as a standard load on any output. Now we can define the number of standard loads a given output can drive, and identify it that way. Unfortunately, some inputs for specialized circuits require more than the usual input current, and some gates, known as buffers, are deliberately designed to be able to drive more inputs than usual. For an easy way to define input current requirements and output drive capabilities, we define two new terms: fan-in and fan-out.

·        Fan-in

Fan-in is a term that defines the maximum number of digital inputs that a single logic gate can accept. Most transistor-transistor logic (TTL) gates have one or two inputs, although some have more than two. A typical logic gate has a fan-in of 1 or 2.

In some digital systems, it is necessary for a single TTL logic gate to drive several devices with fan-in numbers greater than 1. If the total number of inputs a transistor-transistor logic (TTL) device must drive is greater than 10, a device called a buffer can be used between the TTL gate output and the inputs of the devices it must drive. A logical inverter (also called a NOT gate) can serve this function in most digital circuits.

·        Fan-out

Fan-out is a term that defines the maximum number of digital inputs that the output of a single logic gate can feed. Most transistor-transistor logic (TTL) gates can feed up to 10 other digital gates or devices. Thus, a typical TTL gate has a fan-out of 10.

In some digital systems, it is necessary for a single TTL logic gate to drive more than 10 other gates or devices. When this is the case, a device called a buffer can be used between the TTL gate and the multiple devices it must drive. A buffer of this type has a fan-out of 25 to 30. A logical inverter (also called a NOT gate) can serve this function in most digital circuits.

Remember, fan-in and fan-out apply directly only within a given logic family. If for any reason you need to interface between two different logic families, be careful to note and meet the drive requirements and limitations of both families, within the interface circuitry.
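The standard-load arithmetic described above can be sketched numerically. A minimal sketch, assuming illustrative datasheet-style current figures (the function name and the specific values are ours, not from this text):

```python
# Fan-out: how many standard input loads one gate output can drive.
# It is limited in both logic states, so we take the minimum ratio.
# The current figures below are illustrative datasheet-style values
# (assumed for this sketch, not taken from the text).

def fan_out(i_oh_ua, i_ih_ua, i_ol_ua, i_il_ua):
    """Fan-out = min(HIGH-state ratio, LOW-state ratio).
    All currents in microamps to keep the arithmetic exact."""
    high = i_oh_ua // i_ih_ua   # output source current / input HIGH load
    low = i_ol_ua // i_il_ua    # output sink current / input LOW load
    return min(high, low)

# Standard-TTL-style numbers: output sources 400 uA and sinks 16000 uA;
# each input loads 40 uA (HIGH) and 1600 uA (LOW).
print(fan_out(400, 40, 16000, 1600))  # -> 10, the classic TTL fan-out
```

Taking the minimum of the two ratios matters: an output that can source enough HIGH-state current may still be limited by how much LOW-state current it can sink.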


Sequential Circuit

The digital circuits discussed so far have been combinational, where the outputs are entirely dependent on the current inputs. Although every digital system is likely to have combinational circuits, most systems encountered in practice also include storage elements, which require that the system be described in terms of sequential logic. A block diagram of a sequential circuit is shown below.

It consists of a combinational circuit to which storage elements are connected to form a feedback path. The storage elements are devices capable of storing binary information. The binary information stored in these elements at any given time defines the state of the sequential circuit at that time. The sequential circuit receives binary information from external inputs. These inputs, together with the present state of the storage elements, determine the binary value of the outputs. They also determine the condition for changing the state of the storage elements.

From the block diagram we can see that the outputs in a sequential circuit are a function not only of the inputs, but also of the present state of the storage elements. The next state of the storage elements is also a function of the external inputs and the present state. Thus, a sequential circuit is specified by a time sequence of inputs, outputs, and internal states.

Combinational circuits are often faster than sequential circuits, since combinational circuits do not require memory elements, whereas sequential circuits need memory devices to perform their operations in sequence. However, modern digital computers must have memories to function properly. Thus, sequential circuits are of prime importance in modern digital devices.

Types of Sequential circuits

 

Sequential circuits are classified into two main categories, depending on the timing of their signals.

1.      Synchronous or clocked sequential circuits 

2.      Asynchronous or un-clocked sequential circuits.

Synchronous or clocked sequential circuits 

A sequential circuit whose behavior can be defined from the knowledge of its signals at discrete instants of time is referred to as a synchronous sequential circuit. In such circuits, the memory elements are affected only at discrete instants of time. The synchronization is obtained by a timing device, called a system clock (or master-clock generator), which generates a periodic train of clock pulses. The outputs are affected only on the application of a clock pulse.

Asynchronous or un-clocked sequential circuits.

A sequential circuit whose behavior depends upon the sequence in which the input signals change is referred to as an asynchronous sequential circuit. The output is affected whenever there is a change in the inputs. The commonly used memory elements in such circuits are time-delay devices. These circuits may be regarded as combinational circuits with feedback. They are faster than synchronous sequential circuits. However, in an asynchronous circuit, events are allowed to occur without any synchronization; in such a case the system can become unstable, which results in design difficulties. To build a sequential circuit, a storage device is required to remember what has happened in the past. The basic unit of storage is the flip-flop.

 

FLIP – FLOPS

Also called a latch, the simplest kind of sequential circuit is a memory cell that has only two states: it can hold either a 0 or a 1. Such two-state sequential circuits are called flip-flops because they flip from one state to another and then flop back. A flip-flop is also known as a bistable multivibrator, latch or toggle.

Types of Flip-Flops

 

Flip-flops are of different types depending on how their inputs and clock pulses cause transitions between two states. There are four basic types:

1. S-R (Set-Reset)/R-S flip-flop

2. J-K flip-flop

3. D (Delay) flip-flop

4. T (Trigger or Toggle) flip-flop


5 Combinational Logic

 

Introduction

Logic circuits may be classified into two categories:

1.      Combinational logic circuits 

2.      Sequential logic circuits 

A combinational logic circuit contains logic gates only and does not contain storage elements. A sequential logic circuit contains storage elements in addition to logic gates. When logic gates are connected together to give a specified output for certain specified combinations of the input variables, with no storage involved, the resulting network is known as a combinational logic circuit. In a combinational logic circuit the output level is at all times dependent on the combination of input levels.

 

A combinational logic circuit with memory element(s) is called a sequential logic circuit. It consists of a combinational circuit to which memory elements are connected to form a feedback path. The memory elements are devices capable of storing binary information within them.

The next state of the memory element(s) is also dependent on external input and the present state. 

Applications

Logic gates find wide application in calculators and computers, digital measuring techniques, digital processing of communications, musical instruments, games and domestic appliances, in decision making for the automatic control of machines and various industrial processes, and in building more complex devices such as binary counters.

 

DESIGN OF COMBINATIONAL CIRCUITS

 

The design of combinational circuits starts from the verbal outline of the problem and ends in a logic circuit diagram. The procedure involves the following steps:

1.      State the given problem completely and exactly

2.      Interpret the problem, and determine the available input variables and required output variables.

3.      Assign a letter symbol to each input and output variable.

4.      Design the truth table, which defines the required relations between inputs and outputs.

5.      Obtain the simplified Boolean expression for each output.

6.      Draw the logic circuit diagram to implement the Boolean expression. 
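The six steps above can be illustrated with a small hypothetical example (the majority-circuit specification and the `majority` function name are ours, not from this text): design a circuit whose output is 1 when at least two of three inputs are 1. Steps 4-5 yield the truth table and the simplified SOP expression F = AB + BC + AC; step 6's circuit is three 2-input AND gates feeding a 3-input OR gate. A sketch that checks the result:

```python
from itertools import product

# Hypothetical worked example of the six-step design procedure:
# F(A,B,C) = 1 when at least two of the inputs are 1.
# Steps 4-5 give the simplified SOP expression F = AB + BC + AC.

def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)   # F = AB + BC + AC

# Step 4's truth table, checked against the verbal specification.
for a, b, c in product((0, 1), repeat=3):
    assert majority(a, b, c) == (1 if a + b + c >= 2 else 0)
print("F = AB + BC + AC meets the specification")
```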

 

ARITHMETIC CIRCUITS

 

One essential function of most computers and calculators is the performance of arithmetic operations. The logic gates designed so far can be used to perform arithmetic operations such as addition, subtraction, multiplication and division in electronic calculators and digital instruments. Since these circuits are electronic, they are very fast. Typically an addition operation takes less than 1 µs. We will now study some of the basic arithmetic circuits.

 

HALF – ADDER

 

A logic circuit used for the addition of two one-bit numbers is referred to as a half-adder. From the verbal explanation of a half-adder, we find that this circuit needs two binary inputs and two binary outputs. The input variables designate the augend and addend bits; the output variables produce the sum and carry. We assign the symbols A and B to the two inputs and S (for sum) and C (for carry) to the outputs. The truth table for the half-adder is shown below.

 

Inputs    Outputs
A   B     S   C
0   0     0   0
0   1     1   0
1   0     1   0
1   1     0   1

 

Here the C output is 1 only when both inputs are 1. The S output represents the least significant bit of the sum. The logic expression for the sum output can be obtained from the truth table. It can be written as a SOP expression by summing up the input combinations for which the sum is equal to 1. 

In the truth table, the sum output is 1 for A′B and AB′. Therefore, the expression for the sum is S = A′B + AB′ = A ⊕ B.

Similarly, the logic expression for carry output can be written as a SOP expression by summing up the input combinations for which the carry is equal to 1.  In the truth table, the carry is 1 for AB. Therefore

                  C = AB 

This expression for C cannot be simplified. The sum output corresponds to a logic Ex-OR function while the carry output corresponds to an AND function. So the half-adder circuit can be implemented using an Ex-OR gate and an AND gate as shown below.

 

This circuit is called a half-adder because it cannot accept a CARRY-IN from previous additions. This is the reason that the half-adder circuit can be used for binary addition of the lowest-order bits only. For higher-order columns we use a 3-input adder called a full-adder.
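The half-adder equations derived above (S = A ⊕ B, C = AB) can be checked exhaustively with a short sketch (the function name is ours):

```python
def half_adder(a, b):
    """Half-adder: S = A XOR B, C = A AND B, per the
    expressions derived above."""
    return a ^ b, a & b   # (S, C)

# Check all four rows of the half-adder truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert s == (a + b) % 2 and c == (a + b) // 2
print("half-adder matches the truth table")
```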

 

FULL – ADDER

 

A full-adder is a combinational logic circuit for adding three bits. As seen, a half-adder has only two inputs, and there is no provision to add a carry coming from the lower bit order when multi-bit addition is performed. For this purpose we use a logic circuit that can add three bits, the third bit being the carry from the lower column. This implies that we need a logic circuit with 3 inputs and 2 outputs. Such a circuit is called a full-adder. The truth table for the full-adder is shown below.

Inputs          Outputs
A   B   Cin     S   Cout
0   0   0       0   0
0   0   1       1   0
0   1   0       1   0
0   1   1       0   1
1   0   0       1   0
1   0   1       0   1
1   1   0       0   1
1   1   1       1   1

 

As shown, there are 8 possible input combinations for the three inputs, and for each case the S and Cout values are listed. From the truth table, the logic expression for S can be written by summing up the input combinations for which the sum output is 1:

    S = A′B′Cin + A′BCin′ + AB′Cin′ + ABCin

       = A′(B′Cin + BCin′) + A(B′Cin′ + BCin)

       = A′(B ⊕ Cin) + A(B ⊕ Cin)′

Let B ⊕ Cin = X

Now, S = A′X + AX′ = A ⊕ X

Replacing X in the above expression we get

    S = A ⊕ B ⊕ Cin

Similarly, the logic expression for Cout can be written as

 Cout = A′BCin + AB′Cin + ABCin′ + ABCin

          = BCin + ACin + AB    (using the map shown)

From the simplified expressions of S and Cout, the full-adder circuit can be implemented using two 2-input XOR gates, three 2-input AND gates and one 3-input OR gate, as shown below in fig. (a). The logic symbol is also shown as fig. (b).

The logic symbol has two inputs A and B plus a third input Cin, called the carry-in, and two outputs: the SUM and the carry-out, Cout, which goes to the next higher column. A full-adder can be made by using two half-adders and an OR gate as shown below.
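The two-half-adders-plus-OR construction just described can be sketched and verified against the full-adder truth table (function names are ours):

```python
def half_adder(a, b):
    return a ^ b, a & b                # (sum, carry)

def full_adder(a, b, cin):
    """Full-adder built from two half-adders and an OR gate,
    following the construction described above."""
    s1, c1 = half_adder(a, b)          # first half-adder: A + B
    s, c2 = half_adder(s1, cin)        # second: partial sum + Cin
    return s, c1 | c2                  # OR gate merges the two carries

# Check all 8 rows: S + 2*Cout must equal A + B + Cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
print("full-adder matches the truth table")
```

The OR gate suffices to merge the carries because the two half-adders can never both produce a carry for the same inputs.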

Binary Adders 

 

·         2-bit Parallel Adder

As we have seen, a single full-adder is capable of adding two one-bit numbers and an input CARRY. For the addition of binary numbers with more than one bit, additional full-adders are required. When two binary numbers are added, each column generates a sum and a carry (1 or 0) to the next higher-order adder column, as shown for a two-bit adder. Note that either a half-adder can be used for the LSB position, or the CARRY input of a full-adder is made zero, because there is no CARRY into the LSB bit position. In the figure, the LSBs of the two numbers are represented by X0 and Y0. The next higher-order bits are represented by X1 and Y1.

 

·         4-bit parallel adder 

This circuit shows a 4-bit binary parallel adder. The LSBs of the two numbers being added (the operands) go into the right-most full-adder; the higher-order bits are applied to successively higher-order adders, with the MSBs applied to the left-most full-adder. The CARRY output of each adder is connected to the CARRY input of the next higher-order adder. The full-adder used in each position has three inputs, an X bit, a Y bit and a CIN bit, and produces two outputs, a SUM bit and a CARRY OUT bit. For example, the right-most full-adder has inputs X0, Y0 and C0 and produces outputs S0 and C1. This arrangement is repeated for as many positions as there are in the augend and addend. Although this illustration is for 4-bit numbers, in modern computers the number of bits usually ranges from 8 to 64. The arrangement shown is called a parallel adder because all the bits of the augend and addend are present and are fed into the adder circuit simultaneously, so the additions in each position take place at the same time. This is different from how we add on paper, taking each position one at a time starting with the LSB. Parallel addition is therefore extremely fast.
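The carry chaining described above can be sketched as follows (function names and the LSB-first list convention are ours):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_adder(x_bits, y_bits, c0=0):
    """4-bit (or n-bit) parallel adder: the carry-out of each stage
    feeds the carry-in of the next higher stage. Bit 0 is the LSB."""
    carry, sum_bits = c0, []
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        sum_bits.append(s)
    return sum_bits, carry             # (sum bits LSB-first, final carry)

# 5 + 6 = 11: operands given LSB-first, 0101 + 0110 = 1011.
print(ripple_adder([1, 0, 1, 0], [0, 1, 1, 0]))  # -> ([1, 1, 0, 1], 0)
```

Note that this sketch models the ripple-carry structure, in which each stage logically waits for the previous carry; the text's point is that the operand bits are all presented simultaneously, unlike paper-and-pencil addition.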

 

·         The Half – Subtractor

 

A logic circuit that subtracts Y (subtrahend) from X (minuend), where X and Y are 1-bit numbers, is known as a half-subtractor.

The operation of this logic circuit is based on the rules of binary subtraction, reproduced in the truth table below.

 

 

Inputs    Outputs
X   Y     D   B
0   0     0   0
0   1     1   1
1   0     1   0
1   1     0   0

The difference output D has the same logic pattern as X XORed with Y (the same as in the case of the sum). Hence an Ex-OR gate can be used to give the difference of two bits. The borrow output B is 1 only for the 0 − 1 row, i.e. B = X′Y.
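The half-subtractor equations (D = X ⊕ Y, B = X′Y, read directly from the truth table above) can be checked with a short sketch (the function name is ours):

```python
def half_subtractor(x, y):
    """Computes X - Y for one-bit numbers: difference D = X XOR Y,
    borrow B = X'Y (a borrow is needed only for 0 - 1)."""
    return x ^ y, (1 - x) & y   # (D, B)

# Check the four rows of the truth table above.
assert half_subtractor(0, 0) == (0, 0)
assert half_subtractor(0, 1) == (1, 1)   # 0 - 1: difference 1, borrow 1
assert half_subtractor(1, 0) == (1, 0)
assert half_subtractor(1, 1) == (0, 0)
print("half-subtractor matches the truth table")
```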

 

·         4 – bit Adder – Subtractor

The 4-bit parallel binary adder/subtractor circuit shown below performs the operations of both addition and subtraction.

It has two 4-bit inputs, A3 A2 A1 A0 and B3 B2 B1 B0. The Sel (ADD/SUB) control line, connected to the C0 carry input of the LSB full-adder, is used to select between addition and subtraction. The Ex-OR gates are used as controlled inverters. When Sel = 0 the circuit is an adder, and when Sel = 1 the circuit becomes a subtractor. Each Ex-OR gate receives input Sel and one of the B inputs. When Sel = 0, we have B ⊕ 0 = B, so the full-adders receive the value of B, the input carry is 0, and the circuit performs A plus B. When Sel = 1, we have B ⊕ 1 = B′ and C0 = 1. The B inputs are all complemented and 1 is added through the input carry, so the circuit performs A plus the 2's complement of B. (The exclusive-OR gate with output V is for detecting an overflow.)
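The controlled-inverter arrangement can be sketched directly: XOR each B bit with Sel and feed Sel into the initial carry (function names and LSB-first convention are ours):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    return s, (a & b) | (a & cin) | (b & cin)

def add_sub(a_bits, b_bits, sel):
    """4-bit adder/subtractor, LSB first. Each B bit passes through
    an Ex-OR gate with Sel (a controlled inverter); Sel also drives
    the initial carry C0. Sel=0: A plus B. Sel=1: A plus B' plus 1,
    i.e. A minus B in 2's complement."""
    carry, out = sel, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b ^ sel, carry)
        out.append(s)
    return out, carry

# 7 - 3 = 4: A = 0111, B = 0011 (shown LSB-first), Sel = 1.
print(add_sub([1, 1, 1, 0], [1, 1, 0, 0], 1))  # -> ([0, 0, 1, 0], 1)
```

For a subtraction with a non-negative result, the final carry-out is 1 and is discarded, as usual in 2's-complement arithmetic.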

 

MULTIPLEXERS

 

A multiplexer or data selector is a logic circuit that accepts several data inputs and allows only one of them at a time to get through to the output. In other words, the multiplexer works like the input selector of a home music system: only one input is selected at a time, and the selected input is transmitted to the single output. While on the music system the selection of the input is made manually, the multiplexer chooses its input based on a binary number, the address input. The basic multiplexer, then, has several data-input lines and a single output line. It also has data-select inputs, or control signals, that permit digital data on any of the inputs to be switched to the output line. A general multiplexer (MUX) with n input signals, m data-select inputs (control signals) and one output signal is shown in the figure.

The multiplexer (MUX) acts like a digitally controlled multi-position switch, where the digital code applied to the SELECT inputs determines which data input will be switched to the output. The functional diagram of a 2-to-1 multiplexer is shown below:

 

Here the input signal I0 is obtained at the output when the select signal is equal to 0, and the signal I1 appears at the output when the select signal is 1. So the MUX in this case selects one of the 2 input data sources and transmits the selected data to a single output channel. This is called multiplexing. Now let us consider a multiplexer with 4 input signals, 2 control signals and one output signal.
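The 2-to-1 selection just described reduces to the gate-level expression Z = Sel′·I0 + Sel·I1, which a minimal sketch can verify (the function name is ours):

```python
def mux2(i0, i1, sel):
    """2-to-1 multiplexer at gate level: Z = Sel'.I0 + Sel.I1,
    built from one inverter, two AND gates and an OR gate."""
    return ((1 - sel) & i0) | (sel & i1)

assert mux2(i0=1, i1=0, sel=0) == 1   # Sel = 0 passes I0
assert mux2(i0=1, i1=0, sel=1) == 0   # Sel = 1 passes I1
print("2-to-1 MUX behaves as described")
```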

·         8 – to – 1 Multiplexer   

It has 8 inputs and one output. In the logic symbol there are three select inputs, S0, S1 and S2, which select one of the eight inputs. The truth table of the 8-to-1 multiplexer is shown below.

 

 

 

Select Lines     Output
S2   S1   S0     Z
0    0    0      D0
0    0    1      D1
0    1    0      D2
0    1    1      D3
1    0    0      D4
1    0    1      D5
1    1    0      D6
1    1    1      D7

 

In this truth table the output for each input combination is not specified as 1's and 0's. The multiplexer, being a data switch, does not generate data of its own; it simply passes the external input data from the selected input to the output. The figure below shows a logic diagram of an 8-to-1 multiplexer.
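The truth table above can be read as "the three select lines form a 3-bit address". A minimal sketch of that behavior (the function name is ours):

```python
def mux8(d, s2, s1, s0):
    """8-to-1 multiplexer: the three select lines form a 3-bit
    binary address choosing which data input D0..D7 reaches Z."""
    return d[(s2 << 2) | (s1 << 1) | s0]

data = ["D0", "D1", "D2", "D3", "D4", "D5", "D6", "D7"]
print(mux8(data, 1, 0, 1))  # -> D5, matching truth table row 101
```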

 

·         Applications of Multiplexers

 

Multiplexer circuits find numerous applications in digital systems. Some of the fields where multiplexing finds immense use are data selection, data routing, operation sequencing, parallel-to-serial conversion, waveform generation and logic function generation.

 

 

DEMULTIPLEXERS

 

The word “de-multiplex” means one into many. A multiplexer takes several inputs and transmits one of them to the output, as stated before. A de-multiplexer performs the reverse operation: it takes a single input and distributes it over several outputs. By applying control signals, the input signal can be steered to one of the output lines. A de-multiplexer with one input signal, m control signals and n output signals is shown below. The de-multiplexer is also called a distributor or a serial-to-parallel converter.

 

 

 

 

·         1 – to – 4 De-multiplexer

Let us consider a de-multiplexer with 1 input signal, 2 control signals and 4 output signals, to explain how the input signal is transmitted to any one of the outputs by the control signals. This type of de-multiplexer is called a one-to-four de-multiplexer because it has 1 input and 4 outputs.
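The steering just described can be sketched in a few lines (the function name is ours):

```python
def demux1to4(d, s1, s0):
    """1-to-4 de-multiplexer: the single data input D is steered to
    the one output line addressed by the two control signals."""
    outputs = [0, 0, 0, 0]
    outputs[(s1 << 1) | s0] = d   # only the selected line carries D
    return outputs

print(demux1to4(1, s1=1, s0=0))  # -> [0, 0, 1, 0]: line 2 receives D
```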

 

DECODERS

 

A decoder can be viewed as a de-multiplexer without a data input: it decodes the coded input signal into an active output signal. A decoder is similar to a de-multiplexer, with one exception: there is no data input, and the inputs of the decoder are the control signals. Most digital systems require the decoding of data. Decoding is necessary in applications such as data de-multiplexing, digital display, digital-to-analog converters and memory addressing. A decoder is a logic circuit that converts an n-bit binary input code into up to 2^n output lines, such that each output line is activated for only one of the possible combinations of inputs. In a decoder, the number of outputs is greater than the number of inputs. Note that if the numbers of inputs and outputs in a digital system are equal, the circuit is called a converter.

e.g. 2-to-4 decoder, 4-to-16 decoder
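The one-active-line behavior of a 2-to-4 decoder can be sketched as follows (the function name and active-high, no-enable assumptions are ours):

```python
def decoder2to4(a1, a0):
    """2-to-4 decoder: exactly one of the 2**2 active-high output
    lines goes to 1 for each input combination (no enable input)."""
    index = (a1 << 1) | a0
    return [1 if i == index else 0 for i in range(4)]

print(decoder2to4(1, 1))  # -> [0, 0, 0, 1]: only line 3 is active
```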

 

 

ENCODERS

An encoder is a digital circuit that performs the inverse operation of a decoder; the opposite of the decoding process is called encoding. An encoder converts an active input signal into a coded output signal.

e.g. Decimal-to-BCD encoder, Octal-to-binary encoder

 

 

 

 

Simplification of Boolean Functions

4.1 Introduction

You already know one method for simplifying Boolean expressions: Boolean algebra.

There are two serious limitations to using Boolean algebra though:

• There is no algorithm you can follow that is guaranteed to lead to the simplest form of the expression

• Given any intermediate result there is no way to tell if it is in fact the simplest form of the expression

In this tutorial you will learn an algorithmic procedure for finding the simplest two-level form of a Boolean expression.

This method doesn't give you the simplest form in any number of levels. However, the simplest two-level form is usually what we want, because as you add more levels the expression may get smaller but the propagation delay will also increase. Propagation delay determines the speed of the circuit.

4.2 Motivation

In essence, the method we are about to discuss is a pictorial way to apply the distributive law to factor out common subexpressions. For example:

F = AB + AB’
F = A(B + B’)
F = A

Or,

F = ABC + ABC’ + AB’C + AB’C’
F = A (BC + BC’ + B’C + B’C’)
F = A(1)
F = A

Intuitively, if you can find two terms that are equivalent except that one variable is the complement of the matching variable in the other term, you can factor out this variable. The second example above shows that it works for multiple-variable subsets, as long as their sizes are powers of two.
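Both factorings above can be confirmed by brute force over all input combinations, a useful sanity check when the algebraic steps get long:

```python
from itertools import product

# F = AB + AB' reduces to A: check every input combination.
for a, b in product((0, 1), repeat=2):
    assert ((a & b) | (a & (1 - b))) == a

# F = ABC + ABC' + AB'C + AB'C' also reduces to A.
for a, b, c in product((0, 1), repeat=3):
    f = ((a & b & c) | (a & b & (1 - c)) |
         (a & (1 - b) & c) | (a & (1 - b) & (1 - c)))
    assert f == a
print("both factorings verified exhaustively")
```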

The two examples above make it look easy to remove subexpressions with Boolean algebra. For small examples it is easy. For larger expressions with 3 or more variables it becomes much harder.

In this lecture we will study a method called Karnaugh maps that you can use to quickly find the minimum standard form of a Boolean function of 4 or fewer variables. Later we will study an algorithmic method that works for functions of any number of variables. The Karnaugh map, or K-map, method is fast and best carried out by a human. The algorithm we will study later is tedious for humans but is easy to program in any high-level programming language.

4.3. K-Maps

A K-map shows the value of a function for every combination of input values, just like a truth table, but a K-map spatially arranges the values so it is easy to find common terms that can be factored out. The image below shows the form of 2-, 3- and 4-variable K-maps.

The numbers inside of the boxes above refer to the corresponding row in the truth table for a function of the same number of variables. (You may want to use the truth table to the right to compare K-map entries and truth table entries.)

Row   A B C   Minterm   F
0     0 0 0   A'B'C'    0
1     0 0 1   A'B'C     0
2     0 1 0   A'BC'     1
3     0 1 1   A'BC      0
4     1 0 0   AB'C'     1
5     1 0 1   AB'C      1
6     1 1 0   ABC'      1
7     1 1 1   ABC       1

 

Notice how the columns are numbered on the 3-variable K-map, and how both the rows and columns are numbered on the 4-variable K-map (00 01 11 10). This numbering guarantees that adjacent squares differ in only one variable. For example, in the 3-variable K-map above, the square with a 2 in it represents the minterm A'BC' and the square with a 6 in it represents the minterm ABC'. If both of these terms appear as minterms in an expression we can factor out the A:

A’BC’ + ABC’ = (A’ + A)BC’ = BC’

 

Also notice that entries at the ends of the K-map differ in only one variable from entries on the other side of the K-map. For example,

A’B’C’ & AB’C’

We start with a general example of how to use a K-map to simplify a function and then describe a more precise procedure.

Example 1. Use a K-map to simplify the following Boolean function:

F(A,B,C) = ∑m(2,4,5,6,7)

Since this is a function of 3 variables, we first draw the outline for a 3-variable K-map. Since the function is given in terms of minterms, we write the minterm number inside the box that represents that minterm. Next we put a 1 in each square where the function has the value 1. Finally, we circle groups of 1's so that all 1's are circled. We circle only groups whose sizes are powers of 2, and try to make the circles as large as possible.
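For Example 1, the circled groups give the cover F = A + BC′ (an assumption of this sketch, since the worked K-map figure is not reproduced here). A brute-force check confirms it reproduces exactly the given minterm list:

```python
# The circled groups for Example 1 give the cover F = A + BC'
# (assumed here, as the worked figure is missing). The check below
# confirms it reproduces exactly the minterm list m(2,4,5,6,7).
minterms = {2, 4, 5, 6, 7}
for m in range(8):
    a, b, c = (m >> 2) & 1, (m >> 1) & 1, m & 1   # m = 4A + 2B + C
    f = a | (b & (1 - c))                          # F = A + BC'
    assert f == (1 if m in minterms else 0)
print("A + BC' covers exactly m(2,4,5,6,7)")
```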

The example above gives the general idea. What follows is a more formal description of how to proceed.

The first step to understanding the formal procedure is understanding some terms.

Implicant – a single minterm or group of minterms that can be combined together on the K-map. For example, the implicants in the example on the right are A'B'C', A'BC', A'BC, ABC, A'C', A'B, and BC.

Prime Implicant – an implicant that cannot be combined with another one to remove a literal. For example: A'C', A'B, BC.

Each product term in the minimum sum of products expression is a prime implicant.

Essential Prime Implicant – a prime implicant that includes a minterm not covered by any other prime implicant. For example: A'C' and BC.

Note that with the example above, if you're not careful you could end up with an expression with too many prime implicants. For example, if you choose A'B first, you would then have to choose A'C' and BC to cover every minterm in the on-set. This observation is the motivation for the formal K-map procedure that follows.

To derive the minimized expression from a K-map:

1. Draw the K-map and put a 1 in each square that corresponds to a minterm of the function. You can draw the K-map from a truth table, Boolean expression, etc.

2. Find the prime implicants. To do this, find the largest adjacent groups of 1's. Groups must be rectangular and the number of 1's in a group must be a power of 2.

3. Find the essential prime implicants. An essential prime implicant is a prime implicant that includes a 1 not covered by any other prime implicant.

4. Write down the minimized expression. First write down all essential prime implicants. If there are any 1's not covered by essential prime implicants, carefully select prime implicants to cover the remaining 1's. Note, you may have to try several selections to find the minimal form of the expression.

Example 2. Use a K-map to simplify the following Boolean function:

F(A,B,C,D) = ∏M(0,1,2,4,9,11,15)

F(A,B,C,D) = ∑m(3,5,6,7,8,10,12,13,14)

 

·        Don’t Cares

Sometimes the output of a function for a specific input value is a “don't care”. For example, the output of a BCD increment-by-one circuit given the input 1010 would be XXXX, or don't care, because the input (1010)2 = (10)10 would never be seen. Don't cares act like jokers in a deck of playing cards: we can make them whatever we want. When you are simplifying a function using a K-map, these don't-care values help you to form larger groups of 1's, which give you simpler prime implicants.

Example 3. Use a K-map to simplify the following Boolean function:

F(A,B,C,D) = ∑m(0,4,5,8,10,15) + d(2,7,9,13)  

F(A,B,C,D) = A′BC′ + BD + B′D′
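A result like this is easy to check against the specification: the cover must be 1 on every on-set minterm and 0 on every minterm outside both the on-set and the don't-care set (don't-care minterms may come out either way). A sketch for Example 3:

```python
# Check F = A'BC' + BD + B'D' against Example 3's specification:
# 1 on every on-set minterm, 0 on every minterm outside both
# the on-set and the don't-care set.
on_set = {0, 4, 5, 8, 10, 15}
dont_care = {2, 7, 9, 13}
for m in range(16):
    a, b = (m >> 3) & 1, (m >> 2) & 1     # m = 8A + 4B + 2C + D
    c, d = (m >> 1) & 1, m & 1
    f = ((1 - a) & b & (1 - c)) | (b & d) | ((1 - b) & (1 - d))
    if m in on_set:
        assert f == 1
    elif m not in dont_care:
        assert f == 0                      # don't-cares may be either
print("cover is consistent with the on-set and don't-cares")
```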

 

 

 

Boolean algebra & logic gates

3.1 Boolean Algebra

One of the primary requirements when dealing with digital circuits is to find ways to make them as simple as possible. This constantly requires that complex logical expressions be reduced to simpler expressions that nevertheless produce the same results under all possible conditions. The simpler expression can then be implemented with a smaller, simpler circuit, which in turn saves the price of the unnecessary gates, reduces the number of gates needed, and reduces the power and the amount of space required by those gates.

One tool to reduce logical expressions is the mathematics of logical expressions, introduced by George Boole in 1854 and known today as Boolean Algebra. The rules of Boolean Algebra are simple and straight-forward, and can be applied to any logical expression. The resulting reduced expression can then be readily tested with a Truth Table, to verify that the reduction was valid.

 

Boolean algebra is an algebraic structure defined on a set of elements B, together with two binary operators (+, .), provided the following postulates are satisfied.

1. Closure with respect to the operator ‘+’ and closure with respect to the operator ‘.’

2. An identity element with respect to ‘+’ designated by 0: X+0= 0+X=X

An identity element with respect to ‘.’ designated by 1: X.1= 1.X=X

3. Commutative with respect to ‘+’: X+Y=Y+X

Commutative with respect to ‘.’: X.Y=Y.X

4. ‘.’ distributive over ‘+’: X.(Y+Z)=X.Y+X.Z

‘+’ distributive over ‘.’: X+(Y.Z)=(X+Y).(X+Z)

5. For every element x belonging to B, there exists an element x′, called the complement of x, such that x.x′=0 and x+x′=1

6. There exist at least two elements x, y belonging to B such that x≠y

 

The two valued Boolean algebra is defined on a set B={0,1} with two binary operators ‘+’ and ‘.’ 

x | y | x+y
0 | 0 |  0
0 | 1 |  1
1 | 0 |  1
1 | 1 |  1

 

x | y | x.y
0 | 0 |  0
0 | 1 |  0
1 | 0 |  0
1 | 1 |  1

 

x | x’
0 | 1
1 | 0

 

Closure. From the tables, the result of each operation is either 0 or 1, and 0 and 1 belong to B.

Identity. From the truth tables we see that 0 is the identity element for ‘+’ and 1 is the identity element for ‘.’

Commutative. The commutative laws are obvious from the symmetry of the binary operator tables.

Distributive Law. x.(y+z) = x.y + x.z

x | y | z | y+z | x.(y+z) | x.y | x.z | x.y+x.z
0 | 0 | 0 |  0  |    0    |  0  |  0  |    0
0 | 0 | 1 |  1  |    0    |  0  |  0  |    0
0 | 1 | 0 |  1  |    0    |  0  |  0  |    0
0 | 1 | 1 |  1  |    0    |  0  |  0  |    0
1 | 0 | 0 |  0  |    0    |  0  |  0  |    0
1 | 0 | 1 |  1  |    1    |  0  |  1  |    1
1 | 1 | 0 |  1  |    1    |  1  |  0  |    1
1 | 1 | 1 |  1  |    1    |  1  |  1  |    1

The distributivity of ‘+’ over ‘.’ can be shown with a similar truth table.

From the complement table we can see that x + x′ = 1 (i.e. 0+1 = 1 and 1+0 = 1) and x.x′ = 0 (i.e. 0.1 = 0 and 1.0 = 0)

Principle of duality of Boolean algebra

The principle of duality states that every algebraic expression which can be deduced from the postulates of Boolean algebra remains valid if the operators and the identity elements are interchanged. This means that the dual of an expression is obtained by changing every AND (.) to OR (+), every OR (+) to AND (.), and all 1’s to 0’s and vice-versa.

 

Ø  Laws of Boolean Algebra

Postulate 2 : 

(a) 0 + A = A           (b) 1.A = A

Postulate 5 : 

(a) A + A′ =1            (b) A. A′=0

Theorem 1: Identity Law

(a) A + A = A           (b) A . A = A 

Theorem 2:

(a) 1 + A = 1            (b) 0. A = 0

Theorem 3: Involution

(A′)′ = A

Postulate 3 : Commutative Law 

(a) A + B = B + A               (b) A. B = B. A 

Theorem 4: Associative Law 

(a) (A + B) + C = A + (B + C)              (b) (A. B). C = A. (B. C)  

Postulate 4: Distributive Law 

(a) A. (B + C) = A. B + A. C             (b) A + (B. C) = (A + B). (A + C) 

Theorem 5 : De Morgan’s Theorem 

(a)   (A+B)′= A′.B′              (b)  (A.B)′= A′+ B′

Theorem 6 : Absorption 

(a) A + A. B = A             (b) A. (A + B) = A 

Using the laws given above, complicated expressions can be simplified.
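Because the two-valued algebra has only two elements, every law above can be verified exhaustively. A small illustrative sketch in Python, using `&`, `|`, and `^ 1` for AND, OR, and complement on 0/1 values:

```python
# Exhaustive check of several of the listed identities over B = {0, 1}.
B = (0, 1)
for A in B:
    for Bv in B:
        for C in B:
            assert A | (A & Bv) == A                   # absorption (a)
            assert A & (A | Bv) == A                   # absorption (b)
            assert (A | Bv) ^ 1 == (A ^ 1) & (Bv ^ 1)  # De Morgan (a)
            assert (A & Bv) ^ 1 == (A ^ 1) | (Bv ^ 1)  # De Morgan (b)
            assert A | (Bv & C) == (A | Bv) & (A | C)  # '+' over '.'
            assert A & (Bv | C) == (A & Bv) | (A & C)  # '.' over '+'
print("All listed laws hold for B = {0, 1}")
```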

 

3.2 Binary Logic

 

Binary logic deals with variables that take on two discrete values and with operators that have logical meaning.

While each logical element or condition must always have a logic value of either “0” or “1”, we also need to have ways to combine different logical signals or conditions to provide a logical result.

For example, consider the logical statement: “If I move the switch on the wall up, the light will turn on.” At first glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there’s more to it than this. In this example, a more complete statement would be: “If I move the switch on the wall up and the light bulb is good and the power is on, the light will turn on.”

If we look at these two statements as logical expressions and use logical terminology, we can reduce the first statement to:

Light = Switch

This means nothing more than that the light will follow the action of the switch, so that when the switch is up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be off/false/0.

Looking at the second version of the statement, we have a slightly more complex expression:

Light = Switch and Bulb and Power

When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need some special symbols to denote these functions in a logical diagram. There are three fundamental logical operations, from which all other functions, no matter how complex, can be derived. These functions are named and, or, and not. Each of these has a specific symbol and a clearly-defined behaviour.

AND. The AND operation is represented by a dot (.) or by the absence of an operator. E.g. x.y = z and xy = z are both read as “x AND y = z”. The logical operation AND is interpreted to mean that z = 1 if and only if x = 1 and y = 1; otherwise z = 0.

OR. The OR operation is represented by a + sign. For example, x + y = z is interpreted as “x OR y = z”, meaning that z = 1 if x = 1 or y = 1, or if both x = 1 and y = 1. If both x and y are 0, then z = 0.

NOT. This operation is represented by a prime (or by an overbar). For example, x′ = z is interpreted as “NOT x = z”, meaning that z is the complement of x.

Although the AND and OR operations have some similarity with multiplication and addition, respectively, in binary arithmetic, note that an arithmetic variable may consist of many digits, whereas a binary logic variable is always 0 or 1.

e.g. in binary arithmetic, 1 + 1 = 10, while in binary logic 1 + 1 = 1
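The three operations can be tabulated with Python’s bitwise operators (an illustrative aside, not part of the original text); the last two assertions contrast logic addition with arithmetic addition:

```python
# AND is &, OR is |, and NOT on a 0/1 value is ^ 1.
print("x y | x AND y | x OR y")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "|", x & y, "      |", x | y)

for x in (0, 1):
    print("NOT", x, "=", x ^ 1)

# In binary logic 1 + 1 = 1 (OR); in binary arithmetic 1 + 1 = 10 (two).
assert (1 | 1) == 1
assert 1 + 1 == 2
```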

 

3.3 Basic Gate

 

(Note: the symbols and their truth tables are shown at the end of this page.)

The basic building blocks of a computer are called logical gates or just gates.  Gates are basic circuits that have at least one (and usually more) input and exactly one output. Input and output values are the logical values true and false. In computer architecture it is common to use 0 for false and 1 for true. Gates have no memory. The value of the output depends only on the current value of the inputs.  A useful way of describing the relationship between the inputs of gates and their output is the truth table. In a truth table, the value of each output is tabulated for every possible combination of the input values.

We usually consider three basic kinds of gates, and-gates, or-gates, and not-gates (or inverters). 

 

Ø  The AND Gate

The AND gate implements the AND function. With the gate shown to the left, both inputs must have logic 1 signals applied to them in order for the output to be a logic 1. With either input at logic 0, the output will be held to logic 0.               

There is no limit to the number of inputs that may be applied to an AND function, so there is no functional limit to the number of inputs an AND gate may have. However, for practical reasons, commercial AND gates are most commonly manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or 16 pins, for practical size and handling. A standard 14-pin package can contain four 2-input gates, three 3-input gates, or two 4-input gates, and still have room for two pins for power supply connections.

 

Ø  The OR Gate

The OR gate is sort of the reverse of the AND gate. The OR function, like its verbal counterpart, allows the output to be true (logic 1) if any one or more of its inputs are true. Verbally, we might say, “If it is raining OR if I turn on the sprinkler, the lawn will be wet.” Note that the lawn will still be wet if the sprinkler is on and it is also raining. This is correctly reflected by the basic OR function.

In symbols, the OR function is designated with a plus sign (+). In logical diagrams, the symbol below designates the OR gate.

As with the AND function, the OR function can have any number of inputs. However, practical commercial OR gates are mostly limited to 2, 3, and 4 inputs, as with AND gates.

 

Ø  The NOT Gate, or Inverter

The inverter is a little different from AND and OR gates in that it always has exactly one input as well as one output. Whatever logical state is applied to the input, the opposite state will appear at the output.

The NOT function, as it is called, is necessary in many applications and highly useful in others. A practical verbal application might be:

The door is NOT locked = You may enter

In the inverter symbol, the triangle actually denotes only an amplifier, which in digital terms means that it “cleans up” the signal but does not change its logical sense. It is the circle at the output which denotes the logical inversion. The circle could have been placed at the input instead, and the logical meaning would still be the same.

 

3.4 Combined gates

 

(Note: the symbols and their truth tables are shown at the end of this page.)

Sometimes, it is practical to combine functions of the basic gates into more complex gates, for instance in order to save space in circuit diagrams. In this section, we show some such combined gates together with their truth tables. 

Ø  The NAND-gate   

The NAND-gate, like the and-gate, can take an arbitrary number of inputs. 

The truth table clearly shows that the NAND operation is the complement of the AND

Ø  The nor-gate

The nor-gate, like the or-gate, can take an arbitrary number of inputs.

Ø  The exclusive-or-gate

The exclusive-or-gate is similar to an or-gate. It can have an arbitrary number of inputs; for two inputs, its output value is 1 if and only if exactly one input is 1 (and the other 0). Otherwise, the output is 0. (For more than two inputs, the usual convention is that the output is 1 when an odd number of inputs are 1.)

Let us limit ourselves to gates with n inputs. The truth tables for such gates have 2^n lines. Such a gate is completely defined by the output column in the truth table. The output column can be viewed as a string of 2^n binary digits. How many different strings of binary digits of length 2^n are there? The answer is 2^(2^n), since there are 2^k different strings of k binary digits, and if k = 2^n, then there are 2^(2^n) such strings. In particular, if n = 2, we can see that there are 16 different types of gates with 2 inputs.
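The counting argument can be made concrete for n = 2. The sketch below (illustrative, not from the text) enumerates all 16 possible output columns and picks out a few familiar gates by their columns:

```python
# A 2-input gate is fully defined by its 4-row output column, so there
# are 2**(2**2) = 16 distinct gates.
from itertools import product

inputs = list(product((0, 1), repeat=2))          # the 2**2 = 4 input rows
columns = set(product((0, 1), repeat=len(inputs)))
assert len(columns) == 2 ** (2 ** 2) == 16

# Familiar gates, as output columns over rows ordered 00, 01, 10, 11:
AND = tuple(a & b for a, b in inputs)   # (0, 0, 0, 1)
OR  = tuple(a | b for a, b in inputs)   # (0, 1, 1, 1)
XOR = tuple(a ^ b for a, b in inputs)   # (0, 1, 1, 0)
assert {AND, OR, XOR} <= columns
print(len(columns), "distinct 2-input gates")
```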

Number systems (DLD)

2. Numeric systems

 

The numeric system we use daily is the decimal system, but this system is not convenient for machines, since machine information is encoded as on or off bits; this way of encoding leads us to positional notation, which allows us to express a number in whatever base we need.

 

2.1 Radix number systems

 


A base of a number system or radix defines the range of values that a digit may have.  

In the binary system or base 2, there can be only two values for each digit of a number, either a “0” or a “1”. 

In the octal system or base 8, there can be eight choices for each digit of a number: 

“0”, “1”, “2”, “3”, “4”, “5”, “6”, “7”. 

In the decimal system or base 10, there are ten different values for each digit of a number: 

“0”, “1”, “2”, “3”, “4”, “5”, “6”, “7”, “8”, “9”. 

In the hexadecimal system, we allow 16 values for each digit of a number: 

“0”, “1”, “2”, “3”, “4”, “5”, “6”, “7”, “8”, “9”, “A”, “B”, “C”, “D”, “E”, and “F”. 

Where “A” stands for 10, “B” for 11 and so on.

 

2.2 Conversion among radices

 

·         Convert from Decimal to Any Base

Let’s think about what you do to obtain each digit. As an example, let’s start with the decimal number 1234 and extract its digits one by one. To extract the last digit, you move the decimal point left by one digit, which means that you divide the given number by its base, 10.

     1234/10 = 123 + 4/10

The remainder of 4 is the last digit. To extract the next last digit, you again move the decimal point left by one digit and see what drops out. 

     123/10 = 12 + 3/10

The remainder of 3 is the next last digit. You repeat this process until there is nothing left. Then you stop. In summary, you do the following:

 

   Quotient    | Remainder   

1234/10 =         123       4        

123/10 =             12        3          

12/10 =                 1         2           

1/10 =                   0         1   (Stop when the quotient is 0) 

                                

         (Write in reverse order)   1 2 3 4   (Base 10) 

 

=> Now, let’s try a nontrivial example. Let’s express the decimal number 1341 in binary notation. Note that the desired base is 2, so we repeatedly divide the given decimal number by 2.

Quotient   | Remainder

1341/2 =         670        1    

670/2 =           335        0      

335/2 =           167        1     

167/2 =             83        1      

83/2 =               41        1

41/2 =               20        1

20/2 =               10        0

10/2 =                 5        0

5/2 =                   2        1

2/2 =                   1        0

1/2 =                   0        1      (Stop when the quotient is 0)

(Write in reverse order)        1 0 1 0 0 1 1 1 1 0 1 (BIN; Base 2)
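The repeated-division procedure can be sketched as a small function (the helper name and its hexadecimal digit set are illustrative choices, not from the text):

```python
# Remainder method: divide by the base until the quotient is 0, then
# read the remainders in reverse order.
def to_base(n, base):
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:                    # stop when the quotient is 0
        n, r = divmod(n, base)      # quotient and remainder in one step
        out.append(digits[r])
    return "".join(reversed(out))   # remainders come out last digit first

assert to_base(1341, 2) == "10100111101"
assert to_base(1341, 8) == "2475"
assert to_base(1341, 16) == "53D"
```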

 

=> Let’s express the same decimal number 1341 in octal notation.

Quotient | Remainder

1341/8 =        167        5

167/8 =             20        7

20/8 =                  2        4

2/8 =                    0        2 (Stop when the quotient is 0)

                (Write in reverse order)         2 4 7 5 (OCT; Base 8) 

 

=> Let’s express the same decimal number 1341 in hexadecimal notation.

Quotient | Remainder

1341/16 =         83       13

83/16 =                5        3

5/16 =                   0        5 (Stop when the quotient is 0)

(Write in reverse order)      5 3 D (HEX; Base 16)

 

In conclusion, the easiest way to convert fixed-point numbers to any base is to convert each part separately. We begin by separating the number into its integer and fractional parts. The integer part is converted using the remainder method: successively divide the number by the base until a quotient of zero is obtained. At each division the remainder is kept, and the new number in base r is then obtained by reading the remainders from the last one upwards.

The conversion of the fractional part is obtained by successively multiplying the fraction by the base. Iterating this process on the remaining fraction yields successive significant digits. This method forms the basis of the multiplication method of converting fractions between bases.

 

||Example|| Convert the decimal number 3315 to hexadecimal notation. What about the hexadecimal equivalent of the decimal number 3315.3?

                   Quotient   | Remainder

3315/16 =        207        3

207/16 =          12          15

12/16 =              0           12 (Stop when the quotient is 0)

(Write in reverse order)      C F 3 (HEX; Base 16)  

Product | Integer Part
0.3*16 =    4.8        4
0.8*16 =   12.8       12 -> C
0.8*16 =   12.8       12 -> C
0.8*16 =   12.8       12 -> C
  :
  :

0.4 C C C … (Note: no reverse — the fraction digits are read top to bottom)

Thus, 3315.3 (DEC) -> CF3.4CCC… (HEX)
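The multiplication method for the fractional part can be sketched the same way (an illustrative helper, not from the text). Note that 0.3 has no exact binary representation, so a careful implementation would use exact rational arithmetic; for the first four hex digits, plain floats are accurate enough:

```python
# Multiplication method: at each step the integer part of fraction*base
# becomes the next digit, and we continue with the remaining fraction.
def frac_to_base(frac, base, ndigits):
    digits = "0123456789ABCDEF"
    out = []
    for _ in range(ndigits):
        frac *= base
        d = int(frac)        # integer part becomes the next digit
        out.append(digits[d])
        frac -= d            # keep multiplying the remaining fraction
    return "".join(out)

assert frac_to_base(0.3, 16, 4) == "4CCC"   # 3315.3 -> CF3.4CCC...
```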

 

·         Convert From Any Base to Decimal

Let’s think more carefully about what a decimal number means. For example, 1234 means that there are four boxes (digits); and there are 4 ones in the right-most box (least significant digit), 3 tens in the next box, 2 hundreds in the next box, and finally 1 thousand in the left-most box (most significant digit). The total is 1234: 

     Original Number:         1             2             3              4

  How Many Tokens:       1              2              3             4

 Digit/Token Value:         1000       100         10          1  

   Value:                                1000 +   200  +   30  +       4  =  1234

or simply,  1*1000 + 2*100 + 3*10 + 4*1 = 1234

Thus, each digit has a value: 10^0=1 for the least significant digit, increasing to 10^1=10, 10^2=100, 10^3=1000, and so forth. 

Likewise, the least significant digit in a hexadecimal number has a value of 16^0=1, increasing to 16^1=16 for the next digit, 16^2=256 for the next, 16^3=4096 for the next, and so forth. Thus, 1234 in hexadecimal means that there are four boxes (digits); and there are 4 ones in the right-most box (least significant digit), 3 sixteens in the next box, 2 256’s in the next, and 1 4096 in the left-most box (most significant digit). The total is: 

     1*4096 + 2*256 + 3*16 + 4*1 = 4660
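The positional sum above is exactly what Horner’s rule computes, and Python’s built-in `int(s, base)` performs the same conversion (the helper name is an illustrative choice):

```python
# Polynomial method: value = (((d0*b + d1)*b + d2)*b + d3) ...
digits = "0123456789ABCDEF"

def from_base(s, base):
    value = 0
    for ch in s:
        value = value * base + digits.index(ch)   # Horner's rule
    return value

assert from_base("1234", 16) == 1*4096 + 2*256 + 3*16 + 4*1 == 4660
assert from_base("1234", 16) == int("1234", 16)
assert from_base("4B3", 16) == 1203                # the example below
```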

 

 

||Example|| Convert 234.14, expressed in octal notation, to decimal.

Solution: 2*8^2 + 3*8^1 + 4*8^0 + 1*8^-1 + 4*8^-2

= 2*64 + 3*8 + 4*1 + 1/8 + 4/64

=156.1875

 

||Example|| Convert the hexadecimal number 4B3 to decimal notation. What about the decimal equivalent of the hexadecimal number 4B3.3? 

Solution:

       Original Number:     4              B              3  .         3

      How Many Tokens:  4              11            3              3

      Digit/Token Value: 256            16           1             0.0625

      Value:                         1024 +       176  +    3  +        0.1875  = 1203.1875 

 

||Example|| Convert 234.14, expressed in octal notation, to decimal. 

Solution:

       Original Number:     2              3                4           .  1              4

      How Many Tokens:   2              3               4               1               4

      Digit/Token Value:    64           8                1            0.125      0.015625

      Value:                             128 +     24  +       4  +        0.125 +   0.0625    = 156.1875

 

·         Relationship between Binary-Octal and Binary-Hexadecimal

As demonstrated by the table below, there is a direct correspondence between the binary system and the octal system, with three binary digits corresponding to one octal digit. Likewise, four binary digits translate directly into one hexadecimal digit. 

    

BIN  | OCT | HEX | DEC
0000 | 00  |  0  |  0
0001 | 01  |  1  |  1
0010 | 02  |  2  |  2
0011 | 03  |  3  |  3
0100 | 04  |  4  |  4
0101 | 05  |  5  |  5
0110 | 06  |  6  |  6
0111 | 07  |  7  |  7
1000 | 10  |  8  |  8
1001 | 11  |  9  |  9
1010 | 12  |  A  | 10
1011 | 13  |  B  | 11
1100 | 14  |  C  | 12
1101 | 15  |  D  | 13
1110 | 16  |  E  | 14
1111 | 17  |  F  | 15

 

With this relationship, in order to convert a binary number to octal, we partition the base-2 number into groups of three starting from the radix point, and pad the outermost groups with 0’s as needed to form triples. Then, we convert each triple to its octal equivalent. 

For conversion from base 2 to base 16, we use groups of four.

Consider converting (10110)2 to base 8:

(10110)2 = (010)2 (110)2 = (2)8 (6)8 = (26)8

Notice that the leftmost two bits are padded with a 0 on the left in order to create a full triple.

Now consider converting (10110110)2 to base 16:

(10110110)2 = (1011)2 (0110)2 = (B)16 (6)16 = (B6)16

(Note that B is the base-16 digit corresponding to decimal 11; B is not a variable.)

The conversion methods can be used to convert a number from any base to any other base, but it may not be very intuitive to convert something like 513.03 to base 7. As an aid in performing an unnatural conversion, we can convert to the more familiar base 10 form as an intermediate step, and then continue the conversion from base 10 to the target base. As a general rule, we use the polynomial method when converting into base 10, and we use the remainder and multiplication methods when converting out of base 10.
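The grouping shortcut can be sketched as simple string manipulation (illustrative helper names, not from the text):

```python
# Pad the binary string to a multiple of the group size, then map each
# group of 3 (octal) or 4 (hexadecimal) bits through a digit lookup.
def regroup(bits, group):
    pad = (-len(bits)) % group
    bits = "0" * pad + bits
    return [bits[i:i + group] for i in range(0, len(bits), group)]

def to_octal(bits):
    return "".join(str(int(g, 2)) for g in regroup(bits, 3))

def to_hex(bits):
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in regroup(bits, 4))

assert to_octal("10110") == "26"       # 010 110 -> 2 6
assert to_hex("10110110") == "B6"      # 1011 0110 -> B 6
```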

 

2.3 Numeric complements

The radix complement of an n-digit number y in radix b is, by definition, b^n - y. Adding this to x results in the value x + b^n - y, or x - y + b^n. Assuming y <= x, the result will always be at least b^n, and dropping the initial “1” is the same as subtracting b^n, making the result x - y + b^n - b^n, or just x - y, the desired result.

The radix complement is most easily obtained by adding 1 to the diminished radix complement, which is (b^n - 1) - y. Since b^n - 1 is the digit b - 1 repeated n times (because b^n - 1 = b^n - 1^n = (b - 1)(b^(n-1) + b^(n-2) + … + b + 1) = (b - 1)b^(n-1) + … + (b - 1); see also binomial numbers), the diminished radix complement of a number is found by complementing each digit with respect to b - 1 (that is, subtracting each digit in y from b - 1). Adding 1 to obtain the radix complement can be done separately, but is most often combined with the addition of x and the complement of y.

In the decimal numbering system, the radix complement is called the ten’s complement and the diminished radix complement the nines’ complement. 

In binary, the radix complement is called the two’s complement and the diminished radix complement the ones’ complement. The naming of complements in other bases is similar. 

 

·         Decimal example

To subtract a decimal number y from another number x using the method of complements, the ten’s complement of y (nines’ complement plus 1) is added to x. Typically, the nines’ complement of y is first obtained by determining the complement of each digit. The complement of a decimal digit in the nines’ complement system is the number that must be added to it to produce 9. The complement of 3 is 6, the complement of 7 is 2, and so on. Given a subtraction problem: 

  873  (x)

 – 218  (y)

The nines’ complement of y (218) is 781. In this case, because y is three digits long, this is the same as subtracting y from 999. (The number of 9’s is equal to the number of digits of y.)

Next, the sum of x, the nines’ complement of y, and 1 is taken:  

873  (x)

+ 781  (complement of y)

+   1  (to get the ten`s complement of y)

===== 

1655 

The first “1” digit is then dropped, giving 655, the correct answer.

If the subtrahend has fewer digits than the minuend, leading zeros must be added, which become leading nines when the nines’ complement is taken. For example:

  48032  (x)

–   391  (y)

becomes the sum:  

48032  (x)

+ 99608  (nines’ complement of y)

+     1  (to get the ten’s complement)

======= 

147641

Dropping the “1” gives the answer: 47641
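Both decimal examples follow the same three-step recipe, which can be written as a small function (an illustrative sketch, not from the text):

```python
# Method of complements in base 10: nines' complement, add 1 (giving the
# ten's complement), add to x, then drop the leading "1".
def subtract_by_complements(x, y, ndigits):
    nines = int("9" * ndigits) - y          # nines' complement of y
    total = x + nines + 1                   # add the ten's complement
    return total - 10 ** ndigits            # drop the leading "1"

assert subtract_by_complements(873, 218, 3) == 655
assert subtract_by_complements(48032, 391, 5) == 47641
```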

·         Binary example

The method of complements is especially useful in binary (radix 2), since the ones’ complement is very easily obtained by inverting each bit (changing ‘0’ to ‘1’ and vice versa), and adding 1 to get the two’s complement can be done by simulating a carry into the least significant bit. For example:  

01100100  (x, equals decimal 100)

– 00010110  (y, equals decimal 22) 

becomes the sum:  

01100100  (x)

+ 11101001  (ones’ complement of y)

+        1  (to get the two’s complement)

========== 

101001110

Dropping the initial “1” gives the answer: 01001110 (equals decimal 78)
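The same computation in Python (an illustrative sketch), using XOR with an all-ones mask to invert the bits and a final mask to drop the carry-out:

```python
# 8-bit subtraction by complements: invert y, add 1, add to x, keep the
# low 8 bits (dropping the carry out of the top bit).
BITS = 8
MASK = (1 << BITS) - 1          # 0b11111111

x, y = 0b01100100, 0b00010110   # decimal 100 and 22
ones_comp = y ^ MASK            # invert every bit: 0b11101001
result = (x + ones_comp + 1) & MASK
assert result == 0b01001110 == 78
```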

 

·          Signed fixed point numbers

Up to this point we have considered only the representation of unsigned fixed point numbers. The situation is quite different when representing signed fixed point numbers. There are four commonly used ways of representing signed numbers: signed magnitude, one’s complement, two’s complement, and excess notation. We will cover each in turn, using integers for our examples. Table 1 below shows how the various representations appear for a 3-bit number.

 

Decimal | Unsigned | Sign-Mag | 1’s comp | 2’s comp | Excess 4
   7    |   111    |    –     |    –     |    –     |    –
   6    |   110    |    –     |    –     |    –     |    –
   5    |   101    |    –     |    –     |    –     |    –
   4    |   100    |    –     |    –     |    –     |    –
   3    |   011    |   011    |   011    |   011    |   111
   2    |   010    |   010    |   010    |   010    |   110
   1    |   001    |   001    |   001    |   001    |   101
  +0    |   000    |   000    |   000    |   000    |   100
  -0    |    –     |   100    |   111    |   000    |   100
  -1    |    –     |   101    |   110    |   111    |   011
  -2    |    –     |   110    |   101    |   110    |   010
  -3    |    –     |   111    |   100    |   101    |   001
  -4    |    –     |    –     |    –     |   100    |   000

 

Table 1. 3-bit number representations

 

·         Signed Magnitude Representation 

The signed magnitude (also referred to as sign and magnitude) representation is most familiar to us from the base 10 number system. A plus or minus sign to the left of a number indicates whether the number is positive or negative, as in (+12)10 or (-12)10. In the binary signed magnitude representation, the leftmost bit is used for the sign, which takes on a value of 0 or 1 for ‘+’ or ‘-’, respectively. The remaining bits contain the absolute magnitude. 

Consider representing (+12)10 and (-12)10 in an eight-bit format:

(+12)10 = (00001100)2

(-12)10 = (10001100)2

The negative number is formed by simply changing the sign bit in the positive number from 0 to 1. Notice that there are both positive and negative representations for zero: +0 = 00000000 and -0 = 10000000.

 

·         One’s Complement Representation

The one’s complement operation is trivial to perform: convert all of the 1’s in the number to 0’s, and all of the 0’s to 1’s. See the fourth column in Table 1 for examples. We can observe from the table that in the one’s complement representation the leftmost bit is 0 for positive numbers and 1 for negative numbers, as it is in the signed magnitude representation. This negation, changing 1’s to 0’s and changing 0’s to 1’s, is known as complementing the bits. Consider again representing (+12)10 and (-12)10 in an eight-bit format, now using the one’s complement representation:

(+12)10 = (00001100)2

(-12)10 = (11110011)2

Note again that there are representations for both +0 and -0, which are 00000000 and 11111111, respectively. As a result, there are only 2^8 - 1 = 255 different numbers that can be represented, even though there are 2^8 different bit patterns.

The one’s complement representation is not commonly used. This is at least partly due to the difficulty in making comparisons when there are two representations for 0. There is also additional complexity involved in adding numbers.

 

·         Two’s Complement Representation

The two’s complement is formed in a way similar to forming the one’s complement: complement all of the bits in the number, but then add 1, and if that addition results in a carry-out from the most significant bit of the number, discard the carry-out.

Examination of the fifth column of Table 1 shows that in the two’s complement representation the leftmost bit is again 0 for positive numbers and 1 for negative numbers. However, this number format does not have the unfortunate characteristic of the signed magnitude and one’s complement representations: it has only one representation for zero. To see that this is true, consider forming the negative of (+0)10, which has the bit pattern: (+0)10 = (00000000)2

Forming the one’s complement of (00000000)2 produces (11111111)2, and adding 1 to it yields (00000000)2; thus (-0)10 = (00000000)2. The carry out of the leftmost position is discarded in two’s complement addition (except when detecting an overflow condition). Since there is only one representation for 0, and since all bit patterns are valid, there are 2^8 = 256 different numbers that can be represented.

Consider again representing (+12)10 and (-12)10 in an eight-bit format, this time using the two’s complement representation. Starting with (+12)10 = (00001100)2, complement, or negate, the number, producing (11110011)2.

Now add 1, producing (11110100)2, and thus (-12)10 = (11110100)2:

(+12)10 = (00001100)2

(-12)10 = (11110100)2

There is an equal number of positive and negative numbers provided zero is considered to be a positive number, which is reasonable because its sign bit is 0. The positive numbers start at 0, but the negative numbers start at -1, and so the magnitude of the most negative number is one greater than the magnitude of the most positive number. The positive number with the largest magnitude is +127, and the negative number with the largest magnitude is -128. There is thus no positive number that can be represented that corresponds to the negative of -128. If we try to form the two’s complement negative of -128, then we will arrive back at a negative number, as shown below:

(-128)10 = (10000000)2

complement the bits:   (01111111)2

add 1:               + (00000001)2

result:                (10000000)2 = (-128)10
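Two’s complement negation is small enough to check directly. The sketch below (illustrative, not from the text) confirms the ±12 pair and the -128 edge case:

```python
# Negate an 8-bit two's complement pattern: complement the bits, add 1,
# and keep only the low 8 bits (discarding any carry-out).
def negate(pattern):
    return (~pattern + 1) & 0xFF

assert negate(0b00001100) == 0b11110100   # +12 -> -12
assert negate(0b11110100) == 0b00001100   # -12 -> +12
assert negate(0b10000000) == 0b10000000   # -(-128) wraps back to -128
```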

The two’s complement representation is the representation most commonly used in conventional computers.

 

·         Excess Representation

In the excess or biased representation, the number is treated as unsigned but is “shifted” in value by subtracting the bias from it. The concept is to assign the smallest numerical bit pattern, all zeros, to the negative of the bias, and to assign the remaining numbers in sequence as the bit patterns increase in magnitude. A convenient way to think of an excess representation is that a number is represented as the sum of its two’s complement form and another number, which is known as the “excess” or “bias”. Once again, refer to the rightmost column of Table 1 for examples.

Consider again representing (+12)10 and (-12)10 in an eight-bit format, but now using an excess 128 representation. An excess 128 number is formed by adding 128 to the original number and then creating the unsigned binary version. For (+12)10, we compute (128 + 12 = 140)10 and produce the bit pattern (10001100)2. For (-12)10, we compute (128 - 12 = 116)10 and produce the bit pattern (01110100)2:

(+12)10 = (10001100)2

(-12)10 = (01110100)2
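A minimal sketch of excess-128 encoding (illustrative helper name), reproducing the two bit patterns above and the sorted-order property discussed below:

```python
# Excess-128: shift by the bias, then treat the result as unsigned.
def excess_128(n):
    return n + 128

assert format(excess_128(+12), "08b") == "10001100"
assert format(excess_128(-12), "08b") == "01110100"
# Unsigned comparison of the patterns matches numeric order:
assert excess_128(-12) < excess_128(+12)
```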

Note that there is no numerical significance to the excess value: it simply has the effect of shifting the representation of the two`s complement numbers.

There is only one excess representation for 0, since the excess representation is simply a shifted version of the two`s complement representation. For the previous case, the excess value is chosen to have the same bit pattern as the largest negative number, which has the effect of making the numbers appear in numerically sorted order if the numbers are viewed in an unsigned binary representation.

Thus, the most negative number is (-128)10 = (00000000)2 and the most positive number is (+127)10 = (11111111)2. This representation simplifies making comparisons between numbers, since the bit patterns for negative numbers have numerically smaller values than the bit patterns for positive numbers. This is important for representing the exponents of floating point numbers, in which exponents of two numbers are compared in order to make them equal for addition and subtraction.

 

·         Choosing a bias:

The bias chosen is most often based on the number of bits (n) available for representing an integer. To get an approximately equal distribution of true values above and below 0, the bias should be 2^(n-1) or 2^(n-1) - 1.

 

2.4 Floating point representation

 

Floating point is a numerical representation system in which a string of digits represents a real number. The name floating point refers to the fact that the radix point (decimal point, or more commonly in computers, binary point) can be placed anywhere relative to the digits within the string. A floating point number is of the form a * b^n, where a is the fixed point part, often referred to as the mantissa or significand of the number, b represents the base, and n the exponent. Thus a floating point number can be characterized by a triple of numbers: sign, exponent, and significand.

 

·         Normalization

A potential problem with representing floating point numbers is that the same number can be represented in different ways, which makes comparisons and arithmetic operations difficult. For example, consider the numerically equivalent forms shown below:

3584.1 * 10^0 = 3.5841 * 10^3 = .35841 * 10^4.

In order to avoid multiple representations of the same number, floating point numbers are maintained in normalized form. That is, the radix point is shifted to the left or to the right, and the exponent is adjusted accordingly, until the radix point is to the left of the leftmost nonzero digit. So the rightmost number above is the normalized one. The number zero cannot satisfy this rule, so as an exception zero is represented with a mantissa of all 0’s.

If the mantissa is represented as a binary, that is, base 2, number, and if the normalization condition is that there is a leading "1" in the normalized mantissa, then there is no need to store that "1", and in fact most floating point formats do not store it. Rather, it is "chopped off" before packing up the number for storage, and it is restored when unpacking the number into exponent and mantissa. This gives an additional bit of precision on the right of the number in exchange for the bit removed on the left. This missing bit is referred to as the hidden bit, also known as a hidden 1.

For example, if the mantissa in a given format is 1.1010 after normalization, then the bit pattern that is stored is 1010; the left-most bit is truncated, or hidden.
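A toy sketch of this pack/unpack step on mantissa strings (the helper names are made up for illustration):

```python
def pack_mantissa(normalized):
    """Drop the hidden '1.' prefix of a normalized binary mantissa string."""
    assert normalized.startswith('1.'), "expects a normalized mantissa"
    return normalized[2:]

def unpack_mantissa(stored):
    """Restore the hidden bit when unpacking."""
    return '1.' + stored

print(pack_mantissa('1.1010'))   # 1010  (the leading 1 is hidden)
print(unpack_mantissa('1010'))   # 1.1010
```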

 

2.5 Possible floating point format

In order to choose a possible floating point format for a given computer, the programmer must take into consideration the following:

The number of words used (i.e. the total number of bits used.)

The representation of the mantissa (2s complement etc.)

The representation of the exponent (biased etc.)

The total number of bits devoted for the mantissa and the exponent

The location of the mantissa (exponent first or mantissa first) 

Because of the five points above, the number of ways in which a floating point number may be represented is legion. Many representations use a sign-bit format, in which the leading bit represents the sign:

Sign | Exponent | Mantissa

 

·          The IEEE standard for floating point

The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard for floating point format arithmetic in mini and micro computers (i.e. ANSI/IEEE 754-1985). This standard specifies how single precision (32-bit), double precision (64-bit) and quadruple precision (128-bit) floating point numbers are to be represented, as well as how arithmetic should be carried out on them.

 

-General layout

The three fields in an IEEE 754 float:

Sign | Exponent | Fraction

Binary floating-point numbers are stored in sign-magnitude form, where the most significant bit is the sign bit, "exponent" is the biased exponent, and "fraction" is the significand without its most significant bit.

 

-Exponent biasing

The exponent is biased by 2^(e-1) - 1, where e is the number of bits used for the exponent field (e.g. if e = 8, then 2^(8-1) - 1 = 128 - 1 = 127). Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder. To solve this, the exponent is biased before being stored, adjusting its value to put it within an unsigned range suitable for comparison.

For example, to represent a number which has exponent of 17 in an exponent field 8 bits wide: 

exponent field = 17 + (2^(8-1) - 1) = 17 + 127 = 144.
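The same computation as a small Python sketch (the function name is illustrative, not from the text):

```python
def bias_exponent(e, field_bits):
    """Bias a signed exponent for storage in an unsigned field."""
    bias = (1 << (field_bits - 1)) - 1   # 2^(e-1) - 1, e.g. 127 for 8 bits
    return e + bias

print(bias_exponent(17, 8))   # 144
```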

 

 -Single Precision

The IEEE single precision floating point standard representation requires a 32-bit word, whose bits may be numbered from 0 to 31, left to right. The first bit is the sign bit, S, the next eight bits are the exponent bits, E, and the final 23 bits are the fraction, F:

                 

S          EEEEEEEE       FFFFFFFFFFFFFFFFFFFFFFF
(bit 0)    (bits 1-8)     (bits 9-31)

To convert decimal 17.15 to IEEE Single format: 

Convert decimal 17 to binary: 10001. Convert decimal 0.15 to the repeating binary fraction 0.001001... Combine integer and fraction to obtain binary 10001.001001... Normalize the binary number to obtain 1.0001001001... * 2^4.

Thus, M = m - 1 = 0001001001... (the hidden 1 is dropped) and E = e + 127 = 4 + 127 = 131 = 10000011.

The number is positive, so S = 0. Align the values for M, E, and S in the correct fields:

0              10000011            00010010011001100110011

Note that the exponent always fills its field, padded with leading 0's if necessary, while the mantissa is padded with 0's at the end.
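As a cross-check on the worked example (using Python's struct module, which is not part of the original derivation), the single precision bit pattern of 17.15 can be extracted directly:

```python
import struct

# Pack 17.15 as an IEEE single precision float and split out its three fields.
bits = ''.join(f'{b:08b}' for b in struct.pack('>f', 17.15))
sign, exponent, fraction = bits[0], bits[1:9], bits[9:]
print(sign, exponent, fraction)
# 0 10000011 00010010011001100110011
```

This matches the hand-derived result above.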

 

-Double Precision

The IEEE double precision floating point standard representation requires a 64-bit word, whose bits may be numbered from 0 to 63, left to right. The first bit is the sign bit, S, the next eleven bits are the exponent bits, E, and the final 52 bits are the fraction, F:

               

S          EEEEEEEEEEE     FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
(bit 0)    (bits 1-11)     (bits 12-63)

 

-Quad Precision

The IEEE quad precision floating point standard representation requires a 128-bit word, whose bits may be numbered from 0 to 127, left to right. The first bit is the sign bit, S, the next fifteen bits are the exponent bits, E, and the final 112 bits are the fraction, F:

               

S          EEEEEEEEEEEEEEE     FFFF...F (112 bits)
(bit 0)    (bits 1-15)         (bits 16-127)

 

 

 

                         Single    Double    Quadruple
No. of sign bits         1         1         1
No. of exponent bits     8         11        15
No. of fraction bits     23        52        112
Total bits used          32        64        128
Bias                     127       1023      16383

Table 2: Basic IEEE floating point formats
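The Bias row of the table follows directly from the exponent widths via bias = 2^(e-1) - 1, which can be checked with a short sketch:

```python
# Reproduce the Bias row of Table 2 from the exponent field widths.
biases = {}
for name, e in [('single', 8), ('double', 11), ('quadruple', 15)]:
    biases[name] = (1 << (e - 1)) - 1   # bias = 2^(e-1) - 1
    print(name, biases[name])
```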

 

2.6 Binary code

 

Internally, digital computers operate on binary numbers. When interfacing with humans, digital processors, e.g. pocket calculators, communicate in decimal. Input is entered in decimal and then converted to binary for internal processing; for output, the result has to be converted from its internal binary representation to a decimal form. Digital systems represent and manipulate not only binary numbers but also many other discrete elements of information.

 

·         Binary coded Decimal

In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display, and faster decimal calculations. Its drawbacks are the increased complexity of the circuits needed to implement mathematical operations, and a relatively inefficient encoding: it occupies more space than a pure binary representation. In BCD, a digit is usually represented by four bits encoding the decimal digits 0-9.

To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.

Decimal   0      1      2      3      4      5      6      7      8      9
BCD       0000   0001   0010   0011   0100   0101   0110   0111   1000   1001

Thus, the BCD encoding for the number 127 would be: 

0001  0010  0111
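The digit-by-digit encoding can be sketched in Python (the function name is made up for illustration):

```python
def to_bcd(n):
    """Encode a non-negative integer in BCD, one 4-bit nibble per decimal digit."""
    return ' '.join(format(int(d), '04b') for d in str(n))

print(to_bcd(127))   # 0001 0010 0111
```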

The position weights of the BCD code are 8, 4, 2, 1. Other codes (shown in the table) use position weights of 8, 4, -2, -1 and 2, 4, 2, 1. 

An example of a non-weighted code is the excess-3 code, where each digit's code is obtained from its binary equivalent after adding 3. Thus the code for decimal 0 is 0011, that for 6 is 1001, etc.

Decimal   8 4 2 1    8 4 -2 -1   2 4 2 1    Excess-3
Digit     Code       Code        Code       Code
0         0 0 0 0    0 0 0 0     0 0 0 0    0 0 1 1
1         0 0 0 1    0 1 1 1     0 0 0 1    0 1 0 0
2         0 0 1 0    0 1 1 0     0 0 1 0    0 1 0 1
3         0 0 1 1    0 1 0 1     0 0 1 1    0 1 1 0
4         0 1 0 0    0 1 0 0     0 1 0 0    0 1 1 1
5         0 1 0 1    1 0 1 1     1 0 1 1    1 0 0 0
6         0 1 1 0    1 0 1 0     1 1 0 0    1 0 0 1
7         0 1 1 1    1 0 0 1     1 1 0 1    1 0 1 0
8         1 0 0 0    1 0 0 0     1 1 1 0    1 0 1 1
9         1 0 0 1    1 1 1 1     1 1 1 1    1 1 0 0
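The excess-3 rule (add 3, then take the 4-bit binary equivalent) is easy to check in code (the function name is illustrative):

```python
def excess3(digit):
    """Excess-3 code of a decimal digit: binary of (digit + 3)."""
    return format(digit + 3, '04b')

print(excess3(0))   # 0011
print(excess3(6))   # 1001
```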

It is very important to understand the difference between the conversion of a decimal number to binary and the binary coding of a decimal number. In each case, the final result is a series of bits. The bits obtained from conversion are binary digits; the bits obtained from coding are combinations of 1's and 0's arranged according to the rules of the code used. For example, the binary conversion of 13 is 1101, while the BCD coding of 13 is 0001 0011.

 

·         Error-Detection Codes 

Binary information may be transmitted through some communication medium, e.g. using wires or wireless media. A corrupted bit will have its value changed from 0 to 1 or vice versa. To be able to detect errors at the receiver end, the sender sends an extra bit (the parity bit) with the original binary message.

A parity bit is an extra bit included with the n-bit binary message to make the total number of 1's in this message (including the parity bit) either odd or even. If the parity bit makes the total number of 1's odd (even), it is called odd (even) parity. The table shows the required odd (even) parity for a 3-bit message.

Three-Bit Message   Odd Parity Bit   Even Parity Bit
X Y Z               P                P
0 0 0               1                0
0 0 1               0                1
0 1 0               0                1
0 1 1               1                0
1 0 0               0                1
1 0 1               1                0
1 1 0               1                0
1 1 1               0                1

No error is detectable if the transmitted message has 2 bits in error, since the total number of 1's will remain even (or odd), as in the original message.

In general, a transmitted message with an even number of errors cannot be detected by the parity bit.
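A sketch of how a sender might compute the parity bit for a bit-string message (the function name is illustrative):

```python
def parity_bit(bits, odd=True):
    """Return the parity bit that makes the total number of 1's odd (or even)."""
    ones = bits.count('1')
    if odd:
        return '0' if ones % 2 == 1 else '1'
    return '1' if ones % 2 == 1 else '0'

print(parity_bit('000', odd=True))    # 1  (matches the first table row)
print(parity_bit('101', odd=False))   # 0
```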

 

– Gray code

The Gray code consists of 16 4-bit code words representing the decimal numbers 0 to 15. In the Gray code, successive code words differ in only one bit position.

Gray Code   Decimal Equivalent
0 0 0 0     0
0 0 0 1     1
0 0 1 1     2
0 0 1 0     3
0 1 1 0     4
0 1 1 1     5
0 1 0 1     6
0 1 0 0     7
1 1 0 0     8
1 1 0 1     9
1 1 1 1     10
1 1 1 0     11
1 0 1 0     12
1 0 1 1     13
1 0 0 1     14
1 0 0 0     15
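The code shown above is the binary-reflected Gray code, which can be generated from the binary index with a single XOR (a standard construction, not given in the text):

```python
def gray(n):
    """Binary-reflected Gray code of n: adjacent codes differ in one bit."""
    return n ^ (n >> 1)

for i in range(16):
    print(format(gray(i), '04b'), i)   # reproduces the table above
```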

 

2.7 Character Representation

 

Even though many people used to think of computers as "number crunchers", people figured out long ago that it's just as important to handle character data.

Character data isn't just alphabetic characters, but also numeric characters, punctuation, spaces, etc. Most keys on the central part of the keyboard (except shift, caps lock) are characters. Characters need to be represented; in particular, they need to be represented in binary. After all, computers store and manipulate 0's and 1's (and even those 0's and 1's are just abstractions; the implementation is typically voltages).

Unsigned binary and two's complement are used to represent unsigned and signed integers respectively, because they have nice mathematical properties; in particular, you can add and subtract as you'd expect.

However, there aren't such properties for character data, so assigning binary codes to characters is somewhat arbitrary. The most common character representation is ASCII, which stands for American Standard Code for Information Interchange.

There are two reasons to use ASCII. First, we need some way to represent characters as binary numbers (or, equivalently, as bit-string patterns). There's not much choice about this, since computers represent everything in binary.

If you've noticed a common theme, it's that we need representation schemes for everything. Most importantly, we need representations for numbers and characters. Once you have those (and perhaps pointers), you can build up everything you need.

The other reason we use ASCII is the letter "S" in ASCII, which stands for "standard". Standards are good because they allow for common formats that everyone can agree on.

Unfortunately, there's also the letter "A", which stands for American. ASCII is clearly biased toward the English language character set. Other languages may have their own character sets, even though English dominates most of the computing world (at least, programming and software).

Even though character sets don't have mathematical properties, there are some nice aspects of ASCII. In particular, the lowercase letters are contiguous ('a' through 'z' maps to 97 through 122 decimal). The uppercase letters are also contiguous ('A' through 'Z' maps to 65 through 90 decimal). Finally, the digits are contiguous ('0' through '9' maps to 48 through 57 decimal).

Since they are contiguous, it's usually easy to determine whether a character is lowercase or uppercase (by checking whether its ASCII code lies in the range of lowercase or uppercase codes), to determine whether it's a digit, or to convert a digit in ASCII to its integer value.

The characters between 0 and 31 are generally not printable (control characters, etc.); 32 is the space character. Also note that there are only 128 ASCII characters, so only 7 bits are required to represent an ASCII character. However, since the smallest size representation on most computers is a byte, a byte is used to store an ASCII character. The most significant bit (MSB) of an ASCII character is 0.

The difference in ASCII code between an uppercase letter and its corresponding lowercase letter is 20 (hex). This makes it easy to convert between lower and uppercase in hex (or binary).
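These properties are easy to demonstrate in a short Python sketch:

```python
# ASCII ranges are contiguous, so case tests and conversions are simple arithmetic.
c = 'G'
print(ord('a'), ord('A'), ord('0'))   # 97 65 48: starts of the three ranges
print('A' <= c <= 'Z')                # True: uppercase test by range check
print(chr(ord(c) ^ 0x20))             # g: toggling bit 0x20 flips the case
print(ord('7') - ord('0'))            # 7: ASCII digit to integer value
```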

 

·         Other Character Codes 

While ASCII is still popularly used, another character representation that was used (especially at IBM) is EBCDIC, which stands for Extended Binary Coded Decimal Interchange Code (yes, the word "code" appears twice). This character set has mostly disappeared. EBCDIC does not store letters contiguously, which can create problems when alphabetizing "words".

One problem with ASCII is that it's biased toward the English language. This generally creates some problems. One common solution is for people in other countries to write programs in ASCII.

Other countries have used different solutions, in particular, using 8 bits to represent their alphabets, giving up to 256 letters, which is plenty for most alphabet based languages (recall you also need to represent digits, punctuation, etc). 

However, Asian languages, which are word-based, rather than character-based, often have more words than 8 bits can represent. In particular, 8 bits can only represent 256 words, which is far smaller than the number of words in natural languages. 

Thus, a new character set called Unicode is now becoming more prevalent. This is a 16 bit code, which allows for about 65,000 different representations. This is enough to encode the popular Asian languages (Chinese, Korean, Japanese, etc.). It also turns out that ASCII codes are preserved. What does this mean? To convert ASCII to Unicode, take all one byte ASCII codes, and zero-extend them to 16 bits. That should be the Unicode version of the ASCII characters. 

The biggest consequence of moving from ASCII to Unicode is that text files double in size. The second consequence is that endianness begins to matter again. Endianness is the ordering of individually addressable sub-units (words, bytes, or even bits) within a longer data word stored in external memory. The most typical case is the ordering of bytes within a 16-, 32-, or 64-bit word, where endianness is often simply referred to as byte order. The usual contrast is between most versus least significant byte first, called big-endian and little-endian respectively. Big-endian places the most significant bit, digit, or byte in the first, or leftmost, position. Little-endian places the most significant bit, digit, or byte in the last, or rightmost, position. Motorola processors employ the big-endian approach, whereas Intel processors take the little-endian approach. The table below illustrates how the decimal value 47,572 would be expressed in hexadecimal and binary notation (two octets) and how it would be stored using these two methods.

              Number                Big-Endian            Little-Endian
Hexadecimal   B9D4                  B9D4                  D4B9
Binary        10111001 11010100     10111001 11010100     11010100 10111001

With single bytes, there's no need to worry about endianness. However, you do have to consider it with two-byte (or larger) quantities.
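The two byte orders can be observed with Python's struct module:

```python
import struct

n = 47572                             # 0xB9D4, two octets: B9 and D4
print(struct.pack('>H', n).hex())     # b9d4: big-endian, most significant byte first
print(struct.pack('<H', n).hex())     # d4b9: little-endian, least significant byte first
```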

While C and C++ still primarily use ASCII, Java uses Unicode. This is why Java provides a separate byte type: char in Java is not a single byte, but a 2-byte Unicode representation.

 

Exercise

 

1. The state of a 12-bit register is 010110010111. What is its content if it represents:

i. Decimal digits in BCD code    ii. Decimal digits in excess-3 code    iii. Decimal digits in 8 4 -2 -1 code    iv. A natural binary number

2. The results of an experiment fall in the range -4 to +6. A scientist wishes to read the results into a computer and then process them. He decides to use a 4-bit binary code to represent each of the possible inputs. Devise a 4-bit binary code for representing numbers in the range -4 to 6.

3. The (r-1)'s complement of base-6 numbers is called the 5's complement. Explain the procedure for obtaining the 5's complement of base-6 numbers. Obtain the 5's complement of (3210) base 6.

4. Design a three-bit code to represent each of the six digits of the base-6 number system.

5. Represent the decimal number -234.125 using the IEEE 32-bit (single) format.

introduction to digital logic design 

Introduction

 

A digital computer stores data in terms of digits (numbers) and proceeds in discrete steps from one state to the next. The states of a digital computer typically involve binary digits which may take the form of the presence or absence of magnetic markers in a storage medium, on-off switches or relays. In digital computers, even letters, words and whole texts are represented digitally.

Digital Logic is the basis of electronic systems, such as computers and cell phones. Digital Logic is rooted in binary code, a series of zeroes and ones each having an opposite value. This system facilitates the design of electronic circuits that convey information, including logic gates. Digital Logic gate functions include AND, OR and NOT. The value system translates input signals into specific output. Digital Logic facilitates computing, robotics and other electronic applications.

Digital Logic Design is foundational to the fields of electrical engineering and computer engineering. Digital Logic designers build complex electronic components that use both electrical and computational characteristics. These characteristics may involve power, current, logical function, protocol and user input. Digital Logic Design is used to develop hardware, such as circuit boards and microchip processors. This hardware processes user input, system protocol and other data in computers, navigational systems, cell phones or other high-tech systems.