CSC258

·8 mins

Abstractions #

Engineers and scientists use abstractions to manage complexity. Until now, we have relied on working operating systems and software development environments to build useful tools; in this course, we dive into the lower levels: we uncover digital logic by building gates out of electronic switches called MOSFETs. Gates are then used to build combinational logic - which most CS students have already met through Boolean expressions in programming and logical conditions in math - and sequential logic, which arises from feeding outputs back into inputs (more on this later!).

From Switches to Logic Gates #

At the heart of modern electronics are semiconductor devices, which can be engineered to act as tiny electronic switches. These switches, known as MOSFETs (Metal-Oxide-Semiconductor Field-Effect Transistors), are the fundamental building blocks of all digital logic.

MOSFETs: The Basic Switch #

There are two primary types of MOSFETs:

  • n-type MOSFET (NMOS): An NMOS transistor has a gate, a source, and a drain. When a high voltage is applied to the gate, it creates a conductive channel between the source and drain, allowing current to flow. It’s like a switch that is “ON” when the control signal is high.
  • p-type MOSFET (PMOS): A PMOS transistor works in the opposite way. When a low voltage is applied to the gate, it creates a conductive channel and allows current to flow. It’s a switch that is “ON” when the control signal is low.

Building Logic Gates with CMOS #

By combining NMOS and PMOS transistors, we can create Complementary Metal-Oxide-Semiconductor (CMOS) logic gates. This is the dominant technology used in modern integrated circuits.

  • Inverter (NOT Gate): The simplest logic gate is a NOT gate. It’s built with one PMOS and one NMOS transistor. When the input is high, the NMOS turns on, pulling the output low. When the input is low, the PMOS turns on, pulling the output high.

  • NAND and NOR Gates: More complex gates like NAND and NOR are also built with specific arrangements of PMOS and NMOS transistors. For example, a NAND gate uses two PMOS transistors in parallel and two NMOS transistors in series.

  • AND and OR Gates: While possible to build directly, AND and OR gates are typically created by combining NAND or NOR gates with inverters, as this is more efficient in CMOS design.

This ability to construct basic logic gates from simple switches is the first major step up the ladder of abstraction in computer hardware.
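The switch-level behavior above can be sketched in code. This is a toy model, not a circuit simulator: each transistor is reduced to an ideal voltage-controlled switch, and the gate functions just combine the pull-up (PMOS) and pull-down (NMOS) network conditions.

```python
# Toy model of CMOS gates: each transistor is an ideal voltage-controlled
# switch. NMOS conducts when its gate is high; PMOS when its gate is low.

def nmos_on(gate: int) -> bool:
    return gate == 1   # NMOS: "ON" when the control signal is high

def pmos_on(gate: int) -> bool:
    return gate == 0   # PMOS: "ON" when the control signal is low

def cmos_not(a: int) -> int:
    # PMOS pulls the output up to Vdd when the input is low;
    # NMOS pulls the output down to ground when the input is high.
    return 1 if pmos_on(a) else 0

def cmos_nand(a: int, b: int) -> int:
    # Pull-up network: two PMOS in parallel -- either low input drives high.
    if pmos_on(a) or pmos_on(b):
        return 1
    # Pull-down network: two NMOS in series -- both inputs must be high.
    return 0

# AND is typically built as NAND followed by an inverter.
def cmos_and(a: int, b: int) -> int:
    return cmos_not(cmos_nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nand(a, b), cmos_and(a, b))
```

Printing the four input combinations reproduces the NAND and AND truth tables, confirming that the parallel/series transistor arrangement implements the intended logic.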

Combinational Logic #

Combinational logic circuits are the next level of abstraction. Their outputs are determined solely by their current inputs. These circuits are memoryless.

Representing Logic: Minterms and Maxterms #

To design and analyze combinational circuits, we use Boolean algebra. Any Boolean function can be expressed in a canonical form:

  • Sum of Products (SOP): This form uses minterms. A minterm is a product (AND) term that includes every variable in the function, either in its true or complemented form. The function is then the sum (OR) of the minterms that result in a ‘1’ output.
  • Product of Sums (POS): This form uses maxterms. A maxterm is a sum (OR) term that includes every variable. The function is the product (AND) of the maxterms that result in a ‘0’ output.
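The two canonical forms can be read directly off a truth table. As a sketch, the example function F(A, B, C) = (A AND B) OR C below is my own illustration; the code enumerates all input rows and collects the minterm indices (rows where F = 1) and maxterm indices (rows where F = 0).

```python
from itertools import product

# Derive canonical SOP (minterms) and POS (maxterms) for an example
# function F(A, B, C) = (A AND B) OR C, with A as the most significant bit.

def f(a, b, c):
    return (a and b) or c

minterms, maxterms = [], []
for i, (a, b, c) in enumerate(product((0, 1), repeat=3)):
    if f(a, b, c):
        minterms.append(i)   # rows where F = 1 -> terms of the SOP
    else:
        maxterms.append(i)   # rows where F = 0 -> terms of the POS

print("F = Σm", minterms)    # sum of minterms: [1, 3, 5, 6, 7]
print("F = ΠM", maxterms)    # product of maxterms: [0, 2, 4]
```

Note that every row lands in exactly one list: the SOP and POS forms describe the same function from opposite directions.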

Simplifying Logic: Karnaugh Maps (K-maps) #

Boolean expressions can become complex. Karnaugh maps are a graphical method used to simplify them. By arranging the function’s truth table in a specific grid, we can visually group adjacent ‘1’s (for SOP) or ‘0’s (for POS) to find a simpler, equivalent expression. This simplification is crucial for reducing the number of logic gates needed to implement the circuit, making it smaller, faster, and more power-efficient.
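A concrete (made-up) grouping example: for F(A, B, C) = Σm(0, 1, 4, 5), the four 1s form a single 2×2 group on the K-map in which only B is constant (B = 0), so the whole function collapses to B'. The sketch below verifies the simplification by brute force.

```python
from itertools import product

# K-map example: F(A, B, C) = Σm(0, 1, 4, 5). On the map, the four 1s
# form one 2x2 group where only B stays constant (B = 0), so F = NOT B.

def f_original(a, b, c):
    # canonical sum of products: one AND term per minterm
    return int((a, b, c) in {(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)})

def f_simplified(a, b, c):
    return int(not b)   # the single grouped term: B'

# Brute-force check that grouping did not change the function.
for a, b, c in product((0, 1), repeat=3):
    assert f_original(a, b, c) == f_simplified(a, b, c)
print("K-map simplification verified: F = B'")
```

The payoff is exactly the point made above: four 3-input AND gates plus an OR gate reduce to a single inverter.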

Circuit Delays: Propagation Delay #

In the real world, logic gates are not instantaneous. Propagation delay is the time it takes for the output of a gate to change after its inputs have changed. This delay is a critical factor in the performance of digital circuits. The total delay of a combinational circuit is determined by the longest path from any input to any output, known as the critical path.
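Finding the critical path is a longest-path computation over the circuit's gate graph. The sketch below uses a full adder as the circuit and made-up per-gate delays (the numbers are illustrative, not from any real process).

```python
# Estimate the critical path of a small combinational circuit by taking
# the longest input-to-output delay through the gate graph. Gate delays
# (in picoseconds) are made-up illustrative numbers.

GATE_DELAY_PS = {"AND": 20, "OR": 20, "XOR": 30}

# circuit: node -> (gate type, inputs). This is a full adder:
# s = a XOR b XOR cin, cout = (a AND b) OR ((a XOR b) AND cin)
circuit = {
    "x1":   ("XOR", ["a", "b"]),
    "s":    ("XOR", ["x1", "cin"]),
    "g1":   ("AND", ["a", "b"]),
    "g2":   ("AND", ["x1", "cin"]),
    "cout": ("OR",  ["g1", "g2"]),
}

def arrival(node, memo={}):
    if node not in circuit:          # primary input: available at time 0
        return 0
    if node not in memo:
        gate, ins = circuit[node]
        memo[node] = GATE_DELAY_PS[gate] + max(arrival(i) for i in ins)
    return memo[node]

for out in ("s", "cout"):
    print(out, arrival(out), "ps")   # s: 60 ps, cout: 70 ps
```

Here the carry-out arrives last (XOR → AND → OR), so the cout path is the critical path and sets the circuit's overall delay.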

Common Combinational Devices #

  • Adders:
    • Half Adder: A simple circuit that adds two single bits and produces a sum and a carry output.
    • Full Adder: Adds three bits (two input bits and a carry-in bit) and produces a sum and a carry-out. Full adders are the building blocks of larger adders.
    • Ripple-Carry Adder: Multiple full adders can be chained together to add multi-bit numbers. The carry-out of one full adder becomes the carry-in of the next, causing the carry to “ripple” through the circuit.
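The adder hierarchy above can be sketched behaviorally: a full adder from Boolean operations, then a ripple-carry adder that chains full adders, with each carry-out feeding the next carry-in.

```python
# Sketch of a ripple-carry adder built from full adders. Bit lists are
# stored least-significant-bit first.

def full_adder(a, b, cin):
    s = a ^ b ^ cin                       # sum bit
    cout = (a & b) | (cin & (a ^ b))      # carry-out
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):      # carry ripples bit by bit
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 6 + 7 on 4 bits: 0110 + 0111 = 1101 (13)
a = [0, 1, 1, 0]   # 6, LSB first
b = [1, 1, 1, 0]   # 7, LSB first
bits, carry_out = ripple_carry_add(a, b)
print(bits, carry_out)   # [1, 0, 1, 1] 0  -> 13, no overflow
```

The sequential `for` loop mirrors the hardware's weakness: stage i cannot produce a correct result until stage i-1's carry has settled, which is exactly the ripple delay.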

Sequential Logic #

Unlike combinational circuits, the outputs of sequential circuits depend on both the current inputs and the past sequence of inputs. They have “memory”.

The Basic Memory Element: Latches and Flip-Flops #

  • SR Latch: The most fundamental sequential circuit, built by cross-coupling two NOR or NAND gates. It can store a single bit of information. However, it has an undesirable “invalid” state when both inputs are asserted.
  • SR Flip-Flop: A clocked version of the latch. It only changes state when a clock signal is applied, providing more control.
  • D Flip-Flop (Data/Delay Flip-Flop): A common and useful flip-flop that has a single data input D. The output Q simply takes on the value of D on the active edge of the clock. It’s essentially a way to store one bit of data.
  • T Flip-Flop (Toggle Flip-Flop): This flip-flop has a single input T. If T is high, the output Q “toggles” (flips to its opposite state) on the clock edge. If T is low, the output holds its current state.
  • JK Flip-Flop: A versatile flip-flop that combines the features of the SR and T flip-flops. It can set, reset, hold, or toggle its output.
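The D and T behaviors above can be captured in a few lines. Each `clock` call below models one active clock edge; between calls, `q` holds its stored value, which is the "memory" property.

```python
# Behavioral sketches of edge-triggered flip-flops. Each clock() call
# models one active clock edge; .q holds the stored bit between edges.

class DFlipFlop:
    def __init__(self):
        self.q = 0
    def clock(self, d):
        self.q = d          # Q takes on the value of D at the edge
        return self.q

class TFlipFlop:
    def __init__(self):
        self.q = 0
    def clock(self, t):
        if t:
            self.q ^= 1     # toggle when T is high, hold otherwise
        return self.q

d = DFlipFlop()
print([d.clock(x) for x in (1, 1, 0, 1)])   # [1, 1, 0, 1] -- Q follows D

t = TFlipFlop()
print([t.clock(1) for _ in range(4)])       # [1, 0, 1, 0] -- Q toggles
```

Note the T flip-flop with T held high divides the clock frequency by two; that property is what makes counters possible.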

Building with Flip-Flops: Counters #

By connecting flip-flops together, we can create counters:

  • Asynchronous (Ripple) Counter: The output of one flip-flop serves as the clock input for the next. This design is simple but suffers from a cumulative delay, making it slow for large counters.
  • Synchronous Counter: All flip-flops share a common clock signal and change state simultaneously. This design is faster and more reliable than an asynchronous counter, as it avoids the ripple delay.
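A synchronous counter is a good illustration of the difference: all flip-flops update from the same clock edge, so the T inputs must be computed from the current state before any bit changes. A 3-bit sketch, using T flip-flops:

```python
# A 3-bit synchronous counter built from T flip-flops. All bits share one
# clock; bit i toggles only when every lower-order bit is currently 1.

class TFlipFlop:
    def __init__(self):
        self.q = 0
    def clock(self, t):
        if t:
            self.q ^= 1
        return self.q

bits = [TFlipFlop() for _ in range(3)]   # bits[0] is the LSB

def tick():
    # Compute every T input from the *current* state first, so all
    # flip-flops update simultaneously -- the synchronous property.
    ts = [all(b.q for b in bits[:i]) for i in range(3)]
    for ff, t in zip(bits, ts):
        ff.clock(t)
    return sum(b.q << i for i, b in enumerate(bits))

print([tick() for _ in range(8)])   # [1, 2, 3, 4, 5, 6, 7, 0]
```

In a ripple counter the same count appears only after the carry has propagated flip-flop by flip-flop; here every bit settles one flip-flop delay after the shared edge, regardless of counter width.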

The Processor #

The processor, or Central Processing Unit (CPU), is the brain of the computer, combining combinational and sequential logic to execute instructions.

Memory #

  • SRAM (Static RAM): Each bit is stored in a cell of cross-coupled inverters, essentially a latch. It’s very fast but volatile (loses data when power is off) and less dense, meaning it takes up more space per bit. Often used for cache memory.
  • DRAM (Dynamic RAM): Stores each bit as a charge in a capacitor. It’s much denser and cheaper than SRAM but requires periodic “refreshing” to maintain the charge. It’s also slower. This is what we typically refer to as “main memory” or “RAM”.
  • Tristate Buffer: A special type of buffer that has a third “high-impedance” state, in which it effectively disconnects its output. Tristate buffers are essential for allowing multiple devices to share a common bus (a set of wires), like a data bus.
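The bus-sharing role of tristate buffers can be sketched by modeling the high-impedance state as "no value". In the sketch below, `None` stands for Hi-Z, and the bus checks the invariant that real hardware enforces electrically: at most one device may drive the wires at a time.

```python
# Sketch of a shared bus driven through tristate buffers. A buffer whose
# enable input is low outputs None, modeling the high-impedance (Hi-Z)
# state in which it is effectively disconnected from the wires.

def tristate(enable, value):
    return value if enable else None     # None = Hi-Z (disconnected)

def bus(drivers):
    # drivers: list of (enable, value) pairs sharing the same wires
    active = [v for v in (tristate(e, v) for e, v in drivers)
              if v is not None]
    if len(active) > 1:
        raise RuntimeError("bus contention: more than one active driver")
    return active[0] if active else None   # floating bus if nobody drives

# Only the memory drives the bus; the CPU and I/O device are in Hi-Z.
print(bus([(False, 0xAA), (True, 0x42), (False, 0xFF)]))   # 66 (0x42)
```

The `RuntimeError` branch corresponds to a real design error: two enabled drivers fighting over the same wire can short power to ground.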

MIPS Architecture and Datapath #

  • MIPS: A popular RISC (Reduced Instruction Set Computer) architecture. RISC designs emphasize a small, simple, and highly optimized set of instructions.
  • Datapath: The collection of hardware components that perform the actual data processing, including the Arithmetic Logic Unit (ALU), registers, and multiplexers. The datapath is responsible for executing the arithmetic and logical operations specified by the instructions.

Control Unit #

The control unit is the part of the processor that directs the datapath’s operations. It interprets instructions and generates control signals that tell the datapath components what to do. The basic operation of a processor is the fetch-decode-execute cycle:

  1. Fetch: The control unit fetches the next instruction from memory.
  2. Decode: The instruction is decoded to determine what operation to perform.
  3. Execute: The control unit sends signals to the datapath to execute the operation.
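The cycle above can be sketched as a loop over a toy machine. The opcodes, register names, and program here are all made up for illustration; they are not MIPS instructions.

```python
# A toy fetch-decode-execute loop. The instruction set (LOADI, ADD, HALT)
# and register names are invented for illustration only.

memory = [
    ("LOADI", "r0", 5),            # r0 <- 5
    ("LOADI", "r1", 7),            # r1 <- 7
    ("ADD",   "r2", "r0", "r1"),   # r2 <- r0 + r1
    ("HALT",),
]
regs = {"r0": 0, "r1": 0, "r2": 0}
pc = 0

while True:
    instr = memory[pc]            # 1. fetch the instruction at the PC
    pc += 1                       #    and advance to the next one
    op = instr[0]                 # 2. decode: determine the operation
    if op == "LOADI":             # 3. execute: update the datapath state
        regs[instr[1]] = instr[2]
    elif op == "ADD":
        regs[instr[1]] = regs[instr[2]] + regs[instr[3]]
    elif op == "HALT":
        break

print(regs)   # {'r0': 5, 'r1': 7, 'r2': 12}
```

In real hardware the `if/elif` chain is the control unit's decoder, and each branch corresponds to a distinct set of control signals asserted on the datapath.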

Assembly Language #

Assembly language is a low-level programming language that provides a symbolic representation of the machine code instructions that a processor can execute.

MIPS Instructions #

MIPS instructions are categorized into three main types:

  • R-type (Register): Instructions that operate on data stored in registers. (e.g., add, sub, and)
  • I-type (Immediate): Instructions that operate on a register and an immediate value (a constant encoded in the instruction itself). (e.g., addi, lw, sw)
  • J-type (Jump): Instructions that perform unconditional jumps to a new address in the program.

These instructions are translated into 32-bit machine code that the processor can understand. The format of the machine code determines which fields correspond to the operation, source and destination registers, and immediate values. These fields are then used by the control unit to generate the appropriate control signals for the datapath.
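Packing those fields into a word is just shifting and OR-ing. The sketch below encodes the R-type instruction `add $t0, $t1, $t2` using the standard MIPS R-type layout (opcode 6 bits, rs/rt/rd/shamt 5 bits each, funct 6 bits).

```python
# Encode the R-type instruction `add $t0, $t1, $t2` into its 32-bit
# machine word. Field layout: opcode(6) rs(5) rt(5) rd(5) shamt(5) funct(6).

def encode_r_type(opcode, rs, rt, rd, shamt, funct):
    return (opcode << 26) | (rs << 21) | (rt << 16) \
         | (rd << 11) | (shamt << 6) | funct

# Register numbers: $t0 = 8, $t1 = 9, $t2 = 10.
# `add` is an R-type with opcode 0 and funct 0x20.
word = encode_r_type(opcode=0, rs=9, rt=10, rd=8, shamt=0, funct=0x20)
print(f"0x{word:08X}")   # 0x012A4020
```

Decoding in the control unit is the inverse: shift and mask the word to recover each field, then use the opcode and funct fields to select control signals.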

Function Calls and the Stack #

To support function calls, MIPS uses a stack, which is a region of memory used for temporary storage. When a function is called, a stack frame is created to store:

  • Return Address: The address of the instruction to return to after the function completes.
  • Arguments: The parameters passed to the function.
  • Saved Registers: Registers that need to be preserved so they can be restored before the function returns.
  • Local Variables: Variables used only within the function.

There are specific conventions for which registers are used for arguments ($a0-$a3), return values ($v0-$v1), and which registers the caller ($t0-$t9) or callee ($s0-$s7) is responsible for saving. Following these conventions is crucial for writing correct and modular assembly code.
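The save/restore discipline can be sketched with a simulated stack. This is a Python illustration of the convention, not real MIPS; the comments show the corresponding `sw`/`lw` instructions, and the addresses and values are made up.

```python
# Sketch of the MIPS callee-saved convention using a simulated stack.
# Register names follow MIPS; addresses and values are illustrative.

stack = {}   # simulated memory, keyed by address
regs = {"$sp": 0x1000, "$ra": 0x400, "$a0": 21, "$s0": 99, "$v0": 0}

def double_it():
    # prologue: allocate a frame, save what the callee must preserve
    regs["$sp"] -= 8                       # addi $sp, $sp, -8
    stack[regs["$sp"] + 4] = regs["$ra"]   # sw   $ra, 4($sp)
    stack[regs["$sp"]]     = regs["$s0"]   # sw   $s0, 0($sp)

    regs["$s0"] = regs["$a0"]              # callee may now use $s0 freely
    regs["$v0"] = regs["$s0"] * 2          # return value goes in $v0

    # epilogue: restore saved registers and pop the frame
    regs["$s0"] = stack[regs["$sp"]]       # lw   $s0, 0($sp)
    regs["$ra"] = stack[regs["$sp"] + 4]   # lw   $ra, 4($sp)
    regs["$sp"] += 8                       # addi $sp, $sp, 8; jr $ra

double_it()
# $v0 holds the result; $s0 and $sp are back to their original values,
# so the caller's view of the callee-saved state is undisturbed.
print(regs["$v0"], regs["$s0"], hex(regs["$sp"]))
```

The key invariant is visible in the output: the callee used `$s0` internally, yet the caller sees it unchanged, because the prologue saved it and the epilogue restored it.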