Cross Module Reference

Cross Module Reference, abbreviated as XMR, is a very useful concept in Verilog HDL (as well as SystemVerilog). However, it seems to be less known among many users of Verilog. XMR is a mechanism built into Verilog for referencing globally (i.e., across modules) any net, task, function, etc. Using XMR, one can refer to any object of a module from any other module, irrespective of whether that module sits below or above it in the hierarchy. Hence, an XMR can be a: Downward reference OR Upward reference. Consider the following hierarchy:

Module A
    Net x
    Instance P of Module B
        Net x
        Instance M of Module D
            Net x
    Instance Q of Module C
        Net x
        Instance N of Module E
            Net x
    Instance R of Module B
        Net x
        Instance M of Module D
            Net x
...
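As a rough illustration, here is a minimal Verilog sketch of part of that hierarchy with a few downward XMRs made from module A; the module and instance names follow the listing above, while the constant values assigned to the nets are purely illustrative.

module D;
  wire x;
  assign x = 1'b1;
endmodule

module B;
  wire x;
  assign x = 1'b0;
  D M();            // instance M of module D
endmodule

module A;
  wire x;
  B P();            // instance P of module B
  B R();            // instance R of module B

  initial begin
    #1;  // let the continuous assignments settle
    // Downward hierarchical (cross module) references from A:
    $display("P.x     = %b", P.x);      // net x inside instance P
    $display("P.M.x   = %b", P.M.x);    // net x two levels down
    // A rooted reference works from anywhere in the design:
    $display("A.R.M.x = %b", A.R.M.x);
  end
endmodule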
Why Reset? A Reset is required to initialize a hardware design for system operation and to force an ASIC into a known state for simulation. A reset simply changes the state of the device/design/ASIC to a user/designer-defined state. There are two types of reset; as you can guess, they are Synchronous reset and Asynchronous reset. Synchronous Reset A synchronous reset signal will only affect or reset the state of the flip-flop on the active edge of the clock. The reset signal is applied just like any other input to the state machine. Advantages: The advantage of this topology is that the reset presented to all functional flip-flops is fully synchronous to the clock and will always meet the reset recovery time. Synchronous reset logic will synthesize to smaller flip-flops, particularly if the reset is gated with the logic generating the d-input. But in such a case, the combinational logic gate count grows, so the overall gate count savings may not be ...
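A minimal Verilog sketch contrasting the two reset styles (port names are illustrative):

// Synchronous reset: reset is sampled only on the active clock edge
module dff_sync_reset (
  input  wire clk, rst, d,
  output reg  q
);
  always @(posedge clk)
    if (rst) q <= 1'b0;     // reset acts like any other synchronous input
    else     q <= d;
endmodule

// Asynchronous reset: reset takes effect immediately, independent of clk
module dff_async_reset (
  input  wire clk, rst_n, d,
  output reg  q
);
  always @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;  // asserted without waiting for a clock edge
    else        q <= d;
endmodule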
Designing an FSM is one of the most common and challenging tasks for a digital logic designer. One of the key factors in optimizing an FSM design is the choice of state encoding, which influences the complexity of the logic functions, the hardware cost of the circuit, timing, power usage, etc. There are several options, such as binary encoding, Gray encoding, one-hot encoding, etc. The designer's choice depends on factors such as technology, design specifications, etc. One-hot encoding In one-hot encoding only one bit of the state vector is asserted for any given state. All other state bits are zero. Thus if there are n states then n state flip-flops are required. As only one bit remains logic high and the rest are logic low, it is called one-hot encoding. Example: If an FSM has 5 states, then 5 flip-flops are required to implement it using one-hot encoding. The states will have the following values:

S0 - 10000
S1 - 01000
S2 - 00100
S3 - 00010
S4 - 00001

Adv...
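A minimal Verilog sketch of a 5-state FSM using the one-hot values listed above; the transition conditions are illustrative.

module fsm_onehot (
  input  wire       clk, rst, go,
  output reg  [4:0] state
);
  // One flip-flop per state; exactly one bit is high in each encoding
  localparam S0 = 5'b10000,
             S1 = 5'b01000,
             S2 = 5'b00100,
             S3 = 5'b00010,
             S4 = 5'b00001;

  always @(posedge clk) begin
    if (rst)
      state <= S0;
    else
      case (state)
        S0:      state <= go ? S1 : S0;
        S1:      state <= S2;
        S2:      state <= S3;
        S3:      state <= S4;
        S4:      state <= S0;
        default: state <= S0;   // recover if the state is ever not one-hot
      endcase
  end
endmodule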
Random Access Memory (RAM) is a type of computer data storage. It is mainly used as the main memory of a computer. RAM allows data to be accessed in any order, i.e., randomly. The word random thus refers to the fact that any piece of data can be returned in constant time, regardless of its physical location and whether or not it is related to the previous piece of data. You can access any memory cell directly if you know the row and column that intersect at that cell. Most RAM chips are volatile types of memory, where the information is lost after the power is switched off. There are some non-volatile types, such as ROM and NOR flash. SRAM: Static Random Access Memory SRAM is static, which means it does not need to be periodically refreshed, as SRAM uses bistable latching circuitry to store each bit. SRAM is volatile memory. Each bit in an SRAM is stored on four transistors that form two cross-coupled inverters. This storage cell has two stable states which are used to denote...
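As a rough illustration of the "any address in constant time" idea at the RTL level, here is a minimal behavioral Verilog model of a single-port RAM; the depth and width parameters are illustrative, not tied to any real device.

module ram_sp #(
  parameter ADDR_W = 8,
  parameter DATA_W = 8
) (
  input  wire              clk,
  input  wire              we,     // write enable
  input  wire [ADDR_W-1:0] addr,
  input  wire [DATA_W-1:0] din,
  output reg  [DATA_W-1:0] dout
);
  reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];

  always @(posedge clk) begin
    if (we)
      mem[addr] <= din;   // write on the clock edge
    dout <= mem[addr];    // read: any location, same access time
  end
endmodule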
Direct memory access (DMA) is a feature of modern computers that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Principle of DMA DMA is an essential feature of all modern computers, as it allows devices to transfer data without subjecting the CPU to heavy overhead. Otherwise, the CPU would have to copy each piece of data from the source to the destination. This is typically slower than copying normal blocks of memory, since access to I/O devices over a peripheral bus is generally slower than normal system RAM. During this time the CPU would be unavailable for any other tasks involving CPU bus access, although it could continue doing any work that did not require bus access. A DMA transfer essentially copies a block of memory from one device to an...
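Purely as an illustration of the principle, here is a minimal Verilog sketch of a DMA-style block-copy engine: the CPU programs source, destination, and length, asserts start, and is then free until done. The port names and the simple memory interface (asynchronous read, synchronous write, one word per access) are assumptions made for the sketch, not any real bus protocol.

module dma_copy #(
  parameter ADDR_W = 16,
  parameter DATA_W = 32
) (
  input  wire              clk, rst,
  input  wire              start,       // CPU kicks off the transfer...
  input  wire [ADDR_W-1:0] src, dst,
  input  wire [ADDR_W-1:0] len,         // number of words to copy (> 0)
  output reg               busy, done,  // ...and is free until 'done'
  // simple memory port: asynchronous read, synchronous write
  output reg               mem_we,
  output reg  [ADDR_W-1:0] mem_addr,
  output reg  [DATA_W-1:0] mem_wdata,
  input  wire [DATA_W-1:0] mem_rdata
);
  localparam IDLE = 2'd0, READ = 2'd1, WRITE = 2'd2;

  reg [1:0]        state;
  reg [ADDR_W-1:0] s_ptr, d_ptr, count;

  always @(posedge clk) begin
    if (rst) begin
      state <= IDLE; busy <= 1'b0; done <= 1'b0; mem_we <= 1'b0;
    end else begin
      done   <= 1'b0;
      mem_we <= 1'b0;
      case (state)
        IDLE: if (start && len != 0) begin
          s_ptr <= src; d_ptr <= dst; count <= len;
          busy  <= 1'b1; state <= READ;
        end
        READ: begin                  // present the source address
          mem_addr <= s_ptr;
          state    <= WRITE;
        end
        WRITE: begin                 // forward the word to the destination
          mem_addr  <= d_ptr;
          mem_wdata <= mem_rdata;
          mem_we    <= 1'b1;
          s_ptr     <= s_ptr + 1'b1;
          d_ptr     <= d_ptr + 1'b1;
          count     <= count - 1'b1;
          if (count == 1) begin
            busy <= 1'b0; done <= 1'b1; state <= IDLE;
          end else begin
            state <= READ;
          end
        end
        default: state <= IDLE;
      endcase
    end
  end
endmodule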
Every flip-flop has restrictive time regions around the active clock edge in which the input should not change. We call them restrictive because if the input changes in these regions, the output may not be the expected one (*see below). It may be derived from either the old input, the new input, or even something in between the two. Here we define two very important terms in digital clocking: setup and hold time.
The setup time is the interval before the clock edge during which the data must be held stable. The hold time is the interval after the clock edge during which the data must be held stable. Hold time can be negative, which means the data can change slightly before the clock edge and still be properly captured. Most present-day flip-flops have zero or negative hold time.
In the above figure, the shaded region is the restricted region. The shaded region is divided into two parts by the dashed line. The left-hand part of the shaded region is the setup time period and the right-hand part is the h...
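A minimal sketch of how these requirements are expressed as Verilog timing checks in a specify block; the 1.2 ns setup and 0.5 ns hold limits are illustrative numbers, not taken from any real cell library.

`timescale 1ns/1ps
// Behavioral D flip-flop with setup/hold timing checks
module dff_checked (
  input  wire clk, d,
  output reg  q
);
  always @(posedge clk)
    q <= d;

  specify
    // violation if d changes within 1.2 ns before posedge clk
    $setup(d, posedge clk, 1.2);
    // violation if d changes within 0.5 ns after posedge clk
    $hold(posedge clk, d, 0.5);
  endspecify
endmodule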
Parallel and serial data transmission are the two most widely used data transfer techniques. Parallel transfer has been the preferred way to transfer data. But with serial data transmission we can achieve high speed, along with some other advantages. In parallel transmission n bits are transferred simultaneously, so each bit has to be handled separately and lined up in order at the receiver; hence we have to convert between parallel and serial form. This is known as overhead in parallel transmission. Signal skew is another problem with parallel data transmission. In parallel communication, n bits leave at a time, but they may not be received at the same time; some may arrive later than others. To overcome this problem, the receiving end has to synchronize with the transmitter and must wait until all the bits are received. The greater the skew, the greater the delay, and increased delay reduces the speed. Another problem associated with parallel transmission is crosstalk. ...
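A minimal Verilog sketch of the parallel-to-serial conversion mentioned above, as a parallel-in, serial-out (PISO) shift register; the width parameter and port names are illustrative.

module piso #(
  parameter WIDTH = 8
) (
  input  wire             clk,
  input  wire             load,      // capture the parallel word
  input  wire [WIDTH-1:0] din,
  output wire             sout       // one bit per clock, MSB first
);
  reg [WIDTH-1:0] shift;

  always @(posedge clk)
    if (load)
      shift <= din;                       // load all bits at once (parallel)
    else
      shift <= {shift[WIDTH-2:0], 1'b0};  // shift one bit out per clock

  assign sout = shift[WIDTH-1];
endmodule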