I've inherited some VHDL code I need to extend. There's a piece in there that looks like a bug to me, but while I'm a long-time C developer, I have no experience with VHDL. I understand that processes essentially run in parallel, but within a process, does order matter? In the following code, a CPLD is receiving serial ADC data from a microcontroller. The data stream contains the results from two separate 8-bit conversions and is being loaded into two registers, one bit at a time. However, it looks like, at the end of receiving the first byte of ADC data, the MSB of this byte is being overwritten with the MSB from the second byte before it's loaded into its register. Is this a bug? I've removed some of the code to make the sample more readable.
LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_arith.all;
use ieee.std_logic_unsigned.all;

entity CPLD is port(
    ADC_Clk  : in std_logic;
    ADC_Data : in std_logic);
end CPLD;

architecture arch_CPLD of CPLD is
    signal ADC_Counter : std_logic_vector (7 downto 0);
    signal ADC_Reg_1   : std_logic_vector (7 downto 0);
    signal ADC_Temp    : std_logic_vector (7 downto 0);
begin
    -- Shift in data from two 8-bit ADC channels
    -- ADC_Counter incremented by another process on rising edge of ADC_Clk
    ADC_Shift: process(ADC_Clk)
    begin
        if (falling_edge(ADC_Clk)) then
            -- Shift in 8 bits from ADC data channel 1
            if (ADC_Counter = 5) then        -- skip over sync bits
                ADC_Temp(7) <= ADC_Data;     -- get MSB of data
            elsif (ADC_Counter = 6) then
                ADC_Temp(6) <= ADC_Data;
            -- code omitted for clarity : getting bits 5 to 1
            elsif (ADC_Counter = 12) then
                ADC_Temp(0) <= ADC_Data;     -- get LSB of data
            -- shift in 8 bits from ADC data channel 2
            elsif (ADC_Counter = 13) then
                ADC_Temp(7) <= ADC_Data;     -- BUG? Overwriting MSB with next byte?
                ADC_Reg_1   <= ADC_Temp;     -- first byte of data to Register 1
            elsif (ADC_Counter = 14) then    -- continue getting second byte
                ADC_Temp(6) <= ADC_Data;
            -- code omitted for clarity
            end if;
        end if;
    end process ADC_Shift;
end arch_CPLD;
Comment (Joe Hass): What happens when you simulate the code? Does it work the way you expect it should?
Comment (Allen Moore, OP): @JoeHass A good question, but unfortunately I haven't yet learned how to use the simulator.
3 Answers
Not knowing all the details, in principle it is not a bug. This is a process that occurs whenever there is a falling edge on the associated clock. You can imagine that all signal reads happen right before the edge, and all the writes happen at the edge.
So, having this inside a clocked process:
a <= b; -- after the edge, a will have the contents of b
c <= a; -- after the edge, c will have the contents of a (NOT THE CONTENTS OF b).
The order of the above assignments does not matter, and can be represented by the following schematic:
[Schematic (created using CircuitLab): two D flip-flops on the same clock; b feeds the D input of flip-flop a, and a feeds the D input of flip-flop c.]
As you can see in the above schematic, each flip-flop has a 'queued' value D, and it becomes the output Q at the clock's edge. a has b queued, and c has a queued. So in a single clock, b can't propagate all the way to c (it requires 2 clock cycles). So specifying the signal 'queuing' in code does not need any particular order.
In your specific example, when ADC_Counter is 13, ADC_Temp(7) has ADC_Data queued, and ADC_Reg_1 has ADC_Temp queued. ADC_Reg_1 is not getting corrupted with the new data bit, in the same way that our c is not getting corrupted with b.
Note: This would be different if we were talking about variables (you'd be using the := operator instead of <=), but your code only contains signals.
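To make that note concrete, here is a hedged sketch of what the question's process might look like if ADC_Temp were rewritten as a process variable; it reuses the question's signal names, would have to live inside the same architecture, and the variable name ADC_Temp_v is made up for this illustration, not part of the original design:

-- Illustration only: if ADC_Temp were a process variable instead of a signal,
-- the assignment would take effect immediately and the order WOULD matter.
ADC_Shift_Var : process(ADC_Clk)
    variable ADC_Temp_v : std_logic_vector(7 downto 0);  -- hypothetical variable
begin
    if falling_edge(ADC_Clk) then
        if ADC_Counter = 13 then
            ADC_Temp_v(7) := ADC_Data;    -- updates immediately (variable assignment)
            ADC_Reg_1     <= ADC_Temp_v;  -- now captures the NEW bit 7: byte 1 corrupted
        end if;
    end if;
end process ADC_Shift_Var;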
If ADC_Temp were a variable, you would be right. However, it's a signal, and signals are VHDL's inter-process communication mechanism; a (very!) little like pipes to a C programmer.
Thus the OLD value in ADC_Temp is available at the assignment to ADC_Reg_1, despite this assignment following an assignment to ADC_Temp.
What actually happens to the first assignment is that it is postponed: scheduled to take place once the process has suspended. Therefore, the old ADC_Temp value is still preserved at the time of the second assignment.
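If you want to watch this happen in a simulator, here is a minimal, simulation-only sketch; the entity name demo_postponed and the signals s and copy_of_s are invented for this illustration:

-- Simulation-only sketch demonstrating postponed signal updates (all names invented).
library ieee;
use ieee.std_logic_1164.all;

entity demo_postponed is
end demo_postponed;

architecture sim of demo_postponed is
    signal s          : integer := 0;
    signal copy_of_s  : integer := -1;
begin
    process
    begin
        s <= 5;            -- scheduled, not yet visible
        copy_of_s <= s;    -- reads the OLD value of s (0), not 5
        wait for 1 ns;     -- let the scheduled updates take effect
        report "s=" & integer'image(s) &
               " copy_of_s=" & integer'image(copy_of_s);
        -- expected: s=5 copy_of_s=0
        wait;
    end process;
end sim;

Run in any VHDL simulator, this should report s=5 copy_of_s=0, showing that the second assignment read the old value of s.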
The organisation of signal assignments in this way is sometimes called VHDL's "crown jewel", and it eliminates the race conditions that plague other simulation technologies.
The answer is no, what you see is not a bug. Here is my rather simplified expression of why it is expected to work correctly as written.
The first thing to emphasize is that your design is working with signals and not variables (either in the VHDL sense or that of a traditional SW programming language). A signal has a driving value and may have a whole projected set of future values, making a waveform of events scheduled at later times. That alone makes it different from a SW variable, which is just the name of a location in the computer's memory for a single value.
Next, each process (the stuff from process ... end process) is persistent (never disappears) and is considered to be operating in parallel, or concurrently, with every other process. That is the key -- concurrency in time.
Think about that for a second, or two. On a computer running a simulation of this system, sequentially executing instructions one after the other, how can it model the concurrency? The key lies in how time advances, and that is where the process statement comes in.
The statements inside a process are classified as sequential statements. Evaluation of all the sequential statements within a process body proceeds from top to bottom, sequentially, to where an invisible VHDL wait statement lurks. A wait statement is sensitive to one or more named signals changing values. (Here, that would be the falling edge of ADC_Clk.) When a signal in the sensitivity list changes -- has an event on it -- the wait is satisfied and evaluation resumes at the top of the process. (Did you catch that? The process is persistent, so it represents an intentional infinite loop of evaluation!!!) Some special updating semantics occur at the wait, which I'll describe shortly.
The semantics of a process mean that every process must be evaluated to its wait statement in what is called a delta cycle. The magic is that a delta cycle causes exactly zero hardware time to pass while all that evaluation is going on. (Obviously the CPU running the simulation did a lot of work, costing you real wall clock time.)
So from your perspective, and that of every process, the system is concurrently evaluating all the processes.
Once all the processes have done their business and are waiting, the drivers for each signal that was assigned something get updated according to the signal assignments done with <=. This symbol essentially means, "update the left hand side signal with the value of the expression on the right when the wait statement is hit". The signal's driver is what represents the current value of the signal at the given hardware moment in time, not the value it might take in the future once the wait statement is reached.
Now, perhaps some process updated some signals and they have changed. Another delta cycle occurs if another process was sensitive to that signal changing. This proceeds again and again and again until no signal has changed and the delta cycles end.
Now a special thing happens: Evaluation of time must occur. A decision must be made to advance to a new actual hardware time. All the signals are examined to determine when in the future one or more signals have an event on their projected waveform. The signal(s) with the shortest time to an event is selected. The hardware time immediately advances to that moment of the event and starts a delta cycle for all the processes with a wait statement sensitive to the changing signal. Now, the whole thing starts over. Nothing happens unless something changes.
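As a hedged illustration of those delta cycles, here is a tiny two-process sketch with made-up names (a, b, p1, p2); both signal updates happen at the same hardware time of 10 ns, separated only by delta cycles:

-- Simulation-only sketch of a delta-cycle chain (all names invented for illustration).
library ieee;
use ieee.std_logic_1164.all;

entity delta_demo is
end delta_demo;

architecture sim of delta_demo is
    signal a, b : std_logic := '0';
begin
    p1 : process
    begin
        wait for 10 ns;
        a <= '1';          -- event on a at 10 ns, after one delta cycle
        wait;
    end process p1;

    p2 : process(a)        -- wakes up in a later delta cycle of the same 10 ns
    begin
        b <= a;            -- b follows a one more delta cycle later, still at 10 ns
    end process p2;
end sim;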
This is a bit of a simplistic rendering of what occurs, but it is close enough to explain why your code does not have a bug, because now you know the semantics of modeling concurrency with VHDL.
Statements within the process are so-called sequential statements. It is common to think of them as executing sequentially, but I think the words evaluating sequentially are better. After all, a synthesis tool evaluates, not executes. Plus, the term evaluation allows for the deferred-update semantics of the <= operation. That effect means all the actual updating is deferred until the point where that invisible wait statement is reached, and consequently (and quite magically) the order of statements doesn't matter. So,
ADC_Temp(7) <= ADC_Data;
ADC_Reg_1 <= ADC_Temp;
achieves the exact same effect as
ADC_Reg_1 <= ADC_Temp;
ADC_Temp(7) <= ADC_Data;
because the values of ADC_Temp and ADC_Data in the expressions don't change, even though it feels like ADC_Temp(7) should be changing.
It is worth stopping here and pondering this. The right-hand side expressions are evaluated based on the current driver values of each signal appearing in them, and those values don't change until the wait statement is hit.
This concept of concurrency and the coding semantics of its expression in VHDL, along with the vague similarity of a process's sequential statements to software code, is perhaps the hardest thing for software coders to grapple with. VHDL code looks so much like software that developers just want to use the same brain cells to think about it. Don't. Develop some new neural pathways and you are on your way to mastering VHDL code semantics.
Incidentally, this is why I refuse to call this VHDL Programming and am not afraid of smacking people who call it that. It's not programming, but it is coding. Coding hardware, that is!
PS: To those experts out there who might complain about the myriad of details left out of this explanation, I understand your reservations. But too many details spoil the soup.