Blog: The case for running equivalence checking on your FPGA

Alex Grove

FirstEDA

Why we should continue to leverage ASIC methodology solutions during FPGA development

A few weeks ago I was fortunate to be asked to present at the NMI’s FPGA Network event, “What Next After Flicking the Switch?” held at TRW Conekt in Birmingham. (If you’ve ever visited, you’ll know it’s a historic site; the main building has been preserved from the days of Lucas Research, built in the 1960s and very different from the usual event venues.)

 

The last time I was there for an NMI event we were talking about functional verification of FPGAs; this time it was about debug in the lab. My opening was along the lines of “things have not gone well, we are now in the lab with our logic analysers and have no idea why the design isn’t working! The pressure is on…”

 

Now, I am not suggesting that we should verify everything before going into the lab. The great benefit of FPGAs is that they are reprogrammable, which we should use to our advantage. (A topic in itself for a lively discussion and another blog!) So, the device isn’t working as expected and we know that there could be a functional issue (e.g. at integration); time to debug.

 

What if the problem was due to a systemic bug, for example a logic error introduced in the implementation process? How long would it be until you realised that you had a tool issue? How long would it take to isolate the problem and to get development back on schedule?

 

Being stuck in the lab struggling to bring up a design is not a comfortable experience. Your manager asks “how long?” and you have no idea, but there is real pressure to provide an answer.

 

The use of Equivalence Checking (EC) has been mandatory in the ASIC industry for well over a decade now, but what about EC for FPGA? The project schedule is just one example of the case for EC in FPGA. In many ways a systemic error found in the lab is a best-case scenario. Systemic errors are the sort of errors that can escape through R&D and into the product, where the cost (and credibility damage) is far greater.

 

For me, the value of EC for any company can best be defined by the question, “What is the cost of failure?” In my presentation I highlighted the following scenarios:

 

  • Life and limb
  • The environment
  • Time & money (business development costs)
  • Product success & revenue (late to market)
  • Credibility

 

Of course there are alternatives to EC, such as gate-level simulation and Aldec’s in-hardware test solution CTS. Today, such approaches are widely used in safety-critical applications such as airborne electronics. As complexity grows, however, the need for exhaustive testing becomes more apparent, and only formal methods can provide an exhaustive check at the gate level for current FPGAs.
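To see why simulation alone struggles to be exhaustive, here is a minimal, purely illustrative Python sketch. The functions are hypothetical (they are not from any EDA tool): one models the “golden” RTL behaviour of a 1-bit full adder, the other a post-implementation netlist with a deliberately injected logic error. With three inputs an exhaustive sweep is trivial; the point is how quickly that stops being true as input count grows, which is where a formal proof of equivalence takes over.

```python
from itertools import product

def rtl_full_adder(a, b, cin):
    """'Golden' RTL behaviour of a 1-bit full adder (illustrative only)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def netlist_full_adder(a, b, cin):
    """Hypothetical post-implementation netlist with an injected systemic
    error: the (b AND cin) carry term has been lost."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin)   # bug: differs only when a=0, b=1, cin=1
    return s, cout

# Exhaustive comparison is feasible here: only 2^3 = 8 input vectors.
mismatches = [v for v in product((0, 1), repeat=3)
              if rtl_full_adder(*v) != netlist_full_adder(*v)]
print("differing input vectors:", mismatches)        # [(0, 1, 1)]

# A single escaping pattern is all a systemic error needs, and for a block
# with, say, 64 primary inputs an exhaustive sweep would need 2^64 vectors,
# far beyond what gate-level simulation can cover in any schedule.
print("vectors needed for 64 inputs:", 2**64)
```

The sketch finds exactly one differing vector; a handful of directed lab tests could easily miss it, whereas a formal equivalence check proves the absence of any such mismatch without enumerating the vectors at all.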

 

So, I see the use of EC growing in the FPGA community by augmenting existing processes. Augmenting is important for safety-critical work, as I don’t expect it to replace tried and tested approaches in the short term. This was the case in the early days of EC in ASIC, where gate-level simulation would continue to be run after “sign-off” whilst waiting for the first engineering samples. Checking a netlist for equivalence before committing to months of gate-level simulation has to be a good thing, doesn’t it? My view, which can be applied to all verification, is that the sooner you detect a problem, the sooner you can fix it. Ultimately this saves time and money (and your sanity!)

Live webinar: The elusive systemic error – equivalence checking for your FPGA

 

Discover why design teams are increasingly seeing the value of running equivalence checking during FPGA development to rapidly detect flow-integrity issues and pinpoint the root cause.

 

WEDNESDAY 1 JULY – 14:00-15:00 BST (15:00-16:00 CEST)