The Art of Successful ASIC Design
by bedford on 2003-05-23
source: Readings
Version 1.0
Abstract
This paper deals with the efficient handling of design projects and with bringing them to successful completion: not just completion, but completion on time and with as few silicon respins as possible.
The following pages outline some of the many guidelines that a responsible engineer can follow towards designing a successful ASIC. I have concentrated on Verification and RTL Simulation.
I intend to keep a chatty style for this paper, since none of my profs will be reading this.
1.Introduction
Successful ASIC design in today's state of the art of IC design means one thing: successful RTL design. Ideas of yore were translated onto breadboards. Ideas of today translate into RTL models. Now comes the all-important question. What is RTL?
2.What is RTL?
Simply put, it is the register transfer level. In what way is it different from the gate-level abstraction? In what way is it different from the transistor-level abstraction? The amount of detail the human sees differs in each case: the most is at the transistor level and the least at the RTL. Note one very important point: the correspondence between RTL, gate level and transistor level is exact. The correspondence may be one-to-many or many-to-one, but it is exact. THE FOCUS OF RTL ABSTRACTION IS ON CYCLE-BY-CYCLE, STATE-BY-STATE CORRECTNESS.
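To make this concrete, here is a minimal sketch of my own (not from any particular project) of what an RTL description looks like: it states what value a register takes on each clock cycle and leaves the choice of gates and transistors entirely to the downstream tools.

    // A simple RTL fragment: on every rising clock edge the register
    // 'count' takes a new value. Nothing is said about which gates or
    // transistors implement the adder and the mux; only the
    // cycle-by-cycle behavior is described.
    module counter (
        input  wire       clk,
        input  wire       reset,
        input  wire       enable,
        output reg  [3:0] count
    );
        always @(posedge clk)
            if (reset)
                count <= 4'd0;
            else if (enable)
                count <= count + 1;
    endmodule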
3.The Basic Principles of the Verification Process
DISCIPLINED USER PRINCIPLE: Designers who limit their degrees of freedom in writing RTL will encounter the fewest anomalies in tool behavior, for example in their synthesis tools.
Box.1
The modern design process involves multi-million-gate designs. Such a design process has been enabled by the successful evolution of synthesis tools. The use of these tools requires a certain amount of discipline on the part of the user, primarily because the various tools cannot all understand each other in totality: there exist areas of ambiguity. Limiting our degrees of freedom in design modeling leads to a more exact replication of our intended ideas. Agreement on a verifiable subset of the language is one such discipline. The verifiable subset suggested by Lionel Bening is outlined below; this subset might vary from project to project by agreement, but within a project it should be held constant. Box 2 shows an example of this subset.
always, assign, case, default, defparam, else, end, endcase, endfunction, endmodule, function, if, initial, inout, input, module, negedge, or, output, parameter, posedge, reg, tri, tri0, tri1, wire
Box.2
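To show what coding within such a subset looks like, here is a small, made-up module that uses only keywords from the list in Box 2:

    // A 2-to-1 registered multiplexer coded within a typical verifiable
    // subset: one clocked always block, a case statement and a
    // continuous assignment, nothing more exotic.
    module mux_reg (clk, sel, a, b, q);
        input        clk;
        input        sel;
        input  [7:0] a;
        input  [7:0] b;
        output [7:0] q;

        reg [7:0] q_int;

        always @(posedge clk)
            case (sel)
                1'b0:    q_int <= a;
                default: q_int <= b;
            endcase

        assign q = q_int;
    endmodule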
FUNDAMENTAL VERIFICATION PRINCIPLE: Understand the specification before beginning the implementation, in order to avoid unnecessarily complex and unverifiable designs.
Box.3
Never begin coding without a clear understanding of the design target. Ponder what design alternatives may exist. Do not go for a design just because your boss said so; there might be a better alternative. Concurrent development of tests by the verification team is impossible without a proper understanding of the verification target. Note that the "verification target" is generally greater than the "design target".
What are the questions a designer should ask?
1.What are the Block-to-Block Interface Requirements?
2.What are the Design Alternatives?
Doing RTL coding early on LOCKS THE PROJECT into specific implementation details, inhibiting designer creativity.
Summary of a Good Design.
1.Accept that your design is complex, so split it into manageable pieces. This improves verification efficiency, if I may call it so: you can parallelize the verification workload across many verification team engineers.
2.There has to be a "clear interface contract" between the designers of multiple blocks. It also helps the verification team in understanding the design, and it helps them assist the designers in the debugging process. The interface contract can also be called upon to play a role in the sign-off process, to ensure that no designer has left any loose ends.
3.Modular environments enable parallel development of block-level testbenches during RTL implementation.
Most of today's design work centres around interfaces. This is primarily because of the concepts of design reuse, hard macros (synthesized RTL), soft macros (vanilla, unsynthesized RTL) and IPs. These concepts are enabling SoC design as well as accelerating design cycles. The interface-based approach is of paramount importance in today's design philosophy.
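As a sketch of what a "clear interface contract" can look like in practice (the block and signal names below are hypothetical), the agreement between two block owners can be captured directly in the port declaration and its comments, which is exactly the document the verification team and the sign-off review can be held to:

    // Block-to-block interface contract for a hypothetical write channel.
    // The comments record the agreement that both block owners (and the
    // verification team) sign up to.
    module wr_channel (
        input  wire        clk,       // common clock, rising edge active
        input  wire        wr_valid,  // producer holds high for one cycle per word
        input  wire [31:0] wr_data,   // must be stable while wr_valid is high
        output reg         wr_ready   // consumer may drop this to apply back-pressure
    );
        // Placeholder behavior; the real ready/back-pressure policy is part
        // of the documented contract, not invented by either side later.
        always @(posedge clk)
            wr_ready <= 1'b1;
    endmodule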
RETAIN USEFUL INFORMATION PRINCIPLE: A Single process within a design flow should never discard information that a different process within the flow must reconstruct at a significant cost.
Box.5
This principle is central to all of software engineering. It simply asks: why do the same thing again and again? If you are doing that, then there is a communication breakdown somewhere, so figure out where the problem is and fix it.
ORTHOGONAL VERIFICATION PRINCIPLE: Functional behavior, logical equivalence and physical characteristics should be treated as orthogonal verification processes within a design flow.
Box.6
This principle means that functional behavior, logical equivalence and physical characteristics are independent of one another, at least when we are modeling the design. This compartmentalization helps designers efficiently manage complexity. For example, proving two models logically equivalent says nothing about whether either of them is functionally correct.
FUNCTIONAL OBSERVATION PRINCIPLE: A methodology must be established that provides a mechanism for observing and measuring specified functional behavior.
Box.7
The answer to "how much of the functionality have I covered?" is called COVERAGE. The above principle says that ASIC verification must involve a degree of white-box testing: the verification engineer must be allowed to probe into the design to a certain extent. This requires an understanding of the interfaces on his part, which in turn translates into a need for good documentation by the designer of the module concerned.
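As an illustration of this kind of white-box probing (the design and signal names below are made up), a testbench can reach into the design hierarchy and flag a problem the moment it occurs, instead of waiting for it to propagate to the chip pins:

    // A toy DUT with an internal counter the designer has documented.
    module my_block (input wire clk);
        reg [4:0] count;
        initial count = 0;
        always @(posedge clk)
            count <= count + 1;
    endmodule

    // Testbench: white-box check on the DUT's internal counter via a
    // hierarchical reference. The path dut.count is only usable because
    // the designer documented the signal and its legal range.
    module tb;
        reg clk;
        initial clk = 0;
        always #5 clk = ~clk;

        my_block dut (.clk(clk));

        always @(posedge clk)
            if (dut.count > 20)
                $display("%0t: internal count out of expected range (%0d)",
                         $time, dut.count);

        initial #500 $finish;
    endmodule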
COVERAGE
--------
A question every verification engineer has to answer is: "have I covered every aspect of the design?" When I started out in my career I thought this was a simple question to answer, but it turns out to be quite a formidable one, primarily because the verification target is always more involved than the actual design. This is especially so in today's SoC designs. Each individual block might be picture perfect, but together it is cacophony. A more in-depth understanding of the specifications, and especially of their interfaces, is required of verification engineers.
Summary of Questions during Coverage:
1.Have I covered every aspect of the Design?
2.Are all Aspects of the Design Being Verified?
3.Is one particular section of the Design being checked too much at the expense of another?
4.Can I authorize tape-out?
Box.8
Coverage Metrics
----------------
Janick Bergeron rightly says "managers love metrics and measurements". My boss tells the QA team, "show me a graph!" I guess it simplifies his life. Metrics certainly play a crucial role in the decision-making process. So the question comes: what are these "metrics"?
1.Ad-Hoc Metrics
2.Programming metrics
3.State machine and Arc Coverage metrics
4.User Defined Metrics
Ad-hoc Metrics:
1.Bug detection frequency
2.Length of simulation after last bug found
3.Total number of simulation cycles
The above are some examples of ad-hoc metrics. In general these metrics can give a false sense of confidence in our verification effort. I have seen verification engineers say that their confidence level was high based solely on the fact that they had not seen bugs for a few days. To me, no bugs for a few days might mean we are looking in the wrong place, or our verification environment is somehow masking them, or, worst case, we screwed up in our testcase development, which directly reflects our understanding of the specifications. The time given to verification engineers to digest the specification should therefore be substantial, to avoid these problems. "Haste at this time always makes waste."
Programming code metrics: These are the standard metrics in the verification process, and tools are available to measure many of them.
1.Line Coverage
2.Branch Coverage
3.Path Coverage
4.Expression Coverage
5.Toggle Coverage
The most important aspect of all these tools is that they only report that the code has been exercised; they assume you have checked its validity. That is, they say nothing about functional correctness. So 100% code coverage does not mean 100% observability (detection of errors) or 100% functional coverage.
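A tiny, contrived example of why this matters: the buggy line below is exercised by almost any test, so line coverage cheerfully reports it as covered, yet unless something actually compares dout against a reference, the bug is never detected.

    // The intent (per the spec) is dout = a - b, but the code adds.
    // Any stimulus that toggles a or b gives 100% line coverage on this
    // module while saying nothing about whether the result was ever
    // checked against expectation.
    module alu_frag (
        input  wire [7:0] a,
        input  wire [7:0] b,
        output wire [7:0] dout
    );
        assign dout = a + b;   // BUG: specification called for subtraction
    endmodule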
State machine metrics: This metric checks whether all the states have been visited. It in no way checks whether a particular state has been reached through a particular path; that is what arc coverage is for.
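For instance, in the made-up FSM below, a test that walks IDLE to RUN to DONE and back visits every state, so state coverage reads 100%, yet the abort arc from RUN back to IDLE has never been exercised; only arc coverage would reveal that.

    module tiny_fsm (
        input  wire       clk,
        input  wire       reset,
        input  wire       start,
        input  wire       abort,
        input  wire       finish,
        output reg  [1:0] state
    );
        parameter IDLE = 2'd0, RUN = 2'd1, DONE = 2'd2;

        always @(posedge clk)
            if (reset)
                state <= IDLE;
            else
                case (state)
                    IDLE:    if (start)  state <= RUN;
                    RUN:     if (abort)  state <= IDLE;   // this arc is easy to miss
                             else if (finish) state <= DONE;
                    DONE:    state <= IDLE;
                    default: state <= IDLE;
                endcase
    endmodule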
User-defined metrics: These metrics are evolved by the verification team to direct the verification process toward particular areas of concern. This cannot be done without a proper overview of the design.
PROJECT LINTING PRINCIPLE: To ensure productive use of design and analysis tools, as well as to improve communication between design engineers, project-specific coding rules must be enforced automatically within the design flow.
A linting tool is usually the designer's first line of defence against design errors. It is cost-effective and, importantly, time-effective. It therefore must be embedded directly in the design flow (in Makefiles, for example; Perl is a common mechanism to implement this). Doing this enforces a lint methodology early in the design cycle. In a way it buttresses the DISCIPLINED USER PRINCIPLE.
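As an example of the kind of thing a lint run catches long before simulation or synthesis does, here is a deliberately broken fragment of my own:

    // Two classic lint findings in one combinational block:
    // 1. 'b' is missing from the sensitivity list, so simulation and
    //    synthesis can disagree.
    // 2. 'y' is not assigned in every branch, so synthesis infers an
    //    unintended latch.
    module lint_bait (
        input  wire a,
        input  wire b,
        input  wire sel,
        output reg  y
    );
        always @(a or sel)      // lint: incomplete sensitivity list
            if (sel)
                y = a & b;      // lint: no else branch, latch inferred
    endmodule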
Project Simulation:
RTL SIMULATION TESTING IS INHERENTLY INCOMPLETE.
Design tests to exercise the logic architecture; use random tests to find unexpected corner cases. Broadly, there are two kinds of designs: processor-based designs and communication-based designs.
For processor-based designs the random cases might be high cache-miss rates, memory-addressing bank conflicts, and high no-op counts between instructions. For communication-oriented designs, high transaction rates and severely unbalanced transaction loads may bring out design errors.
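As a sketch of what random testing can look like in plain Verilog (a communication-flavoured example of my own; no real interface is implied), $random can be used to generate the bursty, unbalanced traffic that directed tests rarely produce:

    // Randomly stressing a hypothetical packet interface: heavy, unbalanced
    // traffic with short packets is exactly the kind of load that directed
    // tests rarely create.
    module random_traffic;
        reg        clk;
        reg        pkt_valid;
        reg  [7:0] pkt_len;
        integer    i;

        initial clk = 0;
        always #5 clk = ~clk;

        initial begin
            pkt_valid = 0;
            for (i = 0; i < 1000; i = i + 1) begin
                @(posedge clk);
                // roughly 7 cycles out of 8 carry a packet: a severely
                // unbalanced, high-rate load
                pkt_valid <= ({$random} % 8) != 0;
                pkt_len   <= {$random} % 64;   // short packets stress framing logic
            end
            $finish;
        end
    endmodule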
Project Simulation has two key Phases. The Debugging Phase and the Regression Phase.
Each phase has a certain important role. Verification team members have to prepare for the roles of each phase.
Debugging Phase: In this phase you want to catch bugs, so you need every tool that helps you catch them. And the most important tool is INFORMATION.
So in the Debugging phase we need
1.Access to Internal Signal Values.
2.Event Logging.
3.Fast turn around on Design Changes.
Regression Phase: In this phase you are checking that all is well: essentially checking your model against a golden model, and in general checking your results against golden results.
The regression phase is your whole-system check. What you need here is again INFORMATION, but information of a different kind. Carrying over all the information you used during the debugging phase will drown you in data and eat into your compute-farm resources.
So in the Regression phase we need
1.Limit access to internal signal values.
2.Reduce Event Logging.
3.Accept a slower turnaround on design changes.
The verification manager can thus opt for two verification models: one for human interface success during the debugging phase, and the other for enhanced CPU performance during regression.
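One common way to get both models out of a single source tree (a sketch of my own, assuming a simple `define-based switch rather than any particular vendor mechanism) is to fence the expensive visibility features behind a DEBUG flag:

    // Compiled with +define+DEBUG for the debugging phase, and without it
    // for regressions: full waveform dumping and verbose event logging in
    // one build, a lean fast-running model in the other.
    module tb_top;
        reg       clk;
        reg [3:0] dut_state;               // stand-in for a real DUT's state

        initial clk = 0;
        always #5 clk = ~clk;

        initial begin
            dut_state = 0;
            #200 $finish;
        end

        always @(posedge clk)
            dut_state <= dut_state + 1;

    `ifdef DEBUG
        // Debugging build only: full signal visibility and event logging.
        initial begin
            $dumpfile("debug.vcd");
            $dumpvars(0, tb_top);
        end

        always @(posedge clk)
            $display("%0t: dut_state=%0d", $time, dut_state);
    `endif
    endmodule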
VISIT MINIMIZATION PRINCIPLE: For best simulation (and any EDA tool) performance, minimize the frequency and granularity of visits.
Understanding the simulation tool always pays rich dividends. There are two ways a simulation tool can work: "event driven" and "rank ordered". An event-driven simulator only evaluates logic when input states change value. A rank-ordered simulator levelizes the logic during compilation and evaluates it in that rank order during simulation. Rank-ordering combinational logic prior to simulation greatly reduces the event-management overhead during simulation.
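A small way to see the same principle at the testbench level (my own illustration, not taken from the reference): the commented-out monitor below is visited on every value change of a busy combinational bus, while the second one is visited only once per clock cycle, which can be orders of magnitude fewer visits.

    module visit_demo (
        input wire        clk,
        input wire [31:0] busy_bus   // combinational, may toggle many times per cycle
    );
        // High-frequency visits: evaluated on every glitch of busy_bus.
        // always @(busy_bus)
        //     $display("%0t: bus=%h", $time, busy_bus);

        // Lower-frequency visits: sampled once per clock edge.
        always @(posedge clk)
            $display("%0t: bus=%h", $time, busy_bus);
    endmodule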
4.Afterthoughts
What are the prevalent challenges with current verification methodologies?
1.Designing unnecessarily complex block interfaces due to lack of proper specification.
2.Wasting process time identifying verification targets.
3.Maximizing verification coverage while minimizing test regressions.
4.Determining when the verification process is complete.
5.Observing lower level bugs during the simulation process.
What is the most common misconception?
A common misconception among many design engineers is the belief that specification is simply documentation, written simultaneously with the RTL code or as a last step in the design.
5. References
1.Lionel Bening and Harry Foster, Principles of Verifiable RTL Design, Kluwer Academic Publishers, 2000.
2.Janick Bergeron, Writing Testbenches: Functional Verification of HDL Models, Kluwer Academic Publishers, 2000.
3.Thomas D. Tessier, Re-thinking your Verification Strategies for Multimillion-Gate FPGAs.