Transaction-Level Modeling (TLM)-to-RTL Design Flow

by kailassenan on 2008-01-03
source: Copyright © 2008 Kailas Senan. All rights reserved.
As System-on-Chip (SoC) designs grow ever larger, design and verification flows are changing. A rich mix of features, increased software content, high intellectual property (IP) use and submicron implementation technology have semiconductor and system companies searching for new electronic system level (ESL)-based design flows.
An emerging trend is a transaction level modeling (TLM)-to-register transfer level (RTL) design flow, though a set of requirements needs to be addressed to ensure a successful transition to this new flow.
This article uses the generic term TLM to refer to a higher abstraction level model. Where necessary, it will be prefixed with cycle-accurate, cycle-approximate or functionally accurate to denote the accuracy level.

Drivers for change
The advancement of chip manufacturing technology, with process nodes moving from 130nm to 90nm to 65nm and even smaller geometries, has an impact on the chips on drawing boards today and renders an RTL-only flow ineffective. This is due to:

* SoC size: RTL simulation is too slow to fully analyze and verify the chip. This can be addressed through:

1. Hardware-based simulation: that is, emulation such as ZeBu (Zero Bugs) from EVE or acceleration
2. Raising the abstraction level: moving the design description to a higher level abstraction to perform design analysis and verification tasks
* SoC complexity: An RTL-only flow must deal with too many details to perform architectural analysis, and it requires the ability to manage massive amounts of data. Moreover, decisions can only be validated late in the process, after all of the RTL code is available. The way to address this is to:
1. Raise the abstraction level of the design description, and create more abstract models of the target chip early in the project
2. Ensure availability of matching RTL blocks for these models
* SoC software: Software content is increasing rapidly. Design teams need an accurate software execution engine early in the project to ensure that software is not the schedule bottleneck for product release. Typically, software developers need a simulation speed of a few million instructions per second (ips) and a complete model. These needs can be addressed in two ways:
1. Through the use of an abstract, yet accurate software model of the target chip or virtual prototype and the stimulus environment
2. Through the use of the actual RTL model mapped into a hardware box. Design teams use a transaction-level software model of the stimulus environment or use hardware interfaces

The semiconductor industry has responded to these challenges by applying the concept of platform-based design: an approach to rapidly create chips comprising a set of configurable application/domain-specific IP, on-chip buses and software. An example is the OMAP family of platforms from Texas Instruments.
While platform-based design addresses SoC complexity, verification and software remain bottlenecks, or worse. A new derivative product may be conceived in a relatively short time, while the completed chip, including embedded software, still has to be validated before signoff.
In wireless applications, where software has become the dominant factor impacting the product schedule, companies have spent millions to create software models, or virtual prototypes, of their application platforms, enabling them to distribute these models to hundreds of developers before actual hardware prototypes are available.

TLM is the "Right" Level of Abstraction
Different teams in a design project -- architects, verification engineers and software developers -- need high-speed, abstract models of the target platform/chip, each with different accuracy requirements. All work at an abstraction level above RTL, referred to here as transaction-level modeling (TLM):

* Verification: cycle-accurate modeling (~ 100k cps)
Since TLM results are compared against RTL code, and RTL code needs to be included in the simulation, models need cycle-accurate interfaces.
* Architects: cycle-approximate (~1M ips)
Since architects need an accurate hardware model and must execute a decent amount of software, they often use cycle-approximate models, which run faster than cycle-accurate models and are quicker to develop.
* Software developers: functionally accurate (10-100M ips)
Software developers are not interested in the hardware model. They want an execution engine or virtual prototype that corresponds to the final hardware or prototype.
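The distinction between these accuracy levels can be sketched in plain C++ (the names `FunctionalMem` and `ApproxTimedMem` are illustrative only; a real flow would use SystemC models): a functionally accurate model returns correct data with no notion of time, while a cycle-approximate model layers an estimated latency on top of the same behavior.

```cpp
#include <cstdint>
#include <map>

// Illustrative sketch: the same memory read modeled at two accuracy levels.
struct FunctionalMem {
    std::map<uint32_t, uint32_t> mem;
    // Functionally accurate: correct data, no timing at all.
    uint32_t read(uint32_t addr) { return mem[addr]; }
};

struct ApproxTimedMem {
    FunctionalMem core;
    uint64_t cycles = 0;  // running cycle estimate
    // Cycle-approximate: same data, plus an estimated latency per access
    // (a fixed 4-cycle cost here; a real model would vary it per burst, etc.).
    uint32_t read(uint32_t addr) {
        cycles += 4;
        return core.read(addr);
    }
};
```

A cycle-accurate model would go one step further and reproduce the exact per-clock pin behavior of the RTL, which is why it is the slowest and most expensive of the three.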

Since these three groups have little interaction, multiple, not necessarily mutually consistent, models are typically developed. The architect develops or buys a cycle-approximate model, while the software developer subcontracts for a functionally accurate virtual prototype.

The cost of TLM models
Developing and maintaining TLM models is often perceived as an added cost. Typically, cycle-accurate models are the most expensive to develop, followed by cycle approximate and then functionally accurate. In Figure 1, the approximate "useful life" of a model is shown as part of the product development cycle. It highlights the window of opportunity for these models, as well as the importance of availability of these models.
Since both cycle-approximate and cycle-accurate models are based on a structural modeling style (buses, processors and peripherals are modeled as individual components and connected in SystemC), they are reusable and typically are developed by IP vendors. Cycle-accurate models, specifically, require detailed information about the architecture and may take up to 30% of the cost of developing the RTL code.
Conversely, SoC functional models/virtual prototypes are built from separate software modules and are delivered as a monolithic software model: a single executable on which software can be developed and debugged.
Complete functional platforms are faster and cheaper to develop than cycle-accurate/approximate models. Companies that develop these virtual prototypes have created their own proprietary modeling styles to ensure high simulation speed and to automate the building of comprehensive platforms for timely availability.
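The structural style described above can be sketched in plain C++ (this mimics, but is not, the SystemC API; all class names here are hypothetical): components are written independently and bound together at platform-assembly time, which is what makes them reusable across projects.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch of structural TLM assembly: initiator -> bus -> target.
struct Target {                        // anything that accepts transactions
    virtual uint32_t transport(uint32_t addr) = 0;
    virtual ~Target() = default;
};

struct Ram : Target {                  // a peripheral/memory component
    std::vector<uint32_t> data = std::vector<uint32_t>(256, 0);
    uint32_t transport(uint32_t addr) override { return data[addr % 256]; }
};

struct Bus : Target {                  // on-chip bus: forwards transactions
    Target* slave = nullptr;           // binding happens at assembly time
    uint32_t transport(uint32_t addr) override { return slave->transport(addr); }
};

struct Cpu {                           // initiator: issues transactions
    Target* port = nullptr;
    uint32_t load(uint32_t addr) { return port->transport(addr); }
};
```

Swapping the `Ram` for a different target, or the `Bus` for a more detailed interconnect model, leaves the rest of the platform untouched, which is the reuse argument made above.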
Reuse of models across multiple projects is a must and major semiconductor and systems companies are building comprehensive TLM-centric modeling infrastructures to garner TLM's benefits.

[Figure 1: The approximate useful life of TLM models within the product development cycle.]

The semiconductor industry is at a point where many realize that standardization of TLM modeling across multiple accuracy levels is important. While OSCI and SPIRIT are focused on establishing industry-wide TLM-related standards, many proprietary incarnations of TLM exist because they offer a competitive advantage.

The TLM+RTL SoC design flow
The typical SoC used for high-volume, consumer applications contains multiple programmable elements, such as central processing units (CPUs) and digital signal processors (DSPs). These chips increasingly will be designed and validated at the TLM level, using a mix of C for functional blocks and SystemC for on-chip interconnect, before being implemented at the RTL. Functionally accurate virtual prototypes will be created to ensure that software developers have early access to the platform to develop the embedded software, including applications.
While software-based RTL simulation no longer provides the required performance, the SoC design flow is similar to RTL-based design. That's because it is not only a TLM flow, but a TLM+RTL flow focused on:

* Ensuring availability of a high-speed execution platform of the target platform/chip early in the project to:
1. perform architecture and software optimization (hardware/software balance)
2. perform pre-silicon software development (large-scale software development)
3. perform full-chip functional verification (early testbench development, reference model)

* Ensuring a predictable path from the TLM model to the RTL model of the SoC

This also means that tasks previously done at RTL will be done at the transaction level, including system assembly and functional verification. New tasks are emerging as well to ensure good correlation between TLM and RTL.

Different design teams have different requirements. For the wireless market, the ability to distribute a large number of virtual prototypes to many software developers is essential, while accuracy is critical to the automotive industry. These needs may change within a single project.
Figure 2 captures the commonality of desired target flows. The article analyzes how different user requirements and constraints, such as IP availability, affect how designers execute on this flow and derive a set of requirements specifically for the simulation environment.
Moving forward, chip and system assembly, analysis and IP selection will be performed using a system/transaction-level (STL) model of the chip, a goal of SPIRIT 2.0, while chip-level functional verification will be performed at the TLM level. The roadblocks are transaction-level IP availability and transaction-level modeling standards.
Since system validation is done at the start of the design process using high-speed transaction-level models of the chip, it appears that the role of emulation will diminish since most software development will be performed using virtual prototypes. However, there are two reasons why the role of emulation could actually increase:

* Not all modules will have sufficiently accurate high-level models; some will exist only as RTL representations. Emulation technology linked to a virtual prototype is required to maintain the prototype's speed.

* Since the final RTL is too large, and the tests too long, to run chip-level verification on a software simulator, emulators are needed to run the same tests that ran on the virtual prototype against the actual RTL code.

Now that software can run early in the project cycle, tools that analyze and optimize the architecture for performance and power will be highly valued. Specifically, power estimation tools that link forward to the physical implementation will be in demand.
Support for the software developer has become an imperative. The concept of a "virtual platform model," a software model of an actual semiconductor hardware platform, has gained popularity. Design teams need RTL IP to be accompanied by matching system-level IP. This is done with processors and DSPs where IP vendors provide system-level models of their processors. It extends to other blocks as well, including memory/cache controllers, USB, PCI-X and DMA controllers.

* IP providers will need to ensure that the RTL models of an IP block are "equivalent" to the system/transaction-level model. IP providers that don't supply the corresponding system-level models will have an increasingly hard time selling their IP.

* The same holds true for buses and protocols. Although more of an industry issue, standardization of system/transaction-level interfaces is moving forward and designers will increasingly be able to generate the corresponding RTL from the system level representations of these interconnect systems.

With the dominance of C/SystemC-based modeling, C/SystemC-based synthesis will be viewed as an essential component in this flow. Customers increasingly develop a C/SystemC model of any new module first, and need/want an automated path from C/SystemC to hardware. Clearly, such a capability dramatically increases their productivity.
Even though a relatively large part of the final chip will originate from IP blocks, new modules will often be required to gain a competitive advantage. In fact, having such a synthesis capability may increase the absolute number of "originally designed gates" in future chips.
By adding a level of abstraction, TLM-to-RTL equivalence checking becomes an imperative and will be used to:

* Ensure the functional equivalence between a handcrafted RTL block and its corresponding C function

* To have a functional check on TLM-to-RTL synthesis tools.
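In its simplest, simulation-based form, such a check drives the reference C function and a cycle-level model with identical stimulus and compares the end results. The sketch below (plain C++; the accumulator example and all names are hypothetical, and production equivalence checkers use formal methods rather than simulation) illustrates only the comparison idea:

```cpp
#include <cstdint>
#include <vector>

// Reference TLM behavior: compute the whole result in one call.
uint32_t tlm_accumulate(const std::vector<uint32_t>& in) {
    uint32_t sum = 0;
    for (uint32_t v : in) sum += v;
    return sum;
}

// Stand-in for the RTL block: consumes one input per clock cycle.
struct RtlAccumulator {
    uint32_t acc = 0;
    void clock(uint32_t in) { acc += in; }  // one clock edge
};

// Drive both models with the same stimulus and compare end states.
bool equivalent(const std::vector<uint32_t>& stimulus) {
    RtlAccumulator rtl;
    for (uint32_t v : stimulus) rtl.clock(v);   // cycle-by-cycle execution
    return rtl.acc == tlm_accumulate(stimulus); // untimed reference result
}
```

Note that the comparison is made on final state, not per cycle: the TLM side has no clock, so the check must be phrased at transaction boundaries.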

A unified debugging environment is required. With design representations on multiple levels of abstraction, combining hardware and software, design comprehension and simulation debug will be difficult unless a cohesive, unified debugging environment that spans ESL, RTL and analog is in place. Ideally, such a debug system should connect to the various simulation engines (SystemVerilog, VHDL, SystemC, C) and extend to emulation and prototyping solutions. That's because unexpected results mean something in the flow has gone awry, and the first debug step, localizing the problem, requires visibility into the complete model.

Enablers for the TLM to RTL SoC design flow
The way to accelerate industry adoption of a TLM-to-RTL design flow is through a complete ecosystem of tools and IP that incorporates multiple abstraction levels. Key elements of this ecosystem are:

* Modeling standards for IP. With rapid adoption of 90nm flows and the move to 65nm and below, reuse of proven blocks plays a key role in creating new design platforms. IP refers to RTL IP, though the notion of IP is expanding to reusable modules that have multiple consistent representations -- system/transaction-level models that simulate fast and are consistent with a synthesizable RTL model.

IP is "fuel" for design/simulation engines. If there is no fuel with the right characteristics, even the most advanced engine technology will fail. Although a number of IP companies are making progress in providing IP on multiple levels of abstraction, the missing-model problem prevents adoption of such a flow. For semiconductor and system companies, the charter is clear -- demand that third-party IP has consistent system and RTL representations, and establish internal IP programs to provide the same for proprietary IP. The standardization bodies SPIRIT and OSCI have started this effort. Other standards, such as SCE-MI, a transaction-based interface, and standards to interface with software debuggers are required as well.

Without widely adopted standards, progress toward the right "fuel" will be slow and adoption of the TLM-to-RTL SoC design flow will be slower.

* High-speed, multi-level simulation. The ability to simulate a complete SoC above 10M cps is key, whether design teams use TLM descriptions, a mix of TLM and RTL, or a complete RTL representation. Software-only RTL simulation does not provide the required speed.

Instead, it is the careful integration of transaction-based software modeling and hardware-based simulation engines, through synthesizable transactors, that provides the solution design teams need. They will weigh the modeling effort against the cost of hardware to optimize the verification flow. A key element in this solution is synthesizable models that bridge abstraction levels. Inclusion of software-based RTL simulation through existing Application Programming Interfaces (APIs) remains an additional requirement.
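A transactor's job can be sketched in a few lines: it expands a single transaction from the TLM side into per-cycle pin activity for the RTL side. The sketch below (plain C++; the two-phase write protocol and all names are invented for illustration, and a real transactor would be written in a synthesizable subset and driven by a clock) shows the abstraction bridge in its simplest form:

```cpp
#include <cstdint>
#include <vector>

// Pin values driven on one clock cycle of a (hypothetical) shared bus.
struct PinState {
    bool valid;      // transfer strobe
    uint32_t bus;    // multiplexed address/data lines
};

// Expand one write transaction into cycle-level pin activity:
// address phase, then data phase, then return to idle.
std::vector<PinState> write_transactor(uint32_t addr, uint32_t data) {
    return {
        {true, addr},   // cycle 0: drive address
        {true, data},   // cycle 1: drive data
        {false, 0}      // cycle 2: idle
    };
}
```

The reverse direction (sampling RTL pins and reassembling them into transactions for the TLM testbench) is the other half of the same bridge.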

* Unified debugging methodology. Designing SoCs with multiple processors modeled on multiple levels of abstraction, combining advanced hardware with extensive software components, is a challenge. Debugging simulation models, especially when a portion of the design originates with IP blocks and various software modules that have not been designed by the designer running the simulation, will be even more challenging.

Design comprehension tools with browsers that let the designer explore the design at user-selected abstraction levels are needed. Such tools must address both the static description and the dynamic behavior of the design, and must be able to switch from a transaction view to a detailed RTL view. They let designers find the root cause of a problem quickly, even when that means diving into unfamiliar code. Moreover, these tools need to work with a variety of simulation engines and languages, specifically C, C++, SystemC and SystemVerilog. To use the "fuel" analogy once again: once the fuel is available and the engines are in place, design teams need an easy-to-use control panel for visibility while the engines are running, and, if there is a problem, a way to quickly pinpoint its cause.

Once these elements are integrated with other ESL tools for performance and power predictions and those that can account for physical/placement effects of the implementation, design teams will have an effective design flow.
Although high-level synthesis has been viewed by many as true "ESL design," except for data flow-centric applications and interface/infrastructure synthesis it will not have the same impact that RTL synthesis had 15 years ago. Most application-centric platforms reuse proven RTL modules. Industry expectations put the percentage of gates originating from new designs at less than 10%, not including on-chip bus (OCB) or network-on-chip (NoC) logic.

While the next-generation TLM-to-RTL design flow isn't an elusive dream, it will become reality only when the semiconductor industry focuses on creating and delivering IP with consistent views on different, well-defined levels of abstraction. The EDA industry must provide high-performance, multi-level simulation environments to support these abstraction levels, incorporating advanced, multi-level debug methodology. Once they are in place, the conditions will be perfect for the adoption of such a flow.