Newsletter March 2010
Open-Source hardware and software reuse
Faced with the task of porting driver or firmware code from one platform to another, most engineers are less than enthusiastic. However, consider that the underlying hardware will also be reused in the new platform, and the task becomes far easier to approach, a walk in the park even. This is the fantastic reality behind open-source hardware: it also means greater ease when porting driver and firmware code.
Hardware design and development, thanks to advancements in EDA tools and design techniques, is no longer the bottleneck in heterogeneous SoC design. The greatest hurdle now facing those bringing heterogeneous platforms (which is to say, every modern SoC) to market is software and firmware development. However, thanks to the idea behind open-source hardware, when transferring IP modules from one design to another, the work needed to implement the driver on the new platform can be as simple as altering a few headers defining the memory mapping. This, combined with the recent enthusiasm for developing open-source platforms for embedded systems, means that almost the entire solution is catered for in a portable, open-source fashion.
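As a minimal sketch of this idea (the UART register names, offsets and base address below are hypothetical illustrations, not taken from any specific OpenCores IP), porting such a driver can reduce to editing a single base-address definition while the driver routines stay untouched:

```c
#include <stdint.h>

/* Hypothetical memory map for a reused open-source UART core.
 * Porting the driver to a new SoC ideally means changing only
 * UART0_BASE; the register offsets are fixed by the IP itself. */
#define UART0_BASE   0x90000000u   /* platform-specific: edit when porting */
#define UART_TXDATA  0x00u         /* transmit data register offset */
#define UART_STATUS  0x04u         /* status register offset */
#define UART_TX_BUSY (1u << 0)     /* status bit: transmitter busy */

/* Compute a pointer to a memory-mapped register of the core. */
#define UART_REG(off) ((volatile uint32_t *)(uintptr_t)(UART0_BASE + (off)))

/* The driver code itself never changes between platforms. */
static inline void uart_putc(char c)
{
    while (*UART_REG(UART_STATUS) & UART_TX_BUSY)
        ;                          /* spin until the transmitter is idle */
    *UART_REG(UART_TXDATA) = (uint32_t)c;
}
```

Because the offsets travel with the IP core, recompiling the same C source against a header with a new base address is all a new platform requires.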
SoCs containing at least one microprocessor and an array of peripherals, reusing cores already proven in other designs, are the best candidates for taking advantage of this. With the improvements in efficiency of everybody's favourite portable compiler collection, GCC, most firmware and low-level driver software is now written in C, with potentially only the most performance-critical sections implemented in architecture-specific assembler. This makes drivers eminently portable, although of course they remain compatible only with the hardware they were written for.
Leveraging open-source hardware thus offers a two-for-one deal of functionality and convenience, in that the driver or firmware will most likely be immediately usable on any given platform or architecture.
Doubtless, when experimenting with a new architecture, the task of bringing across the set of peripherals relied upon in previous designs seems like more trouble than it's worth. Now consider that task when the RTL and driver code are all in highly portable forms, and it's essentially half the headache. There's nothing better than knowing that when the exact same hardware is in place, you can use the exact same C driver code, despite the design being based on a completely different architecture.
The author's experience of this comes from being tasked with taking a suite of IP available at OpenCores from one design to another, each based on an entirely different architecture and microprocessor. The RTL was easily implemented and configured, but the best part was not having to change a single line of driver code, and watching it work on the first attempt.
There are always problems inherent in aiming for software reuse; however, the combination of reusable IP and driver software essentially delivers both at once. Considering the attention software development receives in modern designs, it's good to know that projects using highly portable open-source IP and low-level drivers start off on an effective and dependable note.
We are glad to share our experience of complex SoC design, based on OpenCores IPs.
Article by Julius Baxter, ORSoC AB
vMAGIC – Automatic Code Generation for VHDL
Perhaps you know the situation where you find yourself adapting an old design for the thirty-second time, to reuse it in yet another project, just because generics can’t express what you want to do. Or you want to provide an IP core with a lot more configuration options than you can sensibly handle with VHDL generics. Or you need to extract information from your designs and the existing tools just don’t do the trick. And you think there must be some kind of solution for this… and there is.
We were facing the very same problems in 2006 when we were creating a very general test framework for FPGA-based digital hardware. There was a generic way to connect any design (in hardware) to a number of simulators, but generating the appropriate interface and bus decoder was very tedious work, especially when optimization techniques were involved. We therefore needed a tool which could read an existing VHDL design and create interface designs to "wrap" the functionality, again in VHDL. Thus the vMAGIC library was born; it now integrates a complete VHDL parser, a high-level API to analyze/modify/create the code, and a VHDL writer for code generation purposes.
The vMAGIC design flow usually, but not necessarily, comprises three steps. We briefly describe the flow using the bus decoder example: imagine a situation where you have an FPGA environment, which is connected to the host using some kind of bus interface. Now you want to be able to create bus decoders for every new design without writing the VHDL. Instead you build a vMAGIC application: In the first step, your application reads the user design and a template containing static parts of the bus decoder. The parser creates an intermediate format which can easily be accessed using the vMAGIC API, allowing for operations such as entity.getPort().getSignals() or architecture.add(new Process()). Your application now analyzes the user design and creates an appropriate bus decoder; as you have used a template VHDL file, only the parts which are not already in the template have to be created using vMAGIC. In the last step the intermediate format is transformed back to readable VHDL code, such that standard synthesis tools can process your design.
The idea of using Java to create VHDL might seem strange at first, but there are a number of advantages in using it:
- Powerful: Java delivers a powerful environment for easily creating any kind of structure. Imagine a high-performance, hard-coded, pattern-matching algorithm: writing a generator which takes, e.g., a regular expression as input will be much easier than coding a couple of these structures by hand
- Correctness: Once your well-written generator produces correct designs, it will do so for all time and for all designs. No more searching for silly errors in your cores
- Fast: Obviously it takes some time to implement a code generator, but once you have done that, generating new hardware is a matter of seconds
And there is more: we are currently working on a VHDL Linter based on vMAGIC, an XML input/output plugin and ultimately a powerful VHDL editor to give you all the features of the IDEs you know from the software realm. And all that for free under GPL/LGPL.
The project is hosted on SourceForge at http://vmagic.sf.net.
Update from OC-Team
This topic gives you an update on what has been "cooking" in the OpenCores community during the last month.
This month's activities:
- Added OpenCores Certified Projects.
- No issues
Our message to the community:
- Please make sure that all design files, including documentation, are stored in the project's SVN repository. The "Downloads" page is only meant for pictures or documents that are intended to be visible on the "Overview" page. No design files are allowed on the "Downloads" page.
Here you will see interesting new projects that have reached the first stage of development.
High-throughput and low-area AES core
This core can reach more than 2 Gbps throughput.
Gate count is around 35k.
Development status: Stable
Mar 16, 2010: done
Mar 8, 2010: done!
Mar 6, 2010: All files are added
Mar 4, 2010: Updated some information
The project presents an open-source implementation of the 512-bit RSA algorithm. This is a reduced version of a full RSA crypto-core capable of FIPS certification.
Development status: Stable
Mar 9, 2010: Project uploaded
Mar 9, 2010: First import
The uTosNet framework (pronounced 'microTosNet') aims at providing a very fast method for interfacing physical components, such as motor drivers, ADCs, encoders, and similar, to applications on a PC. The framework is based on the Node-on-Chip architecture (link to paper coming).
Development status: Beta
Mar 19, 2010: Initial release uploaded
Feb 25, 2010: Initial description provided
Open-source EDA design-for-reuse toolset.
Development status: Alpha
Feb 2, 2010: added description
Super-FPGA with time in its architecture
By using time as a third dimension, the American company Tabula has developed an FPGA with denser logic and memory, providing up to four times more DSP performance. The company has yet to launch a single product, but has still managed to attract close to $150 million in venture capital.
For seven years, Tabula worked on its new FPGA architecture, which has now been presented publicly. Behind the company, which unlike many other little-known startups already has 100 employees and 80 patents, are chief engineer Steve Teig, an EDA celebrity and, among other things, former research director at Cadence, and CEO Dennis Segers, a Xilinx veteran.
The chip can, thanks to its associated compiler, dynamically reconfigure both memory and logic at GHz speeds. A major point is that connections between logic blocks can be much shorter than in conventional two-dimensional FPGAs.
"Nearly 90 percent of the area in an FPGA is used for wiring. That drives up size and cost, and also limits performance and the chance of a design achieving its required timing. To reach breakthrough capacity at reasonable cost, the wiring has to be more efficient, and that's what we've done," says Steve Teig in a statement.
The architecture has been named Spacetime, and the company now intends to develop circuits based on it, named 3PLDs (3D Programmable Logic Devices). They will be made in ordinary CMOS processes at 40 nm; the third dimension does not refer to stacked construction in any physical direction, but to time. In terms of silicon, these circuits are as flat as any others.
The company describes the architecture as a series of layers, or folds. Each fold executes a fraction of the desired function and stores its data locally. When some or all folds are reconfigured, the locally stored data are used to perform the next part of the function. By rapidly reconfiguring to execute the different parts of each function, a Spacetime circuit can implement a complex design with significantly fewer resources than normal 2-D circuits require.
With this architecture, Tabula claims to reach 2.5 times higher logic density, twice the memory density, nearly three times as many memory ports, and up to four times more DSP performance.
A big benefit Tabula highlights is that designers can use their standard EDA tools. The proprietary compiler works from standard RTL code; no modification is required.
Breaking into the lucrative FPGA market is not done in a trice. Despite numerous attempts - almost 50 companies have taken more or less the same approach - the market is still essentially shared between Xilinx and Altera, with Lattice a distant third. And Tabula is not alone in bringing a new architecture: the young Achronix started production of its 1.5 GHz FPGA this past winter, and SiliconBlue, which focuses on low-power FPGAs, has also recently launched circuits.
Tabula has succeeded in attracting renowned investors: close to $150 million has been invested in the company by venture capitalists such as Benchmark, NEA, Greylock and Crosslink. The ambition is to become the industry leader.
Besides Steve Teig and Dennis Segers, the company employs a number of other people with backgrounds at Xilinx, Altera and several EDA companies.
So far the company has only developed prototypes; these, however, show that the architecture performs as intended. Tabula will not say when the first chips will be launched on the market.
Published by Elektroniktidningen at www.etn.se/50838