Details

Name: stereo_vision_core
Created: Nov 2, 2024
Updated: Nov 4, 2024
SVN Updated: Nov 4, 2024
Bugs: 0 reported / 0 solved

Other project properties

Category: Coprocessor
Language: VHDL
Development status: Stable
Additional info: Design done, FPGA proven
WishBone compliant: No
WishBone version: n/a
License: LGPL

Stereo Vision Core

This is an independent implementation of a Stereo Vision Core accelerator following the architecture previously published by Wade S. Fife. Here you will find the hardware architecture followed by our implementation, which adopts the stream-processing computation approach described by Donald G. Bailey. The accelerator architecture uses the Census Transform (CT) combined with the Sum of Hamming Distances (SHD). In detail, the implemented accelerator uses a Census Transform with 50% sparsity.
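To make the matching pipeline concrete, below is a minimal software sketch of the CT + SHD idea using NumPy and SciPy. The checkerboard pattern used here for the 50% sparsity and the window sizes are illustrative assumptions, not the core's actual parameters, which are set through its VHDL generics.

import numpy as np
from scipy.ndimage import uniform_filter

def sparse_census(img, win=5):
    """Census transform keeping ~50% of the window positions (checkerboard)."""
    h, w = img.shape
    r = win // 2
    offsets = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
               if (dy, dx) != (0, 0) and (dy + dx) % 2 == 0]
    census = np.zeros((h, w), dtype=np.uint32)
    for dy, dx in offsets:
        # np.roll wraps at the image borders; acceptable for a sketch.
        neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        census = (census << np.uint32(1)) | (neighbor < img).astype(np.uint32)
    return census

def shd_disparity(census_l, census_r, max_disp=16, corr_win=7):
    """Per pixel, pick the disparity minimizing the Sum of Hamming Distances."""
    h, w = census_l.shape
    best_cost = np.full((h, w), np.inf)
    disparity = np.zeros((h, w), dtype=np.uint8)
    for d in range(max_disp):
        # Hamming distance = popcount of the XOR of the two census words.
        xor = census_l ^ np.roll(census_r, d, axis=1)
        ham = np.unpackbits(xor.view(np.uint8).reshape(h, w, 4), axis=2).sum(axis=2)
        # Aggregate the per-pixel distances over the correlation window (SHD).
        cost = uniform_filter(ham.astype(np.float64), size=corr_win)
        disparity = np.where(cost < best_cost, d, disparity).astype(np.uint8)
        best_cost = np.minimum(cost, best_cost)
    return disparity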

It is worth noting that the original RTL description was developed on VHDL back in 2016 as part of my Master's thesis (You can check it here the Spanish version, there is no English version yet). Now, this version also includes additional scripts that convert the VHDL code into Verilog using Yosys with the Ghdl plugin. The design is fully parametrizable and synthesizable. The accelerator has been implemented and evaluated on FPGAs, but such deployment is not part of this project.
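For reference, the conversion performed by those scripts boils down to loading the ghdl-yosys-plugin inside Yosys and writing the elaborated design back out as Verilog. A minimal sketch of that step follows; the source file list and top-level entity name are placeholders, not the project's actual ones.

import subprocess

# Placeholder source list and top-level entity name.
vhdl_sources = ["src/stereo_vision_core.vhd"]
top_entity = "stereo_vision_core"

# '-m ghdl' loads the ghdl-yosys-plugin so Yosys can elaborate VHDL;
# write_verilog then emits a single Verilog RTL file.
yosys_cmds = "ghdl --std=08 {} -e {}; write_verilog stereo_vision_core.v".format(
    " ".join(vhdl_sources), top_entity)
subprocess.run(["yosys", "-m", "ghdl", "-p", yosys_cmds], check=True)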

We prepared the project with scripts that automate the simulation setup using ModelSim. The following steps are required to simulate the accelerator.

Link to the repository on GitHub: https://github.com/divadnauj-GB/stereo_vision_core

System Requirements

  • Ubuntu >=20.04
  • Python >=3.6
  • ModelSim or QuestaSim
  • OSS CAD Suite (Yosys and GHDL)
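Before running anything, a quick sanity check that the required tools are on the PATH can save time. A small sketch follows; the executable names are assumptions (ModelSim and QuestaSim both ship vsim, and the OSS CAD Suite provides yosys and ghdl).

import shutil
import sys

# Executable names assumed for illustration.
required = ["python3", "vsim", "yosys", "ghdl"]
missing = [tool for tool in required if shutil.which(tool) is None]
if missing:
    sys.exit("Missing tools on PATH: " + ", ".join(missing))
print("All required tools found.")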

How to use this repository

1. Get the design files

# Clone the repository
git clone https://github.com/divadnauj-GB/stereo_vision_core.git
cd stereo_vision_core

2. Run the simulation

There are two ways of simulating the accelerator. The first simulates the original RTL description from the VHDL design files. The second automatically converts the VHDL design into a single Verilog RTL file using the ghdl-yosys-plugin for Yosys (we created a script for that purpose); this new Verilog file is then simulated using the same evaluation testbench.

For the VHDL simulation, execute the script run_stereo_simulation.py:

python3 run_stereo_simulation.py

For the Verilog conversion and simulation, execute the script run_stereo_simulation_verilog.py:

python3 run_stereo_simulation_verilog.py
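Under the hood, both scripts follow the usual ModelSim batch flow: create a work library, compile the sources, and run the testbench in console mode. A rough sketch of that pattern is shown below with placeholder file and testbench names; the actual scripts also manage the full file list and the image I/O.

import subprocess

# Placeholder names; the real scripts manage the complete file list.
sources = ["src/stereo_vision_core.vhd", "tb/tb_stereo_vision.vhd"]

subprocess.run(["vlib", "work"], check=True)             # create the work library
subprocess.run(["vcom", "-2008"] + sources, check=True)  # compile the VHDL sources
subprocess.run(["vsim", "-c", "tb_stereo_vision",        # batch-mode simulation
                "-do", "run -all; quit -f"], check=True)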

3. Results visualization

After executing the simulation scripts, you need to wait some time for the accelerator results. The VHDL simulation takes around 3 minutes and the Verilog simulation around 30 minutes; these timings were obtained on a server with 256 cores and 128 GB of RAM. The Verilog simulation takes significantly longer because, during the conversion, Yosys elaborates the original VHDL into basic units (i.e., registers, muxes, adders, multipliers, etc.), significantly increasing the number of objects to simulate compared with the original VHDL description, which keeps several components as behavioural descriptions.

When the simulation ends, you will obtain a new image called Disparity_map.png, which shows the accelerator results. The image is rendered in grayscale: lighter pixels represent objects closer to the cameras, while darker pixels correspond to objects located farther away in the scene or to regions where the disparity is undefined.
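If you want to reproduce that rendering from raw disparity values, the mapping is a simple normalization to 8-bit grayscale: a larger disparity means a closer object, hence a lighter pixel. A sketch follows; the input file name is hypothetical, since the project's scripts write Disparity_map.png themselves.

import numpy as np
from PIL import Image

# Hypothetical raw disparity values, one row of numbers per image row.
disparity = np.loadtxt("disparity_values.txt", dtype=np.float64)

# Normalize to 0..255: larger disparity -> closer object -> lighter pixel.
gray = (255.0 * disparity / disparity.max()).astype(np.uint8)
Image.fromarray(gray, mode="L").save("Disparity_map.png")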

(The original page shows a table of results: for each of the Tsukuba, Cones, and Teddy stereo pairs, the left input image, the right input image, and the resulting grayscale Disparity_map.)

The URL of the SVN repository is: https://opencores.org/websvn/listing/stereo_vision_core/stereo_vision_core