My solution to this problem so far has been to build my own bus infrastructure. So far, I have most of the pieces in place for both Wishbone pipeline and AXI4. I have bridges from Wishbone classic to Wishbone pipeline, from Wishbone pipeline to and from AXI4 or AXI4-lite, from AXI4 to AXI4-lite and back again, and from AXI3 to AXI4 (just not AXI4 to AXI3--yet). In my last design with Qsys, I tested and verified my design using Wishbone, then stuffed a formally verified bridge in place to get the design to work with Avalon. This had a couple of problems:

1) Every bridge adds latency. On a Cyclone V, that also means you lose throughput, since Qsys only issues one request at a time. Worse, I lost so much throughput that the design didn't meet customer expectations. (Oops!)

2) My "formally verified" Avalon to Wishbone bridge didn't work at first, causing the design to lock up on the first write access.

3) I messed up putting the bus together and connected some of the wrong wires to each other.

The net result was that, when these "formally verified" cores were composed together, the composition was no longer verified. I suppose the good news is that, having made all of these mistakes, I've learned from them and I'm still standing.
That solution isn't all that satisfying to me, so ... I'm trying to do better. My next attempt is going to involve: 1) using the ZipCPU instead of the ARM (at least for simulation, and certainly instead of a BFM), 2) using AXI instead of Wishbone (yes, the ZipCPU can now speak either Wishbone or (mostly) AXI), 3) using my own AXI infrastructure (to get rid of the bridges), and 4) using AutoFPGA to compose the design together and handle addressing requirements (instead of Qsys).