...I'm back to simulating erosion again...

This page summarizes the projects mentioned and recommended in the original post on /r/proceduralgeneration

  • SoilMachine

    Advanced, modular, coupled geomorphology simulator for real-time procedural terrain generation.

  • Layered Terrain: (instead of a 2.5D heightmap, without voxels) Lets me model different soil properties, multiple water tables, ground-water content via porosity, natural aquifers, compression and sedimentary rock, etc. It is basically the next step toward simulating full geological layers through erosion. I already have a public WIP prototype.

  • SimpleHydrology

    Procedural Hydrology / River / Lake Simulation

  • This is all GPU-based. Article here and source code here. The code has since evolved beyond the original article, but I haven't written an update yet.

  • lily-pad

    Real-time two-dimensional fluid dynamics simulations in Processing, initiated by Dr G. D. Weymouth.

  • As for CPU vs. GPU, I would never presume to judge your preferences here. You are absolutely right that it is easier to write for the CPU, and moreover, it is much easier to write code that isn't massively parallel. If time or feasibility is your limiting factor, CPU is definitely the way to go. In fact, I was recently looking at a neat project that simulates real-time fluids with solid objects obstructing the flow. It is done on the CPU, and I have no idea whether it would even be possible on the GPU.

    I also started my own simulation as a parallel CPU algorithm and did not port it until I had a very solid understanding of what I was doing, but the performance increase was truly astounding: I went from 200-300 iterations in 20 seconds on 12 SMT CPU cores to 10,000 iterations on 16 times the data in just a few seconds. So if you have something that can be ported to the GPU, there is a lot to gain from it. This kind of performance boost can make impractical applications practical.
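
The layered-terrain idea described above can be sketched as a stack of soil layers per terrain column. This is a minimal illustration under assumed names (`SoilLayer` and `Column` are hypothetical, not taken from SoilMachine's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of a layered terrain column: instead of a single
// 2.5D height value, each cell holds a bottom-up stack of soil layers.
struct SoilLayer {
    float thickness;  // vertical extent of the layer
    float porosity;   // fraction of volume open to ground water (0..1)
    float water;      // current ground-water fill (0..1 of capacity)
};

struct Column {
    std::vector<SoilLayer> layers;  // bedrock first, topsoil last

    // Surface height is just the sum of layer thicknesses.
    float height() const {
        float h = 0.0f;
        for (const SoilLayer& l : layers) h += l.thickness;
        return h;
    }

    // Total ground water the column can hold: thickness * porosity
    // summed over the stack (this is what enables water tables and
    // aquifers to emerge per layer rather than per cell).
    float waterCapacity() const {
        float c = 0.0f;
        for (const SoilLayer& l : layers) c += l.thickness * l.porosity;
        return c;
    }
};
```

Erosion then removes thickness from the top layer and deposition pushes new sediment layers onto the stack, which is how sedimentary rock and compression can be represented.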

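The hydrology simulation above can be illustrated with a reduced sketch of particle-based hydraulic erosion on a 1D height profile, in the spirit of SimpleHydrology (the real project works on a 2D grid with momentum and a more careful sediment model; parameter names and values here are illustrative assumptions):

```cpp
#include <cassert>
#include <vector>

// A single water droplet walks downhill, eroding terrain when it can
// carry more sediment and depositing when it carries too much.
void erodeDroplet(std::vector<float>& h, int pos, float volume) {
    float sediment = 0.0f;           // material currently carried
    const float depositRate = 0.1f;  // fraction deposited per step
    const float erodeRate   = 0.1f;  // fraction eroded per step
    const float evaporation = 0.99f; // volume lost per step

    while (pos > 0 && pos < (int)h.size() - 1 && volume > 0.01f) {
        // Step toward the lower neighbor.
        int next = (h[pos - 1] < h[pos + 1]) ? pos - 1 : pos + 1;
        float drop = h[pos] - h[next];
        if (drop <= 0.0f) {          // local minimum: dump and stop
            h[pos] += sediment;
            return;
        }
        // Carrying capacity grows with slope and water volume; nudge
        // the terrain toward equilibrium with the droplet's load.
        float capacity = drop * volume;
        if (sediment > capacity) {
            float dep = (sediment - capacity) * depositRate;
            h[pos] += dep;
            sediment -= dep;
        } else {
            float ero = (capacity - sediment) * erodeRate;
            if (ero > drop) ero = drop;  // never dig below the neighbor
            h[pos] -= ero;
            sediment += ero;
        }
        pos = next;
        volume *= evaporation;       // droplet slowly evaporates
    }
}
```

Spawning many such droplets at random positions carves channels over repeated passes; each droplet is independent, which is exactly what makes this style of simulation a good candidate for the GPU porting discussed above.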
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
