TimescaleDB 2.7 vs. PostgreSQL 14
4 projects | news.ycombinator.com | 22 Sep 2022
What tools should I use to gather custom metrics about my Django application?
4 projects | reddit.com/r/django | 8 Sep 2022
We use Grafana on top of Timescale for metric analytics. It's working pretty well -- we're doing a few dozen events a second.
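A dashboard like that typically boils down to bucketed SQL queries that Grafana charts directly. A minimal sketch using TimescaleDB's `time_bucket` function — the `events` table and its columns are hypothetical, not from the commenter's setup:

```sql
-- Hypothetical events hypertable: events(time timestamptz, ...)
-- Count events per minute over the last hour, the kind of series
-- Grafana can plot as-is.
SELECT
  time_bucket('1 minute', time) AS bucket,
  count(*)                      AS events_per_minute
FROM events
WHERE time > now() - INTERVAL '1 hour'
GROUP BY bucket
ORDER BY bucket;
```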
How we made data aggregation on PostgreSQL better and faster
4 projects | news.ycombinator.com | 21 Jun 2022
(NB - blog author/Timescale employee)
One thing we're improving as we move forward in documentation and other areas is explaining why doing joins (and things like window functions) is difficult in continuous aggregates and not the current focus. Honestly, it's part of the reason most databases haven't tackled this problem before.
Once you add in joins or things that might refer to data outside of the refresh window (LAG values for example), things get really complicated. For instance, if you join to a dimension table and a piece of metadata changes, does that change now need to be updated and reflected back in all of this historical aggregate data that's outside of the automatic refresh policy? Same with a window function - if data within a window hasn't changed but data that *might* be hit because of the window function reference does change, continuous aggregates would have to know about that for each query and track those changes too.
I'm not saying it's impossible or that it won't be solved someday, but the functionality with continuous aggregates that keeps the aggregate data updated automatically (without losing any history) *and* being able to perform fast joins on the finalized data is a very useful step that's not available anywhere else within the Postgres ecosystem.
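For readers unfamiliar with the feature under discussion: a continuous aggregate is declared as a materialized view over a `time_bucket` query, and a refresh policy keeps it up to date automatically. The hypertable and column names below are illustrative assumptions, not from the thread:

```sql
-- Hypothetical source hypertable:
--   metrics(time timestamptz, device_id int, value double precision)
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 hour', time) AS bucket,
  device_id,
  avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device_id;

-- Refresh recent buckets automatically; older history is kept as-is,
-- which is exactly why joins/window functions over it are hard.
SELECT add_continuous_aggregate_policy('metrics_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```

Note that the view definition contains no joins — the comment above explains why: the refresh machinery would otherwise have to track changes in tables outside the refresh window.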
RE: CAGG on top of a CAGG - you're certainly not the only person to request this, and we understand that. Part of this is because of what I discussed above (tracking changes across multiple tables), although having finalized data might make this more possible in the future.
That said (!!!), the cool thing is that we already *have* begun to solve this problem with hyperfunction aggregates and 2-step aggregation, something I showed in the blog post. So, if your dataset can benefit from one of the hyperfunction aggregates that we currently provide, there are lots of cool things you can do with it, including rollups into bigger buckets without creating a second continuous aggregate! If you haven't checked them out, please do! 
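The two-step pattern mentioned here comes from the `timescaledb_toolkit` extension: step one stores a partial aggregate in the continuous aggregate, step two rolls those partials up into larger buckets at query time. A sketch with hypothetical table and column names:

```sql
-- Hypothetical source: requests(time timestamptz, response_ms double precision)
-- percentile_agg / rollup / approx_percentile are timescaledb_toolkit
-- hyperfunctions.
CREATE MATERIALIZED VIEW response_hourly
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 hour', time) AS bucket,
  percentile_agg(response_ms) AS pct_agg   -- step 1: store the partial aggregate
FROM requests
GROUP BY bucket;

-- Step 2: combine hourly partials into daily buckets without a second
-- continuous aggregate, then extract the median from the combined state.
SELECT
  time_bucket('1 day', bucket)            AS day,
  approx_percentile(0.5, rollup(pct_agg)) AS daily_median_ms
FROM response_hourly
GROUP BY day
ORDER BY day;
```

The design point is that `percentile_agg` stores a mergeable sketch rather than a final number, which is what makes the later `rollup` into bigger buckets possible.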
Ingesting an S3 file into an RDS PostgreSQL table
3 projects | dev.to | 10 Jun 2022
either we go for RDS, but then we're stuck with the AWS-handpicked extensions (so no Timescale, Citus, or their columnar storage, ... ),
SyMon - System monitoring/alerting tool written in Go
2 projects | reddit.com/r/golang | 6 Jun 2022
It could make sense to switch to a time-series DB, which could store and query time-series data much more efficiently than MariaDB. Of course, for the intended small use cases MariaDB could still be more than enough. For PostgreSQL, there is the https://github.com/timescale/timescaledb extension.
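Adopting the extension is mostly a matter of turning an ordinary Postgres table into a hypertable. A minimal sketch — the monitoring schema here is a made-up example, not from the tool above:

```sql
-- Hypothetical monitoring schema; create_hypertable is the TimescaleDB call
-- that partitions the table into time-based chunks behind the scenes.
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
  time TIMESTAMPTZ       NOT NULL,
  host TEXT              NOT NULL,
  cpu  DOUBLE PRECISION,
  mem  DOUBLE PRECISION
);

SELECT create_hypertable('metrics', 'time');
```

After this, inserts and queries use plain SQL; the chunking is transparent to the application.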
DeWitt Clause, or Can You Benchmark %DATABASE% and Get Away With It
21 projects | dev.to | 2 Jun 2022
Hooks: The secret feature powering the Postgres ecosystem
4 projects | dev.to | 11 Mar 2022
The most important hook for Timescale is probably the planner_hook, which deals with the query plan that Postgres produces when a user or application sends it a SQL statement. In src/planner.c we see how they use the function timescaledb_planner to modify the typical query plan to include chunks and ensure that hypertables are queried correctly. Once the planner has completed, Timescale does further transformations on the query using the post_parse_analyze_hook that they initialized at the very start of Postgres operation in src/loader/loader.c.
Show HN: Interactive 3D Visualization of the Shared Mobility Traffic in Berlin
4 projects | news.ycombinator.com | 4 Mar 2022
Over the last year, I crawled and stored the position of all available bicycles of a large shared mobility provider in Berlin, Germany once a minute (~2.1 billion data points). Subsequently, I calculated 713,562 trip routes as they were likely taken by customers of the provider. The web app linked above provides more background information and visualizes some patterns I find interesting.
All 3D map layers are implemented with deck.gl (https://deck.gl) and projected on a MapLibre base map (https://maplibre.org). The base map uses self-hosted OpenMapTiles vector tiles (https://openmaptiles.org). I store all data inside a Timescale/PostgreSQL database (https://www.timescale.com) and perform most transformations via standard SQL. All of this is self-hosted on my bare-metal server at Hetzner (https://www.hetzner.com).
The source code and a short demo video are available on GitHub: https://github.com/laurids-reichardt/berlin-shared-mobility-...
I’d love to receive feedback and answer your questions!
DISCLAIMER: Searching for collaboration opportunities for my master thesis.
I'm currently completing my master’s degree at HTW Berlin and would like to collaborate with an innovative company on an exciting problem or business case for my master thesis. I’m open to both remote work or on-site in Berlin.
If you could imagine providing an exciting opportunity in the fields of data engineering or data analytics, or have a tip on who might, I'd love to get in touch with you via any of the contacts listed below.
Alternatively, leave a comment with your e-mail or contacts, and I’ll send you a message.
E-Mail: [email protected]
Timescale raises $110M Series C
8 projects | news.ycombinator.com | 22 Feb 2022
Hi! So the team is over 100 at this point, but engineering effort is spread across multiple products.
The core timescaledb repo has 10-15 primary engineers (although we are aggressively hiring for database-internals engineers), with a few others working on DB hyperfunctions and our function pipelining in a separate extension. I think the set of folks who contribute to low-level database internals in C is generally just smaller than for other types of projects.
We also have our Promscale product, which is our observability backend powered by SQL & TimescaleDB.
And then there is Timescale Cloud, which is obviously a large engineering effort (most of which does not happen in public repos).
And we are hiring. Fully remote & global.
8 projects | news.ycombinator.com | 22 Feb 2022
I appreciate the response! Here is the github issue https://github.com/timescale/timescaledb/issues/1446
What are some alternatives?
ClickHouse - ClickHouse® is a free analytics DBMS for big data
promscale - Promscale is a unified metric and trace observability backend for Prometheus, Jaeger and OpenTelemetry built on PostgreSQL and TimescaleDB.
TDengine - TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, Industrial IoT and DevOps.
GORM - The fantastic ORM library for Golang, aims to be developer friendly
temporal_tables - Temporal Tables PostgreSQL Extension
Telegraf - The plugin-driven server agent for collecting & reporting metrics.
postgrest - REST API for any Postgres database
pgbouncer - lightweight connection pooler for PostgreSQL
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
tsbs - Time Series Benchmark Suite, a tool for comparing and evaluating databases for time series data
dolt - Dolt – It's Git for Data
clickhouse_fdw - ClickHouse FDW for PostgreSQL