data-drift vs ai-pr-reviewer

| | data-drift | ai-pr-reviewer |
|---|---|---|
| Mentions | 7 | 40 |
| Stars | 301 | 1,288 |
| Growth | 3.0% | - |
| Activity | 9.5 | 8.9 |
| Latest Commit | 3 months ago | 3 months ago |
| Language | HTML | TypeScript |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data-drift
-
Open-Source Observability for the Semantic Layer
Think of Datadrift as a simple & open-source Monte Carlo for the semantic layer era. The repo is at https://github.com/data-drift/data-drift
Datadrift started as an internal tool built at our former company, a large European B2B Fintech. We had data reliability challenges impacting key metrics used for financial and regulatory reporting.
However, when we tried existing data quality tools we were always frustrated. They provide row-level static testing (e.g. uniqueness or nullness), which does not address time-varying metrics like revenues. And commercial observability solutions cost $manyK a month and bring compliance and security overhead.
We designed Datadrift to solve these problems. Datadrift works by simply adding a monitor where your metric is computed. It then understands how your metric is computed and which upstream tables it depends on. When an issue occurs, it pinpoints exactly which rows were updated and introduced the change.
You can also set up alerting and customise it. For example, you can decide to open and assign a GitHub issue to the analyst owning the revenue metric when a +10% change is detected. We tried to make it easy to customise and developer friendly.
We are thinking of adding features around root cause analysis automation/issue pattern analysis to help data teams improve metrics quality over time. We'd love to hear your feature requests.
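The alerting rule described above (fire when a metric moves more than +10%) can be sketched in a few lines. This is a hypothetical illustration, not Datadrift's actual API: `pct_change`, `should_alert`, and the threshold parameter are names invented here for clarity.

```python
# Hypothetical sketch (not Datadrift's real API): detect a >10% change
# in a monitored metric between two snapshots and decide whether to alert.

def pct_change(previous: float, current: float) -> float:
    """Relative change of the metric between snapshots, as a percentage."""
    if previous == 0:
        raise ValueError("previous value must be non-zero")
    return (current - previous) / abs(previous) * 100.0

def should_alert(previous: float, current: float, threshold_pct: float = 10.0) -> bool:
    """True when the metric moved at least threshold_pct in either direction."""
    return abs(pct_change(previous, current)) >= threshold_pct

# Example: monthly revenue restated from 100k to 112k is a +12% drift,
# so the alert fires and could, e.g., open and assign a GitHub issue.
if should_alert(100_000, 112_000):
    print("open a GitHub issue and assign it to the metric owner")
```

In a real deployment the "previous" value would come from a stored snapshot of the metric table, which is what lets the tool trace the drift back to the specific upstream rows that changed.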
Datadrift is built with Python and Go, and licensed under GPL. Our docs are here: https://github.com/data-drift/data-drift?tab=readme-ov-file#...
Dev setup and demo: https://app.claap.io/sammyt/drift-db-demo-a18-c-ApwBh9kt4p-0...
We’re very eager to get your feedback!
-
Would you learn Go to contribute to an OSS project? Or should I stick to Python?
I have already started working on it. I started in Go for some parts, but I needed Python to deploy a PyPI lib. Now it's hybrid, and I prefer working with Go 😬 but the most rational thinking leads to Python.
-
Ask HN: Dear startup founders, what have you developed in-house?
We used static testing frameworks like Great Expectations, but that was not enough. We did not have the budget for the big data observability players like Monte Carlo, so we kept it simple.
Repo if interested: https://github.com/data-drift/data-drift
(Disclaimer: I am focusing full time on this project to see if it's an interesting business opportunity. It's 100% open-source -- feedback welcome!)
-
Show HN: Lineage X Snapshot Tooling
https://app.data-drift.io/42527392/Lucasdvrs/dbt-datagit/ove...
You can "technically" install it by yourself, but tbh our focus is on the features, not the adoption. If you are interested, it takes roughly an hour to configure (choose the data you want to observe, run a Python function, install a GitHub app, add a configuration file); contact us.
The repo: https://github.com/data-drift/data-drift
Roast me
- Non-moving data is a journey
- “Non-moving data” is like “bug-free”: it's a lie
ai-pr-reviewer
-
How CodeRabbit AI is Revolutionizing Coding with Intelligent Automation
According to their official site, CodeRabbit is an AI-based code reviewer and summarizer for GitHub pull requests, utilizing OpenAI's gpt-3.5-turbo and gpt-4 models. It is designed to be used as a GitHub Action and can be configured to run on every pull request and review comments.
- CodeRabbit – The AI-First Code Reviewer
-
Mastering Code Review skills using AI tools
After completing your review, employ automated AI-based code review tools such as CodeRabbit. These tools can quickly analyze code for common issues, style inconsistencies, and potential bugs, providing an immediate second opinion on the PR.
-
How we managed GPT-4 API cost at scale
Since its inception, CodeRabbit has experienced steady growth in its user base, comprising developers and organizations. Installed on thousands of repositories, CodeRabbit reviews several thousand pull requests (PRs) daily.
- Show HN: AI driven code reviewer (Free for OSS)
-
[P] Link related issues in PR automatically
No need to link issues in a PR manually now: the CodeRabbit AI code review bot can find the relevant issues and link them with the PR.
-
Ask HN: Dear startup founders, what have you developed in-house?
FluxNinja [0] founder here. I developed an in-house AI-based code review tool [1] that CodeRabbit is now commercializing [2].
I did it because of the increasing frustration due to the time-consuming, manual code review process. We tried several techniques to improve velocity - e.g., stacked pull requests, but the AI tool helped the most.
[0]: https://www.fluxninja.com
[1]: https://github.com/coderabbitai/ai-pr-reviewer
[2]: https://coderabbit.ai
- CodeRabbit: AI based MR reviewer
-
CodeRabbit(AI Powered Code Reviewer) is now available for GitLab Merge Requests
Our base prompts are open-sourced and have gained decent traction. Please check us out - https://github.com/coderabbitai/ai-pr-reviewer
-
Recursively Summarizing Enables Long-Term Dialogue Memory in LLMs
We have been doing this at CodeRabbit[0] for incrementally reviewing PRs and allowing conversations in the context of code changes, giving the impression that the bot has much more context than it has.
For each commit, we summarize the diff for each file. Then we create a summary of summaries, which is incrementally updated as further commits are made on a pull request. This summary of summaries is saved, hidden inside a comment on the pull request, and is used while reviewing each file and answering the user's queries.
Some of our code is in the open source. Here is the link to the relevant prompt for recursive summarization - https://github.com/coderabbitai/ai-pr-reviewer/blob/main/src...
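The summary-of-summaries flow described above can be sketched roughly as follows. This is an illustrative outline only: the real project is in TypeScript and calls OpenAI models, whereas here `summarize` is a stand-in truncation function and all names are invented.

```python
# Sketch of recursive summarization for incremental PR review.
# `summarize` stands in for an LLM call; everything here is illustrative.

def summarize(text: str, max_len: int = 400) -> str:
    """Placeholder for an LLM summarization call: truncate to max_len."""
    return text if len(text) <= max_len else text[:max_len] + "..."

def summarize_commit(file_diffs: dict[str, str]) -> dict[str, str]:
    """Step 1: a per-file summary of each diff in one commit."""
    return {path: summarize(diff, max_len=80) for path, diff in file_diffs.items()}

def update_running_summary(running: str, commit_summaries: dict[str, str]) -> str:
    """Step 2: fold the new commit's file summaries into the PR-level
    summary of summaries, then re-summarize to bound its size."""
    combined = running + "\n" + "\n".join(
        f"{path}: {s}" for path, s in sorted(commit_summaries.items())
    )
    return summarize(combined.strip())

# Each new commit incrementally updates the stored summary, which the bot
# keeps hidden in a PR comment and reuses as context for reviews and Q&A.
running = ""
for commit in [{"src/app.ts": "add login handler"},
               {"src/app.ts": "fix null check in handler"}]:
    running = update_running_summary(running, summarize_commit(commit))
```

Bounding the running summary at each step is what keeps the context constant-size no matter how many commits a PR accumulates, which is the point of the recursive scheme.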
[0]: coderabbit.ai
What are some alternatives?
lakeFS - Data version control for your data lake | Git for data
ChatGPT-Prompts - ChatGPT and Bing AI prompt curation
soda-core - :zap: Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io
aperture - Rate limiting, caching, and request prioritization for modern workloads
lightdash - Self-serve BI to 10x your data team ⚡️
awesome-chatgpt-prompts - This repo includes ChatGPT prompt curation to use ChatGPT better.
tellery - Tellery lets you build metrics using SQL and bring them to your team. As easy as using a document. As powerful as a data modeling tool.
tree-of-thought-puzzle-solver - The Tree of Thoughts (ToT) framework for solving complex reasoning tasks using LLMs
OpenMetadata - Open Standard for Metadata. A Single place to Discover, Collaborate and Get your data right.
mask-json-field-transform
fullnamematchscore-go - Generates a match score of two person names from 0-100, where 100 is the highest, on how closely two individual full names match. The scoring is based on a series of tests, algorithms, AI, and an ever-growing body of Machine Learning-based generated knowledge
Funnel-Transformer