| | sqllineage | LLM-Finetuning-Hub |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 1,135 | 638 |
| Growth | - | - |
| Activity | 8.6 | 9.5 |
| Latest commit | 1 day ago | 27 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqllineage
- FLaNK Stack Weekly for 12 September 2023
- Dependency Lineage & Scripting
For open source, there is this library: https://github.com/reata/sqllineage.
- Launch HN: Elementary (YC W22) – Open-source data observability
Is the idea here that it's inspired by re_data because it uses dbt transformations underneath, or because it looks nearly the same? (Or both?)
Looks like much of the lineage code is also largely a wrapper around this library: https://github.com/reata/sqllineage
Would be curious to understand the project's purpose and unique contributions vs. the underlying dependencies powering it as there seems to be some ambiguity. Is this just a wrapper around dbt transformations and a lineage library in one package? Can I just use them directly?
LLM-Finetuning-Hub
- Zephyr-7B QLoRA Benchmark for Summarization and Classification
Hi everyone, we've been working on benchmarking different open-source LLMs. In particular, we measure the performance of these models once fine-tuned (via QLoRA) on classic downstream NLP tasks like summarization and classification. We also put particular emphasis on benchmarking inference time and cost for these models once deployed.
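For context, QLoRA fine-tuning of this kind is typically configured along the following lines with Hugging Face transformers, peft, and bitsandbytes; the hyperparameters below are illustrative defaults, not the repo's actual settings:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters trained on top of the quantized weights (the "LoRA" part)
lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
```

Because only the small adapter matrices are trained while the base weights stay quantized, this setup makes fine-tuning 7B-class models feasible on a single GPU, which is what makes per-model cost benchmarking of this sort practical.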
- Show HN: Finetuning LLMs: Open-source vs. Closed-source
Hello all,
I have been working on benchmarking different LLMs -- both open-source and closed-source.
Repo: https://github.com/georgian-io/LLM-Finetuning-Hub
Specifically, I am comparing their out-of-the-box capabilities (prompting) with their fine-tuned counterparts!
So far, the following models have been benchmarked:
Open-Source:
- FLaNK Stack Weekly for 12 September 2023
- [P][R] Finetune LLMs via the Finetuning Hub
- Show HN: Leverage Falcon 7B blog post – Finetuning with QLoRA
I evaluate how Falcon performs on classification tasks compared to BERT and DistilBERT.
Moreover, I talk about different ways you can deploy the model, and the associated costs!
The code for all of my experiments is available at: https://github.com/georgian-io/LLM-Finetuning-Hub
Happy reading and learning!
- Show HN: LLM Finetuning Hub
What are some alternatives?
elementary - The dbt-native data observability solution for data & analytics engineers. Monitor your data pipelines in minutes. Available as self-hosted or cloud service with premium features.
ChatDev - Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
re_data - Fix data issues before your users & CEO discover them 😊
bedframe - Your Browser Extension Development Framework
dbt-data-reliability - dbt package that is part of Elementary, the dbt-native data observability solution for data & analytics engineers. Monitor your data pipelines in minutes. Available as self-hosted or cloud service with premium features.
wasmer-java - ☕ WebAssembly runtime for Java
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
llm-toys - Small(7B and below) finetuned LLMs for a diverse set of useful tasks
hrequests - 🚀 Web scraping for humans
go-llama2 - Llama 2 inference in one file of pure Go
open-interpreter - A natural language interface for computers
rivet - The open-source visual AI programming environment and TypeScript library