snorkel vs refinery
| | snorkel | refinery |
|---|---|---|
| Mentions | 5 | 20 |
| Stars | 5,685 | 1,353 |
| Stars growth | 0.8% | 2.4% |
| Activity | 5.5 | 4.6 |
| Last commit | about 1 month ago | 14 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
snorkel
-
[P] We are building a curated list of open source tooling for data-centric AI workflows, looking for contributions.
The paid product came out of an open source tool: https://github.com/snorkel-team/snorkel
- [Discussion] - "data sourcing will be more important than model building in the era of foundational model fine-tuning"
-
Can't use load_data from utils
Actually, I referenced it in my issue as well. There seem to be different utils.py files in different folders of the snorkel-tutorials repo, but the utils module you get after importing snorkel is a different [file](https://github.com/snorkel-team/snorkel/blob/master/snorkel/utils/core.py), i.e. the utils file in the main snorkel repo is not the same as the ones in the tutorials.
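The symptom above — a helper like `load_data` living in the tutorials' local `utils.py` but not in the installed `snorkel.utils` package — comes down to Python module shadowing: which `utils` you get depends on what is first on `sys.path`. A minimal stdlib-only sketch (the file contents here are illustrative, not snorkel's actual code):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Simulate the tutorials repo: a local utils.py that defines load_data.
tutorial_dir = Path(tempfile.mkdtemp())
(tutorial_dir / "utils.py").write_text(
    "def load_data():\n    return ['doc1', 'doc2']\n"
)

# Simulate an installed package's utils: a different module, no load_data.
package_dir = Path(tempfile.mkdtemp())
(package_dir / "utils.py").write_text(
    "def probs_to_preds(probs):\n"
    "    return [max(range(len(p)), key=p.__getitem__) for p in probs]\n"
)

def import_utils_from(directory):
    """Import whichever utils.py is first on sys.path -- this is the trap."""
    sys.path.insert(0, str(directory))
    try:
        sys.modules.pop("utils", None)  # force a fresh import each time
        return importlib.import_module("utils")
    finally:
        sys.path.pop(0)

tutorial_utils = import_utils_from(tutorial_dir)
package_utils = import_utils_from(package_dir)

print(hasattr(tutorial_utils, "load_data"))  # True
print(hasattr(package_utils, "load_data"))   # False: a different utils.py
```

So when a tutorial does `from utils import load_data`, it only works if you run it from inside that tutorial's folder, where its local `utils.py` shadows everything else.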
- [D] A hand-picked selection of the best Python ML Libraries of 2021
refinery
-
[P] We are building a curated list of open source tooling for data-centric AI workflows, looking for contributions.
You definitely forgot https://www.kern.ai/ :)
-
GPT and BERT: A Comparison of Transformer Architectures
Get it for free here: https://github.com/code-kern-ai/refinery
-
Drastically decrease the size of your Docker application
Containers are amazing for building applications, because they allow you to pack up a program together with all its dependencies and execute it wherever you like. That is why our application consists of 20+ individual containers, forming our data-centric IDE for NLP, which you can check out here: https://github.com/code-kern-ai/refinery.
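The standard way to drastically shrink a Python container image is a multi-stage build: dependencies are compiled in a heavyweight builder stage, and only the resulting wheels are copied into a slim runtime stage. A generic sketch (not refinery's actual Dockerfile — image tags and paths are placeholders):

```dockerfile
# Stage 1: build wheels with the full toolchain available.
FROM python:3.10 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: slim runtime image -- compilers and build caches stay behind.
FROM python:3.10-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY . .
CMD ["python", "app.py"]
```

Because only the final stage is shipped, gcc, header files, and pip's caches never reach the deployed image.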
-
Introducing bricks, an open-source content-library for NLP
Today we launched bricks, an open-source content library which provides enrichments for your natural language processing projects. Our main goal with bricks is to shorten the time you need to get from idea to implementation. Bricks also seamlessly integrates into our main tool, the Kern AI refinery.
-
How to fine-tune your embeddings for better similarity search
This blog post will share our experience with fine-tuning sentence embeddings on a commonly available dataset using similarity learning. We additionally explore how this could benefit the labeling workflow in the Kern AI refinery. To understand this post, you should know what embeddings are and how they are generated. A rough idea of what fine-tuning is also helps. All the code and data referenced in this post are available on GitHub.
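The idea behind similarity learning can be sketched in a few lines: keep the base embeddings frozen and learn a small projection so that cosine similarity moves toward +1 for pairs labeled similar and −1 for dissimilar pairs. A toy numpy sketch under those assumptions (not the post's actual code; the data and the finite-difference optimizer are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "embeddings" for six texts (toy data, 4 dimensions).
emb = rng.normal(size=(6, 4))
# Labeled pairs: +1 = similar, -1 = dissimilar.
pairs = [(0, 1, 1.0), (2, 3, 1.0), (0, 4, -1.0), (2, 5, -1.0)]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def loss(W):
    # Push cosine similarity of projected pairs toward their label.
    return sum((cosine(W @ emb[i], W @ emb[j]) - y) ** 2 for i, j, y in pairs)

# Learn the projection W by plain finite-difference gradient descent.
W = np.eye(4)
lr, eps = 0.05, 1e-5
for _ in range(300):
    grad = np.zeros_like(W)
    for r in range(4):
        for c in range(4):
            Wp = W.copy()
            Wp[r, c] += eps
            grad[r, c] = (loss(Wp) - loss(W)) / eps
    W -= lr * grad

print(f"loss before: {loss(np.eye(4)):.3f}, after: {loss(W):.3f}")
```

In practice one would use a trained gradient-based framework and a margin or contrastive loss, but the objective — reshaping the similarity space around labeled pairs — is the same, and the fine-tuned space is what makes neighborhood-based labeling suggestions more useful.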
-
Vector Databases for Data-Centric AI (Part 2)
Shout out to both Kern.AI (an excellent open-source NLP labelling tool, https://github.com/code-kern-ai/refinery) and Voxel51 (an excellent open-source computer vision analysis tool, https://github.com/voxel51/fiftyone) for being early adopters of the technology in their platforms, but I don't believe either has yet made use of all of the value it can provide.
-
Hacker News top posts: Jul 18, 2022
Show HN: If VS Code had a data-centric IDE sibling, what would that look like? (23 comments)
-
Show HN: If VS Code had a data-centric IDE sibling, what would that look like?
Hi Ruben,
you can take a look at our architecture overview here: https://github.com/code-kern-ai/refinery#-architecture
A bit below it, you'll find a table with links to all of the repositories. All of them are open-source. But thanks for the feedback, I'll try to make it a bit easier to understand! I appreciate that! :)
Hi Tom! Thanks, happy to hear that :)
We've focused on JSON as the user-specified data model, so you can upload anything that fits into a JSON. We're using pandas to process the uploaded data, so spreadsheets or CSV-ish formats also work.
We've got a public roadmap (https://github.com/code-kern-ai/refinery/projects/1), and we're looking forward to integrating e.g. native PDF labeling sometime soon.
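The "pandas in front of a JSON data model" pipeline described above can be illustrated with a short generic sketch (not refinery's actual ingest code; column names are made up): a CSV-ish upload is parsed with pandas, and each row becomes one JSON record.

```python
import io
import json

import pandas as pd

# A CSV-ish upload, as a user might provide it.
csv_upload = io.StringIO(
    "text,label\n"
    '"Great product, would buy again",positive\n'
    '"Arrived broken",negative\n'
)

# pandas handles the parsing; each row becomes one JSON record.
df = pd.read_csv(csv_upload)
records = df.to_dict(orient="records")

print(json.dumps(records, indent=2))
```

Since everything is normalized to records like `{"text": ..., "label": ...}`, spreadsheets, CSVs, and native JSON uploads all end up in the same user-specified data model.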
What are some alternatives?
skweak - skweak: A software toolkit for weak supervision applied to NLP tasks
argilla - Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
sqlx - 🧰 The Rust SQL Toolkit. An async, pure Rust SQL crate featuring compile-time checked queries without a DSL. Supports PostgreSQL, MySQL, and SQLite.
weasel - Weakly Supervised End-to-End Learning (NeurIPS 2021)
fiftyone - The open-source tool for building high-quality datasets and computer vision models
caer - High-performance Vision library in Python. Scale your research, not boilerplate.
dbs-tools - Perl tools to transform account / transaction data from DBS Bank into proper CSV
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
azuredatastudio - Azure Data Studio is a data management and development tool with connectivity to popular cloud and on-premises databases. Azure Data Studio supports Windows, macOS, and Linux, with immediate capability to connect to Azure SQL and SQL Server. Browse the extension library for more database support options including MySQL, PostgreSQL, and MongoDB.
snorkel-tutorials - A collection of tutorials for Snorkel
serde_postgres - Easily Deserialize Postgres rows.