wrench
[NeurIPS 2021] WRENCH: Weak supeRvision bENCHmark (by JieyuZ2)
refinery
The data scientist's open-source choice to scale, assess and maintain natural language data. Treat training data like a software artifact. (by code-kern-ai)
| | wrench | refinery |
|---|---|---|
| Mentions | 1 | 21 |
| Stars | 223 | 1,408 |
| Growth (stars, month over month) | -0.4% | -0.1% |
| Activity | 5.2 | 3.6 |
| Last commit | 12 months ago | about 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wrench
Posts with mentions or reviews of wrench.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-01-16.
- [P] Open-source tool for building NLP training sets with weak supervision and search queries
WRENCH (NeurIPS 2021): https://github.com/JieyuZ2/wrench
refinery
Posts with mentions or reviews of refinery.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-03-03.
- Ultimate guide to prompt engineering
Tools: Platforms like LangChain, Kern AI Refinery, and Langtail simplify testing, debugging, and optimizing prompts.
- [P] We are building a curated list of open source tooling for data-centric AI workflows, looking for contributions.
You definitely forgot https://www.kern.ai/ :)
- How we used AI to automate stock sentiment classification
We will build the web scraper in Kern AI workflow, label our news articles in refinery, and then enrich the data with gates AI. After that, we will use workflow again to send out the predictions and the enriched data via a webhook to Slack. If you'd like to follow along or explore these tools on your own, you can join our waitlist here: https://www.kern.ai/
- German NLP startup Kern AI has raised €2.7M in seed funding to accelerate its recent growth
The German startup Kern AI has built a platform for NLP developers and data scientists that not only controls the labeling process but also automates and orchestrates tangential tasks, helping them deal with the low-quality data that comes their way. Several companies exist primarily to power this labeling process.
- Why and how we started Kern AI (our seed funding announcement)
Fast forward to July ’22: after many further product iterations and a full redesign, we open-sourced our product under a new name, Kern AI refinery (the origin of the name is simple: we want to improve, i.e., refine, the foundation for building models).
- GPT and BERT: A Comparison of Transformer Architectures
Get it for free here: https://github.com/code-kern-ai/refinery
- Open-source tool to label, assess and maintain natural language data. Treat training data like a software artifact!
- Drastically decrease the size of your Docker application
Containers are amazing for building applications because they allow you to pack up a program together with all its dependencies and execute it wherever you like. That is why our application consists of 20+ individual containers, forming our data-centric IDE for NLP, which you can check out here: https://github.com/code-kern-ai/refinery.
- Introducing bricks, an open-source content-library for NLP
Today we launched bricks, an open-source library that provides enrichments for your natural language processing projects. Our main goal with bricks is to shorten the time you need to get from idea to implementation. Bricks also integrates seamlessly into our main tool, the Kern AI refinery.
- How to fine-tune your embeddings for better similarity search
This blog post will share our experience with fine-tuning sentence embeddings on a commonly available dataset using similarity learning. We additionally explore how this could benefit the labeling workflow in the Kern AI refinery. To understand this post, you should know what embeddings are and how they are generated; a rough idea of what fine-tuning is will also help. All the code and data referenced in this post are available on GitHub.