trex vs safe-rlhf
| | trex | safe-rlhf |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 238 | 1,160 |
| Growth | 0.4% | 4.5% |
| Activity | 6.6 | 8.1 |
| Latest commit | 8 months ago | 22 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, where recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects being tracked.
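The exact formula behind the activity number isn't published; the description above only says that recent commits weigh more than old ones, which suggests a recency-weighted sum. A toy illustration in Python, with an assumed exponential half-life that is not the site's actual weighting:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted score: a commit made today counts 1.0,
    one made half_life_days ago counts 0.5, and so on."""
    now = datetime.now(timezone.utc)
    return sum(
        0.5 ** ((now - d).total_seconds() / 86400 / half_life_days)
        for d in commit_dates
    )

# Three commits: yesterday, a month ago, and three months ago.
now = datetime.now(timezone.utc)
commits = [now - timedelta(days=d) for d in (1, 30, 90)]
print(round(activity_score(commits), 2))  # ~1.6; the newest commit dominates
```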
trex
Show HN: Generate JSON mock data for testing/initial app development
A friend of mine built a tool called Trex that you might find helpful; check it out here: https://github.com/automorphic-ai/trex
It's very consistent at generating templated data.
- Intelligently transform unstructured data into structured output (JSON, Regex, CFG)
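trex's own API isn't quoted above, so here is a minimal sketch of the technique its description names: regex-constrained decoding, where the model's next-token candidates are filtered so the output always remains a valid prefix of the target format. It uses the third-party `regex` package (the stdlib `re` has no partial matching); the vocabulary and pattern are made up for illustration.

```python
import regex  # third-party package; supports partial (prefix) matching

def allowed_next_tokens(prefix: str, vocab: list[str], pattern: str) -> list[str]:
    """Keep only tokens that leave the output a valid prefix of `pattern`.
    A decoder that samples exclusively from this set can never emit a
    string that fails to match, which is how regex/CFG-constrained
    generation guarantees well-formed output."""
    return [
        tok for tok in vocab
        if regex.fullmatch(pattern, prefix + tok, partial=True)
    ]

# Toy example: force output of the form {"age": <digits>}
pattern = r'\{"age": \d+\}'
vocab = ['{"age": ', '4', '2', '}', 'hello', '"name"']
print(allowed_next_tokens('', vocab, pattern))           # ['{"age": ']
print(allowed_next_tokens('{"age": 4', vocab, pattern))  # ['4', '2', '}']
```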
safe-rlhf
What are some alternatives?
PentestGPT - A GPT-empowered penetration testing tool
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
graph-of-thoughts - Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
sycamore - 🍁 Sycamore is an LLM-powered search and analytics platform for unstructured data.
AtomGPT - A Chinese-English pretrained large language model that aims to match ChatGPT's level of capability
autolabel - Label, clean and enrich text datasets with LLMs.
opening-up-chatgpt.github.io - Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.
ChatGLM2-6B - An open-source bilingual (Chinese-English) chat LLM
ray-llm - RayLLM: LLMs on Ray
JSON-Schema Faker - JSON-Schema + fake data generators (a Python sketch of the same idea follows this list)
h2o-wizardlm - Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
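JSON-Schema Faker itself is a JavaScript library; since both compared projects are Python, here is a rough Python sketch of the idea its entry above describes: walk a schema and emit random conforming data. It covers only a few schema keywords and is illustrative, not the library's actual behavior.

```python
import random
import string

def fake_from_schema(schema: dict):
    """Toy JSON-Schema-driven generator: recurse over the schema and
    emit random data of the declared shape. Handles only a handful of
    keywords (type, properties, items, minimum, maximum)."""
    t = schema.get("type")
    if t == "object":
        return {k: fake_from_schema(v)
                for k, v in schema.get("properties", {}).items()}
    if t == "array":
        return [fake_from_schema(schema["items"])
                for _ in range(random.randint(1, 3))]
    if t == "string":
        return "".join(random.choices(string.ascii_lowercase, k=8))
    if t == "integer":
        return random.randint(schema.get("minimum", 0),
                              schema.get("maximum", 100))
    if t == "boolean":
        return random.choice([True, False])
    raise ValueError(f"unsupported schema: {schema!r}")

# Usage: generate one fake user record for testing or app scaffolding.
user_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 18, "maximum": 99},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
}
print(fake_from_schema(user_schema))
```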