gpt_academic vs evalplus

| | gpt_academic | evalplus |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 57,516 | 881 |
| Growth | - | 9.8% |
| Activity | 9.8 | 9.3 |
| Last commit | 2 days ago | 8 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt_academic
- Enhance Speed of AnkiBrain Addon
I recently managed to manually install the AnkiBrain addon using my personal ChatGPT API key. I'd like to extend my appreciation for creating such a useful tool. However, I've noticed a significant speed difference compared to a local GUI such as the one GPT Academic offers.
- GitHub - binary-husky/gpt_academic: Provides a graphical interface for GPT/GLM, specially optimized for the paper reading and polishing experience, ...
evalplus
- The AI Reproducibility Crisis in GPT-3.5/GPT-4 Research
*Further Reading*:
- [GPT-4's decline over time (HackerNews)](https://news.ycombinator.com/item?id=36786407)
- [GPT-4 downgrade discussions (OpenAI Forums)](https://community.openai.com/t/gpt-4-has-been-severely-downg...)
- [Behavioral changes in ChatGPT (arXiv)](https://arxiv.org/abs/2307.09009)
- [Zero-Shot Replication Effort (Github)](https://github.com/emrgnt-cmplxty/zero-shot-replication)
- [Inconsistencies in GPT-4 HumanEval (Github)](https://github.com/evalplus/evalplus/issues/15)
- [Early experiments with GPT-4 (arXiv)](https://arxiv.org/abs/2303.12712)
- [GPT-4 Technical Report (arXiv)](https://arxiv.org/abs/2303.08774)
- Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval!
❗Note: In this study, we copied the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all of the mentioned models generate a code solution for each problem in a single attempt, and the resulting pass-rate percentage is reported. Our WizardCoder generates answers using greedy decoding and is tested with the same code.
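The pass@1 figure quoted above is the k=1 case of the pass@k metric introduced in "Evaluating Large Language Models Trained on Code" (the paper behind the human-eval repo listed below). As a rough sketch, the unbiased estimator is 1 - C(n-c, k)/C(n, k), where n samples are drawn per problem and c of them pass the tests; with single-attempt greedy decoding (n = 1, k = 1) this reduces to the plain fraction of problems solved:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n samples were generated and c of them passed."""
    if n - c < k:
        # Fewer than k failures exist, so any k-subset contains a pass.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Single greedy sample that passes: pass@1 = 1.0 for that problem.
print(pass_at_k(1, 1, 1))   # 1.0
# 10 samples, 3 passing: pass@1 estimate is 3/10.
print(pass_at_k(10, 3, 1))  # ≈ 0.3
```

Averaging this quantity over all benchmark problems yields the reported pass@1 percentage.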
What are some alternatives?
- ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM
- llm_oracle - LLM Oracle is a GPT-4 powered tool for predicting future events. It's like a Magic 8 Ball that can perform basic research, calculations, and reasoning.
- NExT-GPT - Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
- zero-shot-replication
- Baichuan-7B - A large-scale 7B pretraining language model developed by BaiChuan-Inc.
- Baichuan-13B - A 13B large language model developed by Baichuan Intelligent Technology
- chatgpt-plugin - CeylonAI: Streamlining chatbot plugin development with our open-source template project.
- human-eval - Code for the paper "Evaluating Large Language Models Trained on Code"
- slidev-theme-academic - Academic presentations with Slidev made simple 🎓
- chatgpt_academic - Provides a graphical interface for GPT/GLM, specially optimized for the paper reading and polishing experience; modular design with custom shortcut buttons & function plugins; code-block table display and dual rendering of TeX formulas; Python and C++ project analysis & self-explanation features; PDF/LaTeX paper translation & summarization; parallel queries to multiple LLM models; supports local models such as Tsinghua ChatGLM [Moved to: https://github.com/binary-husky/gpt_academic]
- WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath