Huatuo-Llama-Med-Chinese
Repo for BenTsao (original name: HuaTuo, 华驼), an instruction-tuned large language model built on Chinese medical knowledge (by SCIR-HI).
ChatDoctor
By Kent0n-Li
| | Huatuo-Llama-Med-Chinese | ChatDoctor |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 4,272 | 3,373 |
| Growth | - | - |
| Activity | 6.9 | 7.6 |
| Last commit | 6 months ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Huatuo-Llama-Med-Chinese
Posts with mentions or reviews of Huatuo-Llama-Med-Chinese. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-09.
- Local medical LLM
  Huatuo-Llama-Med-Chinese: https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese
ChatDoctor
Posts with mentions or reviews of ChatDoctor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-09.
- Local medical LLM
  ChatDoctor: https://github.com/Kent0n-Li/ChatDoctor
- [D] Can we use instructions to include knowledge into LLMs?
  I see ChatDoctor used real and simulated conversations to "include knowledge" in the LLM, and for them this worked. I would like to see more examples of this approach: https://github.com/Kent0n-Li/ChatDoctor. They also share their fine-tuning approach.
- [R] Experience fine-tuning GPT3 on medical research papers
  I think ChatDoctor has an interesting approach that could be useful for you: https://github.com/Kent0n-Li/ChatDoctor
What are some alternatives?
When comparing Huatuo-Llama-Med-Chinese and ChatDoctor you can also consider the following projects:
visual-med-alpaca - Visual Med-Alpaca is an open-source, multimodal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
DoctorGLM - A Chinese medical consultation model based on ChatGLM-6B.
Cornucopia-LLaMA-Fin-Chinese - Cornucopia (聚宝盆): a series of open-source, commercially usable Chinese financial large language models, with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.).
SELM - Symmetric Encryption with Language Models
InternGPT - InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com.
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.