dotenv-vault VS mlc-llm

| | dotenv-vault | mlc-llm |
|---|---|---|
| Mentions | 9 | 89 |
| Stars | 1,024 | 17,150 |
| Growth | 2.9% | 4.3% |
| Activity | 8.6 | 9.9 |
| Latest commit | 3 months ago | 5 days ago |
| Language | TypeScript | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dotenv-vault
- Node.js 20.6 adds built-in support for .env files
dotenv-vault is another popular package that lets you encrypt your secrets and decrypt the file just in time. It is quite helpful for production and CI environments, but the new built-in support does not cover it currently. (A minimal loading sketch follows this list.)
- FLaNK 04 March 2024
- Dotenv-vault: a CLI to sync .env files across machines, envs, and team members
- SecureStore VS dotenv-vault - a user suggested alternative
2 projects | 4 Nov 2023
A secrets manager for .env and .env.vault files. Sync your secrets across teams, machines, and environments.
- Show HN: Shello – Wrangle Environment Variables
- appsettings.json secrets for local and for deployments
- What are the best ways to prevent writing secrets in the code.
- Show HN: Dotenv-vault – Sync your .env files, quickly and securely
- Adding URL Search Parameters to Imports?
Works with dotenv-vault. Learn more at dotenv.org.
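The dotenv-vault excerpts above all describe the same flow: secrets are committed as an encrypted .env.vault file and decrypted just in time with a key that lives outside the repository. Below is a minimal sketch of that loading step, assuming the python-dotenv-vault companion package documented at dotenv.org; the package name, the load_dotenv() helper, the DOTENV_KEY variable, and the DATABASE_URL secret are assumptions for illustration, not details taken from this page.

```python
# pip install python-dotenv-vault   # assumed companion package; see dotenv.org
import os

from dotenv_vault import load_dotenv  # assumed drop-in for python-dotenv's loader

# Without DOTENV_KEY set, this behaves like python-dotenv and reads a plain .env.
# With DOTENV_KEY set (e.g. in CI or production), it decrypts the committed
# .env.vault file just in time and loads the values into os.environ.
load_dotenv()

db_url = os.environ.get("DATABASE_URL")  # hypothetical secret name
print("DATABASE_URL loaded:", db_url is not None)
```

The CLI mentioned above handles the syncing side (pushing and pulling the encrypted file across machines, environments, and team members); the snippet only covers the decrypt-at-load step.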
mlc-llm
- FLaNK 04 March 2024
- Ai on a android phone?
This one uses the GPU; it doesn't support Mistral yet: https://github.com/mlc-ai/mlc-llm
- MLC vs llama.cpp
I have tried running Mistral 7B with MLC on my M1 (Metal), and it kept crashing (git issue with description); memory inefficiency problems.
- [Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget
Project: https://github.com/mlc-ai/mlc-llm
- Scaling LLama2-70B with Multi Nvidia/AMD GPU
- AMD May Get Across the CUDA Moat
For LLM inference, a shoutout to MLC LLM, which runs LLM models on basically any API that's widely available: https://github.com/mlc-ai/mlc-llm
(See the usage sketch after this list.)
- ROCm Is AMD's #1 Priority, Executive Says
One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which have a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...
The normal workaround is to assign the closest architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help.
Also, it looks like some of your tested projects are OpenCL? For me, I do something like `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.
My recent interest has been LLMs, and this is my general step-by-step guide for those (llama.cpp, exllama), for anyone interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus
I didn't port the docs back in, but here's also a step-by-step account of my adventures getting TVM/MLC working with an APU: https://github.com/mlc-ai/mlc-llm/issues/787
From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
(A sketch of the architecture-override workaround follows this list.)
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
- Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
You already have TVM for the cross-platform stuff; see https://tvm.apache.org/docs/how_to/deploy/android.html, https://octoml.ai/blog/using-swift-and-apache-tvm-to-develop..., or https://github.com/mlc-ai/mlc-llm
- Ask HN: Are you training and running custom LLMs and how are you doing it?
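Several of the comments above point to the same thing: MLC LLM compiles a model once (via TVM) and then runs it on whatever GPU API is available locally (Metal, Vulkan, CUDA, ROCm). As a rough usage sketch, here is the ChatModule-style Python interface the project shipped around this period; module paths, the prebuilt model id, and argument names are assumptions that varied between releases, so check the MLC LLM docs for the current API.

```python
# pip install --pre mlc-chat-nightly   # assumed install route for this period
from mlc_chat import ChatModule                 # assumed module path
from mlc_chat.callback import StreamToStdout    # assumed streaming helper

# "Llama-2-7b-chat-hf-q4f16_1" is an illustrative prebuilt/quantized model id;
# the same compiled artifact targets Metal, Vulkan, CUDA, or ROCm backends.
cm = ChatModule(model="Llama-2-7b-chat-hf-q4f16_1")

# Stream tokens to stdout as they are generated.
cm.generate(
    prompt="In two sentences, what does TVM contribute to MLC LLM?",
    progress_callback=StreamToStdout(callback_interval=2),
)

print(cm.stats())  # prefill/decode throughput, if this version exposes it
```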
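The ROCm comment above boils down to a concrete workaround: when a card (e.g. gfx1032) is missing from ROCm's support list, override the reported architecture with the closest supported one before launching anything GPU-bound. Here is a minimal sketch of doing that from Python; only the HSA_OVERRIDE_GFX_VERSION value comes from the comment, while the binary name, model path, and prompt are placeholders.

```python
import os
import subprocess

# Workaround quoted above: a gfx1032 card is not on ROCm's support list, so tell
# the HSA runtime to treat it as the closest supported architecture, gfx1030.
env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="10.3.0")

# Placeholder invocation of a ROCm-enabled llama.cpp build; adjust the binary,
# model path, and prompt for your own setup.
subprocess.run(
    ["./main", "-m", "models/mistral-7b-instruct.Q4_K_M.gguf", "-p", "Hello"],
    env=env,
    check=True,
)
```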
What are some alternatives?
infisical - ♾ Infisical is the open-source secret management platform: Sync secrets across your team/infrastructure and prevent secret leaks.
llama.cpp - LLM inference in C/C++
env-manager - Garnet is a developer-friendly, open-source tool for managing environment variables and secrets.
ggml - Tensor library for machine learning
envars - Securely load environment variables (configuration settings) from .env files with support of Google Secret Manager.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
dotenv - Loads environment variables from .env for nodejs projects.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama-cpp-python - Python bindings for llama.cpp
typedotenv - dotenv utility for TypeScript
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.