awesome-fsharp
candle
| | awesome-fsharp | candle |
|---|---|---|
| Mentions | 4 | 17 |
| Stars | 1,142 | 13,475 |
| Growth | 0.4% | 4.4% |
| Activity | 3.4 | 9.9 |
| Latest Commit | 3 months ago | 3 days ago |
| Language | - | Rust |
| License | Creative Commons Zero v1.0 Universal | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-fsharp
- Should I Haskell or OCaml?
2. https://github.com/fsprojects/awesome-fsharp#data-science
- Best resources for learning F# to write boring apps?
I found this list of resources and libraries for F# which should get you started if you're looking for a specific library, like one for Postgres.
- What does it take to be proficient at something?
Yeah, it's not the most mainstream programming language, but despite that there are some interesting F# projects.
- Writing high performance F# code
I'd suggest having a look through https://github.com/fsprojects/awesome-fsharp, focusing on the highly starred items (though some are not strictly F# but just something F# can use) - maybe something like Suave (a backend web framework).
The bigger problem I found going down the F# route is that F# libraries go dead. For long-term projects, it's far better not to use any third-party F# libraries and instead use popular third-party .NET libraries from the C# world, or the core .NET libraries. These days I mostly just use C#. The advantages of F# are not that big compared to just writing C# with a similar coding mindset.
candle
- karpathy/llm.c
Candle already exists[1], and it runs pretty well. Can use both CUDA and Metal backends (or just plain-old CPU).
[1] https://github.com/huggingface/candle
- Best alternative for Python
- Is there any LLM that can be installed without Python?
Check out Candle! It's a Deep Learning framework for Rust. You can run LLMs in binaries.
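To make "run LLMs in binaries" concrete, here is a hypothetical sketch (not candle's actual API) of the greedy decode loop at the heart of any such binary, with a toy stand-in for the model's forward pass:

```rust
// Hypothetical sketch of a greedy LLM decode loop in a plain Rust
// binary. `forward` is a toy stand-in for a real model's forward pass;
// a framework like candle would supply the real tensors and weights.
fn forward(tokens: &[u32]) -> Vec<f32> {
    // Toy "model": the logit for token t peaks at (last token + 1).
    let last = *tokens.last().unwrap() as f32;
    (0..8).map(|t| -((t as f32) - (last + 1.0)).abs()).collect()
}

// Greedy sampling: pick the index of the largest logit.
fn argmax(logits: &[f32]) -> u32 {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i as u32)
        .unwrap()
}

fn main() {
    let mut tokens = vec![0u32]; // start-of-sequence token
    for _ in 0..4 {
        let logits = forward(&tokens);
        tokens.push(argmax(&logits));
    }
    println!("{tokens:?}"); // the toy model counts upward: [0, 1, 2, 3, 4]
}
```

The whole loop compiles to a single static binary with no Python runtime, which is the appeal of the Rust route.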
- Announcing Kalosm - a local-first AI meta-framework for Rust
Kalosm is a meta-framework for AI written in Rust using candle. Kalosm supports local quantized large language models like Llama, Mistral, Phi-1.5, and Zephyr. It also supports other quantized models like Wuerstchen, Segment Anything, and Whisper. In addition to local models, Kalosm supports remote models like GPT-4 and ada embeddings.
- RFC: candle-lora
I have been working on a machine learning library called candle-lora for Candle. It implements a technique called LoRA (low-rank adaptation), which allows you to reduce a model's trainable parameter count by wrapping and freezing existing layers.
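The parameter savings come from training a low-rank update instead of the full weight matrix: the frozen weight W (d × k) gains a trainable product B (d × r) · A (r × k), so the trainable count drops from d·k to r·(d + k). A minimal sketch of that arithmetic (not the candle-lora API):

```rust
// Hypothetical sketch of the LoRA parameter-count arithmetic, not the
// candle-lora API. LoRA freezes the original weight W (d x k) and
// trains only the low-rank factors B (d x r) and A (r x k).
fn lora_params(d: usize, k: usize, r: usize) -> usize {
    r * (d + k)
}

fn main() {
    let (d, k) = (4096, 4096); // a typical transformer projection matrix
    let full = d * k; // 16,777,216 trainable params for a full update
    let lora = lora_params(d, k, 8); // 65,536 with rank r = 8
    println!(
        "LoRA trains {:.2}% of the full parameter count",
        100.0 * lora as f64 / full as f64
    ); // roughly 0.39% for these assumed sizes
}
```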
- ExecuTorch: Enabling On-Device Inference for embedded devices
[2] https://github.com/huggingface/candle/issues/313
- [P] Open-source project to run LLMs locally in the browser, such as Phi-1.5, for fully private inference
We provide full local inference in the browser, using Hugging Face libraries like transformers.js or candle for WASM inference.
- Update on the Candle ML framework.
We first announced Candle, a minimalist ML framework in Rust, 6 weeks ago. Since then we've focused on adding various recent models and improving the framework to support the necessary features efficiently. You can check out a gallery of the examples; supported models include:
- Should I Haskell or OCaml?
How did you select those two as your options?
I'm just a hobbyist who enjoys programming, and I eventually wanted to expand beyond Python. I looked at Haskell, read Learn You a Haskell, and did some Exercism exercises, but never got anywhere close to being able to use it for real projects. I've been trying to learn about Lisp lately and feel like I've come to a similar dead end.
On the other hand, both Go and Rust have felt fulfilling and practical, with static typing and solid tooling, cross-compilation, static binaries, and dependency management that is a huge breath of fresh air coming from Python.
The ML / data science scene is nowhere near as developed as in Python, and I still lean on jupyter/polars/PyTorch here, but I think the candle project[0] seems very interesting. Compiling Whisper down to a single CUDA-leveraging binary for fast local transcription is pretty cool!
[0]: https://github.com/huggingface/candle
- Minimalist ML framework for Rust