mandala vs determined

| | mandala | determined |
|---|---|---|
| Mentions | 8 | 10 |
| Stars | 228 | 2,868 |
| Growth | - | 2.5% |
| Activity | 6.3 | 9.9 |
| Latest Commit | about 2 months ago | 2 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mandala
-
Mandala: A little playground for testing pixel logic patterns
I was so confused, expecting this to be some trickery related to the computational-graph-memoization-and-exploration tool "mandala" https://github.com/amakelov/mandala
- Mandala: Notebook memoization on steroids, used by Anthropic
-
Improve Jupyter Notebook Reruns by Caching Cells
This is neat and self-contained! But as someone running experiments with a high degree of interactivity, I often have an orthogonal requirement: add more computations to the same cell without recomputing previous computations done in the cell (or in other cells).
For a concrete example, often in an ML project you want to study how several quantities vary across several parameters. A straightforward workflow for this is: write some nested loops, collect results in python dictionaries, finally put everything together in a dataframe and compare (by plotting or otherwise).
However, after looking at the results, maybe you spot some trend and wonder if it will continue if you tweak one of the parameters by using a new value for it; of course, you also want to look at the previous values and bring everything together in the same plot(s). You now have a problem: either re-run the cell (thus losing previous work, which is annoying even if you have to wait 1 minute - you know it's a wasted minute!), or write the new computation in a new cell, possibly with a lot of redundancy (which over time makes the notebook hard to navigate and keep consistent).
So, this and other considerations eventually convinced me that the function is more natural than the cell as an interface/boundary at which caching should be implemented, at least for my use cases (coming from ML research). I wrote a framework based on this idea, with lots of other features (some quite experimental/unusual) to turn this into a feasible experiment management tool - check it out at https://github.com/amakelov/mandala
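The function-as-caching-boundary idea above can be sketched with a minimal in-memory memoizer. This is purely an illustration under stated assumptions, not mandala's actual API: the decorator name `memo`, the cache-keying scheme, and the `train` example are all invented for the sketch, and a real tool would persist results to disk.

```python
import hashlib
import pickle

_CACHE = {}  # in-memory only; a real experiment-management tool would persist this

def memo(fn):
    """Memoize a function on a hash of its name and pickled arguments."""
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            pickle.dumps((fn.__name__, args, sorted(kwargs.items())))
        ).hexdigest()
        if key not in _CACHE:
            _CACHE[key] = fn(*args, **kwargs)
        return _CACHE[key]
    return wrapper

calls = {"train": 0}  # counts how often the body actually runs

@memo
def train(lr, epochs):
    # stand-in for an expensive computation
    calls["train"] += 1
    return lr * epochs

# First sweep over two parameter values:
for lr in (0.1, 0.01):
    train(lr, epochs=10)

# Extending the sweep with one new value re-runs only the new call;
# the first two hit the cache, so no previous work is lost.
for lr in (0.1, 0.01, 0.001):
    train(lr, epochs=10)
```

With cell-level caching, extending the loop would invalidate the whole cell; keying the cache at the function-call level is what lets the old results survive.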
P.S.: I notice you use `pickle` for the hashing - `joblib.dump` is faster for objects containing numpy arrays, which covers many common ML objects.
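A content hash over serialized bytes, as alluded to above, can be done with the standard library alone; the helper name `content_hash` is an assumption for this sketch. For objects dominated by large numpy arrays, joblib's serialization (e.g. `joblib.hash`) is typically faster because it handles array buffers natively, but it is an extra dependency, so the sketch sticks to `pickle`.

```python
import hashlib
import pickle

def content_hash(obj) -> str:
    """Hash an arbitrary picklable object by its serialized bytes.

    Works for anything picklable; for numpy-heavy objects, consider
    joblib's hashing instead, which avoids slow generic pickling of
    large array buffers.
    """
    return hashlib.sha256(pickle.dumps(obj, protocol=4)).hexdigest()

h1 = content_hash({"weights": [0.1, 0.2], "epoch": 3})
h2 = content_hash({"weights": [0.1, 0.2], "epoch": 3})
```

Equal inputs produce equal digests, which is the property a cache key needs.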
-
ML Experiments Management with Git
Another option, that manages versioning of your computational graph and its results and provides extremely elegant query-able memoization is Mandala https://github.com/amakelov/mandala
It is a much simpler and much more magical piece of software that truly expanded how I think about writing, exploring, and experimenting with code. Even if you never use it, you probably would really enjoy reading the blog posts the author wrote about the design of the tool https://amakelov.github.io/blog/pl/
-
Snakemake – A framework for reproducible data analysis
You might like mandala (https://github.com/amakelov/mandala) - it is not a build recipe tool; rather, it is a tool that tracks the history of how your builds / computational graph has changed, and ties it to what the data looked like at each such step.
-
Piper: A proposal for a graphy pipe-based build system
u/rust4yy: I've been building mandala, a Python framework for (among other things) incremental computing. One way to think of it is "a build system for Python objects", except the units of computation are Python functions.
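The "build system for Python objects" framing can be illustrated with a toy incremental step. This is a sketch of the general technique, not mandala's design: the `step` helper, the hash-based build cache, and the example functions are all assumptions.

```python
import hashlib
import pickle

results = {}   # input-hash -> output, like a build cache of artifacts
built = []     # records which input sets were actually (re)built

def step(fn, *inputs):
    """Run fn(*inputs) only if this exact input set was never built before."""
    key = hashlib.sha256(pickle.dumps((fn.__name__, inputs))).hexdigest()
    if key not in results:
        built.append(inputs)
        results[key] = fn(*inputs)
    return results[key]

def double(x):
    return 2 * x

def inc(x):
    return x + 1

a = step(double, 3)        # built
b = step(inc, a)           # built; outputs feed downstream steps, like make targets
c = step(double, 3)        # cache hit: not rebuilt
d = step(double, 4)        # new input: built
```

Here the units of computation are Python functions and the "artifacts" are their return values, which is one way to read the comparison to a build system.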
determined
-
Open Source Advent Fun Wraps Up!
17. Determined AI | Github | tutorial
-
ML Experiments Management with Git
Use Determined if you want a nice UI https://github.com/determined-ai/determined#readme
- Determined: Deep Learning Training Platform
-
Queueing/Resource Management Solutions for Self Hosted Workstation?
I looked up and found [Determined Platform](determined.ai), though it looks like a very young project, and I don't know whether it's reliable enough.
-
Ask HN: Who is hiring? (June 2022)
- Developer Support Engineer (~1/3 client facing, triaging feature requests and bug reports, etc; 2/3 debugging/troubleshooting)
We are developing enterprise-grade artificial intelligence products/services for AI engineering teams and Fortune 500 companies, and need more software devs to meet the increasing demand.
Find out more at https://determined.ai/. If AI piques your curiosity or you want to interface with highly skilled engineers in the community, apply within (search "determined ai" at careers.hpe.com and drop me a message at asnell AT hpe PERIOD com).
-
How to train large deep learning models as a startup
Check out Determined https://github.com/determined-ai/determined to help manage this kind of work at scale: Determined leverages Horovod under the hood, automatically manages cloud resources and can get you up on spot instances, T4's, etc. and will work on your local cluster as well. Gives you additional features like experiment management, scheduling, profiling, model registry, advanced hyperparameter tuning, etc.
Full disclosure: I'm a founder of the project.
-
[D] managing compute for long running ML training jobs
These are some of the problems we are trying to solve with the Determined training platform. Determined can be run with or without k8s - the k8s version inherits some of the scheduling problems of k8s, but the non-k8s version uses a custom gang scheduler designed for large scale ML training. Determined offers a priority scheduler that allows smaller jobs to run while being able to schedule a large distributed job whenever you need, by setting a higher priority.
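The priority-plus-gang behaviour described above can be sketched in a few lines. This is a toy model, not Determined's scheduler: the `Job` fields, the slot accounting, and the admission loop are assumptions made for the illustration.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    slots: int      # gang scheduling: all slots must be free at once
    priority: int   # higher priority runs first

def schedule(jobs, free_slots):
    """Admit jobs in priority order; a job runs only if its whole gang fits."""
    running = []
    for job in sorted(jobs, key=lambda j: -j.priority):
        if job.slots <= free_slots:
            running.append(job.name)
            free_slots -= job.slots
    return running

jobs = [
    Job("small-notebook", slots=1, priority=1),
    Job("big-distributed", slots=8, priority=5),
    Job("medium-sweep", slots=4, priority=3),
]

on_small_cluster = schedule(jobs, free_slots=8)   # only the high-priority gang fits
on_big_cluster = schedule(jobs, free_slots=13)    # everything fits
```

The gang constraint (all-or-nothing slot allocation) is what distinguishes this from ordinary per-task scheduling: a large distributed job either gets its full set of slots or waits.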
-
Cerebras’ New Monster AI Chip Adds 1.4T Transistors
Ah I see - I think we're pretty much on the same page in terms of timetables. Although if you include TPU, I think it's fair to say that custom accelerators are already a moderate success.
Updated my profile. I've been working on DL training platforms and distributed training benchmarking for a bit so I've gotten a nice view into the GPU/TPU battle.
Shameless plug: you should check out the open-source training platform we are building, Determined[1]. One of the goals is to take our hard-earned expertise on training infrastructure and build a tool where people don't need to have that infrastructure expertise. We don't support TPUs, partially because of a lack of demand/TPU availability, and partially because our PyTorch TPU experiments were so unimpressive.
[1] GH: https://github.com/determined-ai/determined, Slack: https://join.slack.com/t/determined-community/shared_invite/...
-
[D] Software stack to replicate Azure ML / Google Auto ML on premise
Take a look at Determined https://github.com/determined-ai/determined
-
AWS open source news and updates No.41
Determined is an open-source deep learning training platform that makes building models fast and easy. This project provides a CloudFormation template to bootstrap you into AWS, and then a number of tutorials covering how to manage your data, train, and deploy inference endpoints. If you are looking to explore more open-source machine learning projects, check this one out.
What are some alternatives?
oxen-release - Lightning fast data version control system for structured and unstructured machine learning datasets. We aim to make versioning datasets as easy as versioning code.
ColossalAI - Making large AI models cheaper, faster and more accessible
snakemake-wrappers - This is the development home of the Snakemake wrapper repository, see
Dagger.jl - A framework for out-of-core and parallel execution
beaver - Simple, but capable build system and command runner for any project
aws-virtual-gpu-device-plugin - AWS virtual gpu device plugin provides capability to use smaller virtual gpus for your machine learning inference workloads
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
cfn-diagram - CLI tool to visualise CloudFormation/SAM/CDK stacks as visjs networks, draw.io or ascii-art diagrams.
sdk - Metadata store for Production ML
goofys - a high-performance, POSIX-ish Amazon S3 file system written in Go
make-booster - Utility routines to simplify using GNU make and Python
alpa - Training and serving large-scale neural networks with auto parallelization.