| | circuit_training | Ax |
|---|---|---|
| Mentions | 7 | 3 |
| Stars | 687 | 2,277 |
| Growth | 1.9% | 1.3% |
| Activity | 6.9 | 9.8 |
| Latest commit | 12 days ago | 6 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
circuit_training
- The False Dawn: Reevaluating Google's RL for Chip Macro Placement
>> It is sad that you are providing a platform for someone's resentments.
The claims about independent replication refer to Google's circuit_training repository[1]. The UCSD team has conclusively shown this claim was materially false (see section 3 of their paper[2]).
BTW, Prof. Andrew Kahng, who headed the UCSD effort, initially wrote an extremely favorable editorial about the Nature paper[3].
[1] https://github.com/google-research/circuit_training
[2] https://arxiv.org/pdf/2302.11014.pdf
[3] https://www.nature.com/articles/d41586-021-01515-9
- Did recent AI events change your life plans?
- Suggest some final year project ideas for electronics engineering using RL
Depending on your current level and coding knowledge, I would highly recommend building on an existing RL platform such as Circuit Training, and then exploring RL aspects orthogonal to the original paper in your own work. Examples could be adopting some of the recent work on more effective sample spaces, quantifying uncertainty about the optimality of the resulting design, or adding a further degree of freedom to the framework.
- Circuit Training: An open-source RL framework for generating chip floor plans
- Google fires another AI researcher who reportedly challenged findings
On the other hand, the research is open sourced here: https://github.com/google-research/circuit_training
The TF-Agents team replicated the RL training (with the corresponding teams' very deep collaboration) and open-sourced it here:
https://github.com/google-research/circuit_training
It pretty much gets the same results as found in the Nature paper.
The original codebase was heavily research-focused, used TF1, could not run distributed training outside of Google's infra, and made it hard to try algorithms other than PPO. So it was reimplemented on top of TF2, using distributed training and collection technologies developed by the TF-Agents team at Google Brain and infra teams at DeepMind.
Everyone is welcome to poke at the training code and the model, and convince themselves that it does what it says on the box :)
- Circuit Training: A framework for generating chip floor plans with Deep RL
Ax
- Using Large Language Models for Hyperparameter Optimization, Zhang et al. 2023 [GPT-4 is quite good at finding the optimal hyperparameters for machine learning tasks]
Why not use a Bayesian optimization framework like Ax instead? https://ax.dev/
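For readers unfamiliar with frameworks like Ax, the core workflow is an ask/tell loop: the framework proposes a trial, you evaluate it, and you report the result back. Below is a minimal pure-Python sketch of that loop; to keep it self-contained, the proposer is plain random sampling, a deliberate stand-in for the Gaussian-process surrogate and acquisition function that a Bayesian optimization framework like Ax would use. All names (`ask`, `optimize`, the toy objective) are invented for the sketch, not Ax's API.

```python
import random

def ask(bounds):
    # Stand-in proposer: a Bayesian optimizer would fit a surrogate
    # model here and maximize an acquisition function; we just sample
    # uniformly from the search space.
    return {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}

def optimize(objective, bounds, n_trials=30, seed=0):
    random.seed(seed)
    best, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = ask(bounds)       # "ask": get the next trial
        result = objective(params) # evaluate the trial
        if result < best_loss:     # "tell": record the outcome
            best, best_loss = params, result
    return best, best_loss

# Toy objective: pretend validation loss is minimized at lr=0.01, dropout=0.2.
def val_loss(p):
    return (p["lr"] - 0.01) ** 2 + (p["dropout"] - 0.2) ** 2

best, loss = optimize(val_loss, {"lr": (1e-4, 0.1), "dropout": (0.0, 0.5)})
```

The practical appeal of Ax over LLM-suggested hyperparameters is that the proposer is model-based and sample-efficient, while the evaluation loop stays exactly this simple.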
- BoTorch – Bayesian Optimization in PyTorch
- Did recent AI events change your life plans?
What are some alternatives?
- botorch - Bayesian optimization in PyTorch
- optimas - Optimization at scale, powered by libEnsemble
- vortex-auv - Software for guidance, navigation and control for the Vortex AUVs. Purpose-built for competing in AUV/ROV competitions.