Top 23 Python Paper Projects
-
gpt-2
They did release GPT-2 under the MIT License.
-
transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials (迁移学习, i.e., transfer learning).
Project mention: [D] Medium Article: Adaptive Learning for Time Series Forecasting | reddit.com/r/MachineLearning | 2022-10-02
The source is available at https://github.com/jindongwang/transferlearning. I'll also publish a post about how to code the model for time series.
-
qlib
Qlib is an AI-oriented quantitative investment platform that aims to realize the potential, empower research, and create value using AI technologies in quantitative investment, from exploring ideas to implementing productions. Qlib supports diverse machine learning modeling paradigms, including supervised learning, market dynamics modeling, and RL.
Project mention: qlib: NEW Other Models - star count:10947.0 | reddit.com/r/algoprojects | 2023-05-27 -
jukebox
The demo code is available at https://github.com/openai/jukebox.
-
-
Tacotron-2
Project mention: [D] How to Create a Fixed-Length, Binary, Sequence of Tokens Embedding? | reddit.com/r/MachineLearning | 2022-09-26
This reminds me a lot of the problem of predicting an integer-valued output for an image or audio sequence, where you want to predict a value between 0 and 255 (or even, say, 65,536), and you want to help the model understand that the result is categorical but that some categories are closer to each other than others. I recently learned that one approach to this, used in Tacotron 2 (speech synthesis), is called a mixture of logistics. There is a blog post that only skims it but links to a very in-depth explanation in a GitHub issue, of all places; it might be interesting for you.
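The mixture-of-logistics idea can be sketched in a few lines: instead of a flat softmax over all bins, the model predicts a few logistic components (weights, locations, scales), and the probability of bin k is the mixture's CDF mass falling inside that bin. A minimal pure-Python sketch, with illustrative function names and parameter values (not Tacotron 2's actual code):

```python
import math

def logistic_cdf(x, loc, scale):
    """CDF of a single logistic distribution."""
    return 1.0 / (1.0 + math.exp(-(x - loc) / scale))

def mixture_of_logistics_prob(k, weights, locs, scales, num_bins=256):
    """Probability that a discretized mixture of logistics emits bin k.

    Bin k covers the interval [k - 0.5, k + 0.5); the edge bins absorb the
    tails, so the probabilities over all bins sum to 1.
    """
    lo = k - 0.5 if k > 0 else -math.inf
    hi = k + 0.5 if k < num_bins - 1 else math.inf
    prob = 0.0
    for w, mu, s in zip(weights, locs, scales):
        cdf_hi = 1.0 if hi == math.inf else logistic_cdf(hi, mu, s)
        cdf_lo = 0.0 if lo == -math.inf else logistic_cdf(lo, mu, s)
        prob += w * (cdf_hi - cdf_lo)
    return prob
```

Because neighboring bins draw from overlapping regions of the same smooth densities, nearby categories automatically get similar probability mass, which is exactly the ordinal structure a plain softmax ignores.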
-
multiagent-particle-envs
Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
Project mention: Why is Q-learning always presented in such a math-heavy fashion? I just spent an hour dissecting this formula with a student -- only to strongly suspect there is a typo. Are there any good Q-Learning tutorials out there that *explain* the math instead of dropping it from the sky? | reddit.com/r/learnmachinelearning | 2022-10-12 -
-
maddpg
Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
Project mention: How is the backward pass performed in MADDPG algorithm from MARL | dev.to | 2022-10-05
I'm using the MADDPG algorithm from https://github.com/openai/maddpg/blob/master/maddpg/trainer/maddpg.py. I understand the forward pass for both the actor and critic networks, but not how they are updated. At lines 188 and 191 the authors compute the critic loss and actor loss; can anyone explain how the critic and actor networks are updated from these? Also, as far as I understand, when the number of agents increases from 3 to 6 in the simple spread scenario, the computation time for the Q loss and P loss at lines 188 and 191 grows super-linearly. I assume this is because both losses use the Q values, and the input dimension for computing the Q values grows linearly with the number of agents. It would be great if anyone could help me understand this backpropagation phase better and explain why the computation time grows super-linearly. I also timed the Q loss and P loss over 60,000 episodes of simple spread (3 agents, 3 landmarks, 0 adversaries). Thanks in advance for the help!
Q loss: 3 agents 74.31 s, 6 agents 243.31 s (about 3x)
P loss: 3 agents 114.86 s, 6 agents 321.76 s (about 3x)
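One plausible source of the super-linear growth can be shown with a back-of-envelope count (the dimensions below are illustrative, not taken from the repo): in MADDPG each agent's centralized critic takes all N agents' observations and actions as input, so its first-layer width grows linearly with N, and N such critics are updated per training step, so the total first-layer cost grows roughly quadratically in N.

```python
def critic_input_dim(n_agents, obs_dim, act_dim):
    """Input width of one centralized MADDPG critic Q_i(o_1..o_N, a_1..a_N)."""
    return n_agents * (obs_dim + act_dim)

def total_critic_cost(n_agents, obs_dim, act_dim, hidden=64):
    """Rough multiply count for the first layer of all N critics in one
    training step: N critics, each with an input that itself scales with N."""
    return n_agents * critic_input_dim(n_agents, obs_dim, act_dim) * hidden

# Doubling the number of agents doubles both the number of critics and each
# critic's input width, so this term grows about 4x, not 2x.
cost_3 = total_critic_cost(3, obs_dim=18, act_dim=5)
cost_6 = total_critic_cost(6, obs_dim=18, act_dim=5)
```

The observed ~3x wall-clock growth is consistent with this quadratic term plus per-step overheads that do not depend on N.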
-
research-contributions
Project mention: Pretrained Resnet50 for kidney detection (Kits19) | reddit.com/r/MLQuestions | 2023-01-11
You can find a pretrained ResNet, but probably not one that's been trained on a kidney object-detection dataset. The only kidney CT dataset I know of is for segmentation, not object detection, so you'll have to convert the segmentation masks to bounding boxes and train your own model. Take a look at monai.io for potential resources.
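Converting segmentation masks to boxes is mechanical: the box is just the extent of the nonzero pixels. A minimal pure-Python sketch (a real pipeline would use NumPy or MONAI utilities instead of list comprehensions):

```python
def mask_to_bbox(mask):
    """Convert a binary 2D mask (list of rows of 0/1) to a bounding box
    (x_min, y_min, x_max, y_max) in pixel coordinates, or None if empty."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)

# Toy 4x5 mask with a single blob of foreground pixels.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
```

For multi-instance masks you would first split the mask into connected components and emit one box per component.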
-
-
rpg_timelens
Repository relating to the CVPR21 paper TimeLens: Event-based Video Frame Interpolation
-
awesome-systematic-trading
A curated list of awesome libraries, packages, strategies, books, blogs, tutorials for systematic trading. (by edarchimbaud)
Project mention: awesome-systematic-trading: NEW Alternative Finance - star count:213.0 | reddit.com/r/algoprojects | 2023-05-13 -
-
-
pubs
Project mention: Minimalist way of managing academic papers? | reddit.com/r/commandline | 2022-10-05
-
-
deep-kernel-transfer
Official pytorch implementation of the paper "Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels" (NeurIPS 2020)
Project mention: What approach to take predicting a simple data stream? | reddit.com/r/neuralnetworks | 2022-10-03
An interesting approach to small datasets. Here is an implementation I'll look at: https://github.com/BayesWatch/deep-kernel-transfer
-
-
tabular-dl-num-embeddings
(NeurIPS 2022) The official implementation of the paper "On Embeddings for Numerical Features in Tabular Deep Learning"
-
-
train-procgen
Code for the paper "Leveraging Procedural Generation to Benchmark Reinforcement Learning"
Project mention: Procgen environments "easy" vs "hard" difficulty - what are they? | reddit.com/r/reinforcementlearning | 2022-12-26
Found relevant code at https://github.com/openai/train-procgen + all code implementations here
-
heinsen_routing
Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
Project mention: Unlimiformer: Long-Range Transformers with Unlimited Length Input | news.ycombinator.com | 2023-05-05After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions via some form of attention, which, as we all know, is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari, which uses far less compute and seems promising. Otherwise, for long sequences I've yet to find anything better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear about them.
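The O(n²) point is just counting pairwise scores: full self-attention computes one score per (query, key) pair, while an n×m scheme attends from n tokens to a small, fixed set of m latents, keeping cost linear in n. A toy, pure-Python illustration (names and values are illustrative only):

```python
import math

def attention_scores(queries, keys):
    """Scaled dot-product scores: one score per (query, key) pair, so the
    score matrix has len(queries) * len(keys) entries."""
    d = len(queries[0])
    return [[sum(q * k for q, k in zip(qv, kv)) / math.sqrt(d) for kv in keys]
            for qv in queries]

n, m, d = 8, 2, 4
x = [[float(i + j) for j in range(d)] for i in range(n)]  # toy token sequence
latents = [[0.5] * d for _ in range(m)]                   # m << n summary slots

self_scores = attention_scores(x, x)         # n x n: quadratic in n
cross_scores = attention_scores(x, latents)  # n x m: linear in n for fixed m
```

The quadratic table is what FlashAttention computes exactly (just without materializing it in slow memory), whereas n×m schemes change which pairs interact at all.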
-
Python Paper related posts
- BING IS NOW THE DEFAULT SEARCH FOR CHATGPT
- Don Knuth Plays with ChatGPT
- Best model for music generation?
- What frustrates you about using AI, or about the discussion around it? (originally in German)
- awesome-systematic-trading: NEW Alternative Finance - star count:213.0
Index
What are some of the best open-source Paper projects in Python? This list will help you:
Rank | Project | Stars |
---|---|---|
1 | gpt-2 | 19,120 |
2 | transferlearning | 11,481 |
3 | qlib | 10,955 |
4 | jukebox | 6,914 |
5 | ALAE | 3,399 |
6 | Tacotron-2 | 2,157 |
7 | multiagent-particle-envs | 1,849 |
8 | FSL-Mate | 1,508 |
9 | maddpg | 1,267 |
10 | research-contributions | 707 |
11 | diffwave | 601 |
12 | rpg_timelens | 556 |
13 | awesome-systematic-trading | 455 |
14 | SingleViewReconstruction | 253 |
15 | wavegrad | 236 |
16 | pubs | 235 |
17 | efficient-attention | 225 |
18 | deep-kernel-transfer | 177 |
19 | cam_board | 176 |
20 | tabular-dl-num-embeddings | 164 |
21 | Efficient-VDVAE | 159 |
22 | train-procgen | 154 |
23 | heinsen_routing | 139 |