Lottery_Ticket_Hypothesis-TensorFlow_2 vs ydata-synthetic

| | Lottery_Ticket_Hypothesis-TensorFlow_2 | ydata-synthetic |
|---|---|---|
| Mentions | 6 | 60 |
| Stars | 33 | 1,292 |
| Growth | - | 2.8% |
| Activity | 4.1 | 7.3 |
| Latest commit | about 1 month ago | 6 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Lottery_Ticket_Hypothesis-TensorFlow_2
-
Freeze certain weights - TensorFlow 2
I have already implemented "The Lottery Ticket Hypothesis" by Frankle et al. in TensorFlow 2. You can refer to the code here. A binary (0/1) mask is applied by element-wise multiplication to keep the number of pruned parameters constant: by default, the gradient-descent weight update modifies every weight, so without the mask the pruned weights would be updated away from zero.
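The masking idea above can be sketched as follows. This is a minimal, hypothetical example (layer sizes, pruning rate, and data are illustrative, not taken from the linked code): after each optimizer step, every kernel is multiplied by its binary mask so that pruned weights stay at zero.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch: freeze pruned weights by element-wise masking.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# One binary mask per kernel: prune the 20% smallest-magnitude weights.
masks = []
for layer in model.layers:
    kernel = layer.get_weights()[0]
    threshold = np.percentile(np.abs(kernel), 20)
    masks.append((np.abs(kernel) >= threshold).astype(kernel.dtype))

x = np.random.rand(32, 784).astype("float32")
y = np.random.randint(0, 10, size=(32,)).astype("int64")

model.train_on_batch(x, y)

# Re-apply the masks so the gradient update to pruned weights is
# discarded and the sparsity level stays constant.
for layer, mask in zip(model.layers, masks):
    kernel, bias = layer.get_weights()
    layer.set_weights([kernel * mask, bias])
```

In a full training loop, the mask re-application runs after every batch, which is what keeps the pruned-parameter count constant across training.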
-
[R] Remove pruned connections
Some of my recent experiments on GitHub may be worth referring to: Lottery Ticket Hypothesis implementation and Neural Network Pruning.
-
TensorFlow Lite: RuntimeError
I am using TensorFlow version: 2.3.0 and Python3. I am experimenting in Quantizing a pruned and trained Conv-2 CNN model. The model architecture is: conv -> conv -> max pool -> dense -> dense -> output for CIFAR-10. You can see the Jupyter-notebook here.
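The quantization step being attempted can be sketched like this. The Conv-2 architecture is reconstructed from the post's description and the filter/unit counts are illustrative assumptions; the converter call itself is the standard TFLite post-training quantization path.

```python
import tensorflow as tf

# Hypothetical reconstruction of the Conv-2 model for CIFAR-10
# (filter and unit counts are assumptions, not from the notebook).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Post-training (dynamic-range) quantization with the TFLite converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

The resulting `tflite_model` is a flatbuffer (bytes) that can be written to disk and loaded with `tf.lite.Interpreter`.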
-
Iterative Pruning: LeNet-300-100 - PyTorch
The code can be accessed here
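The core schedule of iterative magnitude pruning can be sketched independently of the framework. This is a hypothetical NumPy illustration (array size and pruning rate are arbitrary): each round prunes a fixed fraction of the *surviving* weights, so sparsity compounds across rounds.

```python
import numpy as np

# Hypothetical sketch of iterative magnitude pruning: after k rounds at
# per-round rate p, roughly (1 - p)**k of the weights remain.
rng = np.random.default_rng(0)
weights = rng.normal(size=10_000)
mask = np.ones_like(weights)

rate = 0.2  # prune 20% of surviving weights per round
for _ in range(5):
    surviving = np.abs(weights[mask == 1])
    threshold = np.quantile(surviving, rate)
    mask[(np.abs(weights) < threshold) & (mask == 1)] = 0.0

remaining = mask.mean()  # roughly (1 - 0.2) ** 5, i.e. about a third
```

In the lottery-ticket setting, each round would additionally rewind the surviving weights to their initial values before retraining.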
-
Neural Network Compression - Implementation benefits
here
-
ValueError: TensorFlow2 Input 0 is incompatible with layer model
True, removing the he_normal initialization does increase the accuracy. In most of my previous experiments I have used the kernel initialization mentioned in the respective authors' papers; for ResNet I therefore chose Kaiming He initialization, since He is an author of that paper. The default kernel initializer in TF2, however, is 'glorot_uniform', which leads to 60.04% val_accuracy.
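The two initializers under discussion differ only in how the layer is constructed. A minimal sketch, assuming a plain Dense layer (the unit count is arbitrary):

```python
import tensorflow as tf

# TF2 Dense/Conv layers default to the 'glorot_uniform' kernel
# initializer; Kaiming He initialization must be requested explicitly.
default_layer = tf.keras.layers.Dense(64)
he_layer = tf.keras.layers.Dense(64, kernel_initializer="he_normal")
```

He initialization scales the weight variance by the fan-in with a factor of 2, which is tailored to ReLU activations, whereas Glorot balances fan-in and fan-out.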
ydata-synthetic
-
Coding Wonderland: Contribute to YData Profiling and YData Synthetic in this Advent of Code
Send us your North ⭐️: "On the first day of Christmas, my true contributor gave to me..." a star in my GitHub tree! 🎵 If you love these projects too, star ydata-profiling or ydata-synthetic and let your friends know why you love it so much!
-
I absolutely hate my internship
1: Try to work with what you have and augment your dataset (honestly, 10 points is crap)
-
Assessing the Quality of Synthetic Data with Data-Centric AI
Data quality is key for all applications and models, and LLMs are no exception :) I've been working on a small community project on synthetic data using ydata-synthetic (https://github.com/ydataai/ydata-synthetic), and it really shows! Underrepresentation (category imbalance) and missing data are two of the main issues!
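The two issues named above are quick to measure. A minimal pandas sketch on a toy DataFrame (column names and values are illustrative, not from the project):

```python
import pandas as pd

# Hypothetical toy data: a 90/10 class split and 5% missing values.
df = pd.DataFrame({
    "label": ["a"] * 90 + ["b"] * 10,
    "value": [1.0] * 95 + [None] * 5,
})

# Category imbalance: share of each class among all rows.
imbalance = df["label"].value_counts(normalize=True)

# Missing data: fraction of null entries per column.
missing_rate = df["value"].isna().mean()
```

Checks like these on both the real and the synthetic table make it easy to see whether a synthesizer reproduces (or worsens) imbalance and missingness.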
-
SOMEBODY HELP ME!
The Data-Centric AI Community creates community projects from time to time and is probably willing to help you in your project.
-
Help for Data Scientist position
Join nice data communities and start networking.
-
How to become a beast in DS ?
You know what they say: "Tell me who your friends are, and I'll tell you who you are!". Hang out with DS beasts and learn from them :)
-
Hey guys, I have a few questions
Interesting question! I think our AI/ML devs at the Data-Centric AI Community could have nice perspectives for you to decide :)
-
Embarking on a Journey of 99 Data Science Projects - From Beginner to Expert
Sounds like an amazing journey! Feel free to add your projects on our awesome-python-for-data-science repo as you go! And in case you need a hand or feedback on the projects, we'll be happy to help at the Data-Centric AI Community.
-
Data science problems
The best to do is to get started with end-to-end projects in a collaborative environment (somewhat approaching real-world settings). You may find some interesting resources in this GitHub repository. The Data-Centric AI Community actually has a nice support system for this.
What are some alternatives?
labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
REaLTabFormer - A suite of auto-regressive and Seq2Seq (sequence-to-sequence) transformer models for tabular and relational synthetic data generation.
Neural_Network_Pruning - Implementations of different neural network pruning techniques
Copulas - A library to model multivariate data using copulas.
DeepRL-TensorFlow2 - 🐋 Simple implementations of various popular Deep Reinforcement Learning algorithms using TensorFlow2
Conditional-Sig-Wasserstein-GANs
pytorch-forecasting - Time series forecasting with PyTorch
gretel-python-client - The Gretel Python Client allows you to interact with the Gretel REST API.
Robotics-Object-Pose-Estimation - A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Spectrum - Spectrum is an AI that uses machine learning to generate Rap song lyrics
GLOM-TensorFlow - An attempt at the implementation of GLOM, Geoffrey Hinton's paper for emergent part-whole hierarchies from data
machine-learning-for-trading - Code for Machine Learning for Algorithmic Trading, 2nd edition.