differential-privacy-library vs transformers

| | differential-privacy-library | transformers |
|---|---|---|
| Mentions | 2 | 194 |
| Stars | 834 | 135,925 |
| Growth | 1.4% | 1.2% |
| Activity | 5.2 | 10.0 |
| Last commit | about 2 months ago | 1 day ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
differential-privacy-library
-
Well, crackers.
Differential privacy. Basically, I wanted to generate a random database file, akin to medical records, build a Private Aggregation of Teacher Ensembles (PATE) model from 20-60% of its contents, and then use that teacher model on the remaining 80-40% of the database (which was just plaintext, not that that matters). The problem is, I barely have any idea how it all works, and the one example I found used CryptoNumerics' library, cn.protect. That went as I've already described. I've fallen back on the practical part of the paper and found another way to get the practical usage the assignment requires: I'm now trying to use https://github.com/IBM/differential-privacy-library and the example in its 30-second guide to instead make the practical part about choosing epsilon (a measure of how much information one query on the database can give away to a malicious third party) by tracking the accuracy of the resulting dataset compared to the original. I hope I'll manage to edit the code to accept my text file: parse it from txt into an ndarray, separate the last column to use as the target, and go from there.
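The epsilon-versus-accuracy sweep the post describes can be sketched even without diffprivlib: the Laplace mechanism adds noise with scale sensitivity/epsilon to a query's answer, so smaller epsilon (more privacy) means a noisier result. Everything below — the synthetic "medical records", the mean query, the epsilon grid — is a made-up illustration of that tradeoff, not diffprivlib's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "database": 1000 record values in a known range [0, 100].
records = rng.uniform(0, 100, size=1000)
true_mean = records.mean()

# Sensitivity of the mean query: changing one record moves it by at most range/n.
sensitivity = 100.0 / len(records)

def noisy_mean(data, epsilon, rng):
    """Laplace mechanism: release mean + Laplace(sensitivity / epsilon) noise."""
    scale = sensitivity / epsilon
    return data.mean() + rng.laplace(0.0, scale)

# Sweep epsilon and track mean absolute error over repeated queries.
for eps in [0.01, 0.1, 1.0, 10.0]:
    errors = [abs(noisy_mean(records, eps, rng) - true_mean) for _ in range(1000)]
    print(f"epsilon={eps:>5}: mean abs error = {np.mean(errors):.4f}")
```

The same loop works with diffprivlib models in place of `noisy_mean` — train at each epsilon, then compare classification accuracy against a non-private baseline instead of query error.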
-
Differential Privacy project on Python
IBM's Diffprivlib is a well-documented implementation of differential privacy in Python. Source code and getting-started documentation are available on the IBM differential-privacy-library GitHub repository.
transformers
-
Analyzing Hugging Face Posts with Graphs and Agents
[{'id': 'AI Music Generation', 'text': 'Love this new Space built by @enzostvs + @Xenova for Transformers.js: Generate your own AI music (In-browser generation) with AI Jukebox \n\nhttps://huggingface.co/spaces/enzostvs/ai-jukebox', 'score': 0.8460421562194824}, {'id': 'Kolmogorov Arnold Networks', 'text': 'Transformers are not all we need, that is being proven repeatedly now as more alternative frameworks emerge. Another such framework is Kolmogorov Arnold Network based Transformers. I break down exactly how these differ from Perceptron based Transformers and give you the link to my Colab where I create a model based on the research paper that absolutely destroys a standard Transformers based model. Check out the video here: https://www.youtube.com/watch?v=Sw0euxNZCc4', 'score': 0.8424240350723267}, {'id': 'GitHub Issue 8771', 'text': 'This issue is just a treasure ! A bit deprecated i guess, but things are in their historical context. (personally, still need more to understand better)\nhttps://github.com/huggingface/transformers/issues/8771\n🫡 to the man @stas ', 'score': 0.8417709469795227}]
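The `score` field in each result above looks like an embedding-similarity score. A ranking of that shape can be sketched with plain cosine similarity over embedding vectors (the vectors and document ids below are hypothetical stand-ins, not the article's actual pipeline):

```python
import numpy as np

def cosine_rank(query_vec, doc_vecs, ids):
    """Rank documents by cosine similarity between their embeddings and the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each (normalized) doc against the query
    order = np.argsort(-scores)  # highest similarity first
    return [(ids[i], float(scores[i])) for i in order]
```

In a real setup, `query_vec` and `doc_vecs` would come from a sentence-embedding model; the ranking logic itself stays the same.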
- Running Phi 3 with vLLM and Ray Serve
-
Creating an LLM from Scratch with Transformers
Hugging Face Transformers
-
Recap Hacktober Fest
Issue: KeyError update in Transformers. This issue involved improving error handling for a KeyError in the Hugging Face Transformers library. Specifically, I worked on improving the error messages for better debugging, as users had difficulty understanding why the error occurred in the _evaluate and _save_checkpoint functions.
-
Bugs in LLM Training – Gradient Accumulation Fix
>> disadvantage of Transformers codebase using the copy-paste method for models, where this fix needs to be applied to every single model separately
What are the best tools we have available for tackling this kind of large scale copy-paste change?
https://github.com/huggingface/transformers/pull/34191/commi...
This feels too complex to tackle with PyCharm structural find and replace, even a more powerful structural find and replace like https://comby.dev/ feels underpowered here.
Sourcegraph batch changes? That solves broadcasting the change but doesn’t help with capturing the change to make.
OpenRewrite? The Python implementation is in its early stages, not prod-ready as I understand it.
What else is there that I don’t know about?
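Short of a full structural-rewrite tool, the blast radius of such a change can at least be measured with the standard library. As a baseline sketch (the file layout and the `training_step` method name below are hypothetical, not Transformers' actual structure), Python's `ast` module can enumerate every copy of a duplicated method so a fix can be broadcast or audited by hand:

```python
import ast
from pathlib import Path

def find_method_copies(root, method_name):
    """Locate every definition of `method_name` in .py files under `root`.

    Returns (file path, line number) pairs -- i.e. every place a
    copy-pasted fix would have to be re-applied.
    """
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        try:
            tree = ast.parse(path.read_text())
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) \
                    and node.name == method_name:
                hits.append((str(path), node.lineno))
    return hits
```

Locating the copies is the easy half; applying a non-trivial edit at each site still needs a proper codemod tool or manual review.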
- Generative Audio
-
Happy Hacking: My First Hacktoberfest Pull Request 🎉
Update an keyerror on _save_check_point prevent confusion of missing … #33832
-
How to Learn Generative AI: A Step-by-Step Guide
Play around with OpenAI’s GPT models and Hugging Face's Transformers library.
-
The 10 Best Open-Source Artificial Intelligence Tools
https://github.com/huggingface/transformers
-
10 Open Source MLOps Projects You Didn’t Know About
Don’t repeat yourself is a fundamental principle in computer science. As such, software engineers often reuse readily available code or pre-built solutions to commonly occurring problems. Consequently, it is common for large machine learning projects to rely on numerous other open source projects. For instance, transformers - a library commonly used to build transformer-based models - relies on over 1,000 other open source projects.
What are some alternatives?
PyDP - The Python Differential Privacy Library. Built on top of: https://github.com/google/differential-privacy
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
awesome-machine-unlearning - Awesome Machine Unlearning (A Survey of Machine Unlearning)
sentence-transformers - State-of-the-Art Text Embeddings
data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
llama - Inference code for Llama models
fides - The Privacy Engineering & Compliance Framework
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
PrivacyEngCollabSpace - Privacy Engineering Collaboration Space
text-generation-webui - A Gradio web UI for Large Language Models.
keras - Deep Learning for humans [Moved to: https://github.com/keras-team/keras]
huggingface_hub - The official Python client for the Huggingface Hub.