bench-warmers
scikit-learn
| | bench-warmers | scikit-learn |
|---|---|---|
| Mentions | 6 | 81 |
| Stars | 54 | 58,130 |
| Growth | - | 1.1% |
| Activity | 9.7 | 9.9 |
| Latest commit | 16 days ago | 2 days ago |
| Language | Python | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bench-warmers
-
What to do next?
I have more ideas than I know what to do with; help yourself: https://github.com/dmarx/bench-warmers
-
Any ideas for NLP end-to-end projects or blogs for a beginner with a linguistics background to boost their CV?
You're welcome to help yourself to my ideas (no guarantees that they're any good or even comprehensible; I do a lot of my brainstorming while high). Here's my brainstorming space; scroll down for a categorized ToC: https://github.com/dmarx/bench-warmers
-
[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
I've decided to just lean into it and am literally just giving my ideas away. https://github.com/dmarx/bench-warmers
-
Using Github to write my notes has helped me retain knowledge immensely.
It might sound like a lot, but it's actually really lightweight and easy to use. Check it out: https://github.com/dmarx/bench-warmers
-
We are the developers behind pandas, currently preparing for the 2.0 release :) AMA
You've sort of become victims of your own success: as another pandas dev mentioned, you want to preserve backwards compatibility, and that significantly complicates any restructuring. I'm sympathetic and not sure what the best solution would be. I had an idea last night, but I'm not sure I like that approach either.
-
Need help on finding an area where machine learning is applicable on day-to-day life but not implemented already
To be clear, I'm talking about e.g. vision-impaired, hearing-impaired, etc. Here's an example of a project idea in this space (possibly a bit more ambitious than what you're looking for, but if you think you could tackle it, I encourage you to take a stab): https://github.com/dmarx/bench-warmers/blob/main/automated-video-description.md
scikit-learn
-
AutoCodeRover resolves 22% of real-world GitHub issues in SWE-bench lite
Thank you for your interest. There are some interesting examples in the SWE-bench-lite benchmark which are resolved by AutoCodeRover:
- From sympy: https://github.com/sympy/sympy/issues/13643. AutoCodeRover's patch for it: https://github.com/nus-apr/auto-code-rover/blob/main/results...
- Another one from scikit-learn: https://github.com/scikit-learn/scikit-learn/issues/13070. AutoCodeRover's patch (https://github.com/nus-apr/auto-code-rover/blob/main/results...) modified a few lines just below the ones the developer patch changed, and wrote a different comment.
There are more examples in the results directory (https://github.com/nus-apr/auto-code-rover/tree/main/results).
-
Polars
sklearn is adding support through the dataframe interchange protocol (https://github.com/scikit-learn/scikit-learn/issues/25896). scipy, as far as I know, doesn't explicitly support dataframes (it just happens to work when you wrap a Series in `np.array` or `np.asarray`). I don't know about PyTorch but in general you can convert to numpy.
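To illustrate the point about scipy (a minimal sketch, assuming pandas and NumPy are installed): wrapping a pandas Series in `np.asarray` produces a plain ndarray, which is why array-consuming routines "just happen to work" with dataframe columns.

```python
import numpy as np
import pandas as pd

# A hypothetical dataframe column.
s = pd.Series([1.0, 2.0, 3.0], name="feature")

# np.asarray strips the pandas wrapper and returns a plain ndarray,
# which is what scipy routines actually operate on.
arr = np.asarray(s)
print(type(arr).__name__, arr.tolist())  # -> ndarray [1.0, 2.0, 3.0]
```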
-
[D] Major bug in Scikit-Learn's implementation of F-1 score
Wow, from the upvotes on this comment, it really seems like a lot of people think that this is the correct behavior! I have to say I disagree, but if that's what you think, don't just sit there upvoting comments on Reddit; instead go to this PR and tell the Scikit-Learn maintainers not to "fix" this "bug", which they are currently planning to do!
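For context on where the disagreement comes from, here is a hedged, pure-Python sketch of the degenerate case (the function name and zero-handling convention are illustrative, not scikit-learn's actual code): when there are no true positives, no false positives, and no false negatives, both precision and recall hit a 0/0, and scikit-learn's convention is to report 0.

```python
def f1(tp, fp, fn):
    """F1 from confusion counts; returns 0.0 on the 0/0 edge case (the disputed convention)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0  # degenerate case: both terms undefined or zero
    return 2 * precision * recall / (precision + recall)

print(f1(tp=0, fp=0, fn=0))  # -> 0.0 (the contested edge case)
print(f1(tp=8, fp=2, fn=2))  # -> 0.8 (ordinary case: precision = recall = 0.8)
```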
-
Contraction Clustering (RASTER): A fast clustering algorithm
-
Ask HN: Learning new coding patterns – how to start?
I was in a similar boat to yours: I worked in data science and have since moved into data engineering and software engineering for ML services.
I would recommend the Design Patterns book by the Gang of Four. I found it particularly helpful for writing extensible code that doesn't break, especially with abstract classes, builders, and factories. I would also recommend The Object-Oriented Thought Process to understand why traditional OOP is built the way it is.
You can also look into the source code of popular data science libraries such as sklearn (https://github.com/scikit-learn/scikit-learn/tree/main/sklea...) and see how many of them use Base classes to define shared functionality between objects of the same nature.
As others mentioned, I would also encourage you to implement design patterns in your everyday work - maybe you can write a Factory to load models or preprocessors that follow the same abstract class?
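The abstract-class-plus-factory idea above can be sketched in a few lines (all names here are hypothetical, loosely echoing sklearn's Base-class style):

```python
from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Shared interface for all models, analogous to sklearn-style Base classes."""
    @abstractmethod
    def predict(self, x):
        ...

class MeanModel(BaseModel):
    """Trivial model that always predicts a fixed mean."""
    def __init__(self, mean):
        self.mean = mean
    def predict(self, x):
        return self.mean

class ZeroModel(BaseModel):
    """Baseline model that always predicts zero."""
    def predict(self, x):
        return 0.0

def model_factory(name, **kwargs):
    """Factory: map a config string to a concrete BaseModel subclass."""
    registry = {"mean": MeanModel, "zero": ZeroModel}
    return registry[name](**kwargs)

model = model_factory("mean", mean=3.5)
print(model.predict(x=None))  # -> 3.5
```

The payoff is that code which loads or serves models only needs to know the `BaseModel` interface, not any concrete class.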
-
Transformers as Support Vector Machines
It looks like you've been the victim of some misinformation. As Dr_Birdbrain said, an SVM is a convex problem with a unique global optimum. sklearn's SVC relies on libsvm, which initializes the weights to 0 [0]. The random state is only used to shuffle the data when computing probability estimates with Platt scaling [1]. Of the random_state parameter, the sklearn documentation for SVC [2] says:
> Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False. Pass an int for reproducible output across multiple function calls. See Glossary.
[0] https://github.com/scikit-learn/scikit-learn/blob/2a2772a87b...
[1] https://en.wikipedia.org/wiki/Platt_scaling
[2] https://scikit-learn.org/stable/modules/generated/sklearn.sv...
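A quick sketch of the claim above (assuming scikit-learn and NumPy are installed; the toy data is made up): with `probability=False`, two `SVC` fits with different `random_state` values land on the same solution, as expected for a convex problem.

```python
import numpy as np
from sklearn.svm import SVC

# Tiny separable toy dataset.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

# random_state differs, but probability=False, so it should have no effect.
a = SVC(kernel="linear", probability=False, random_state=0).fit(X, y)
b = SVC(kernel="linear", probability=False, random_state=42).fit(X, y)

# Identical fitted parameters: the fit is deterministic.
print(np.allclose(a.dual_coef_, b.dual_coef_) and np.allclose(a.intercept_, b.intercept_))
```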
-
How to Build and Deploy a Machine Learning model using Docker
Scikit-learn Documentation
-
Planning to get a laptop for ML/DL, is this good enough at the price point or are there better options at/below this price point?
-
Link Prediction With node2vec in Physics Collaboration Network
First, we need a connection to Memgraph so we can fetch the edges and split them into two parts (a train set and a test set). For the edge split, we will use scikit-learn; to connect to Memgraph, we will use gqlalchemy.
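The edge split described above can be done with scikit-learn's `train_test_split` (a minimal sketch with a made-up edge list; the real edges would come from Memgraph via gqlalchemy):

```python
from sklearn.model_selection import train_test_split

# Hypothetical edge list: pairs of node ids from the collaboration graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3), (2, 4)]

# 80/20 split into train and test edges; random_state makes it reproducible.
train_edges, test_edges = train_test_split(edges, test_size=0.2, random_state=42)
print(len(train_edges), len(test_edges))  # -> 6 2
```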
-
WiFilter is a RaspAP install extended with a squidGuard proxy to filter adult content. A great solution for families, schools, and/or public access points
The ML component is based on scikit-learn, which differentiates it from purely list-based filters. It couples this with a full-featured wireless router (RaspAP) in a single device, so it addresses a use case not entirely covered by Pi-hole.
What are some alternatives?
khoj - Your AI second brain. A copilot to get answers to your questions, whether from your own notes or from the internet. Use powerful online (e.g. GPT-4) or private local (e.g. Mistral) LLMs. Self-host locally or use our web app. Access from Obsidian, Emacs, the desktop app, the web, or WhatsApp.
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
notes
Surprise - A Python scikit for building and analyzing recommender systems
LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Keras - Deep Learning for humans
python-bigquery-pandas - Google BigQuery connector for pandas
tensorflow - An Open Source Machine Learning Framework for Everyone
pandas-stubs - Public type stubs for pandas
gensim - Topic Modelling for Humans
obsidian-omnisearch - A search engine that "just works" for Obsidian. Supports OCR and PDF indexing.
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.