xgboost vs Surprise
| | xgboost | Surprise |
|---|---|---|
| Mentions | 10 | 8 |
| Stars | 25,548 | 6,178 |
| Growth | 0.9% | - |
| Activity | 9.6 | 0.0 |
| Latest commit | 6 days ago | 12 months ago |
| Language | C++ | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xgboost
- XGBoost 2.0
- Xgboost: Banding continuous variables vs keeping raw data
- PSA: You don't need fancy stuff to do good work.
Finally, when it comes to building models and making predictions, Python and R have a plethora of options available. Libraries like scikit-learn, statsmodels, and TensorFlow in Python, or caret, randomForest, and xgboost in R, provide powerful machine learning algorithms and statistical models that can be applied to a wide range of problems. What's more, these libraries are open source and have extensive documentation and community support, making it easy to learn and apply new techniques without needing specialized training or expensive software licenses.
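To illustrate the point above, here is a minimal sketch of how little code one of those libraries needs to fit a model; it uses scikit-learn's bundled breast-cancer dataset purely as an example, not anything referenced in the original posts.

```python
# Minimal sketch: fit a gradient-boosted classifier with scikit-learn
# on a bundled toy dataset and report held-out accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(round(model.score(X_test, y_test), 3))  # held-out accuracy
```

Swapping in statsmodels or an R equivalent like caret follows the same fit-then-score shape.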
- XGBoost Save and Load Error
You can find the problem outlined here: https://github.com/dmlc/xgboost/issues/5826. u/hcho3 diagnosed the problem and corrected it as of XGB version 1.2.0.
- For XGBoost (in Amazon SageMaker), one of the hyperparameters is num_round, the number of rounds to train. Does this mean cross-validation?
Reference: https://github.com/dmlc/xgboost/issues/2031
- CS Internship Questions
By the way, most of the time XGBoost works just as well for projects; I would not recommend applying deep learning to every single problem you come across. It's something Stanford CS really likes to showcase, when it's well known (1) that sometimes "smaller"/less complex models can perform just as well or have their own interpretive advantages, and (2) within the ML and DS communities that deep learning does not perform as well on tabular datasets, so using deep learning as a default for every problem is just poor practice. However, if you do (god forbid) get language, speech/audio, vision/imaging, or even time-series problems, then deep learning as a baseline is not the worst idea.
- OOM with ML Models (SKlearn, XGBoost, etc), workaround/tips for large datasets?
- xgboost VS CXXGraph - a user suggested alternative
2 projects | 28 Feb 2022
- 'y contains previously unseen labels' (label encoder)
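For context on the error in the title above, here is a hedged sketch of how scikit-learn's `LabelEncoder` raises it (the category names are illustrative): `transform()` fails on any value that `fit()` never saw, and a common workaround is to map unknown values to a known sentinel first.

```python
# Reproduce "y contains previously unseen labels" and one workaround.
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(["cat", "dog"])

try:
    le.transform(["bird"])  # "bird" was not among the fitted labels
except ValueError as e:
    print(e)                # the "previously unseen labels" message

# Workaround: replace unknown values with a known sentinel before encoding.
known = set(le.classes_)
cleaned = [v if v in known else "dog" for v in ["bird", "cat"]]
print(list(le.transform(cleaned)))
```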
Surprise
- Recommender Systems: Surprise library installation on m1 mac
Something is wrong with the repo. The compiler fails with this error: `clang: error: no such file or directory: 'surprise/similarities.c'`. If you go to the repo, you'll see the file is indeed missing: https://github.com/NicolasHug/Surprise/tree/master/surprise
- Recommender systems question
Scikit-surprise is a useful package and has pretty good documentation to help make the leap from conceptual understanding to code. If you want to understand the various implementations, the package is open source and available on GitHub. I can’t speak for optimal computational efficiency but I think that it’s premature to worry about that while you’re still making the transition from concept to functionality.
- Surprise – a simple recommender system library for Python
- Dislike button would improve Spotify's recommendations
I spent the latter half of 2019 trying to build this as a startup. Ultimately I pivoted (now I do newsletter recommendations instead), but if I hadn't made some mistakes I think it could've gotten more traction. Mostly I should've simplified the idea to make it easier to build. If anyone's interested in working on this, here's what I would do:
(But first some background: The way I saw it, you can split music recommendation into two tasks: (1) picking a song you already know that should be played right now, and (2) picking a new song you've never heard of before. (Music recommendation is unique in this way since in most other domains there isn't much value in re-recommending items). I think #1 is more important, and if you nail that, you can do a so-so job of #2 and still have a good system.)
Make a website that imports your Last.fm history. Organize the history into sessions (say, groups of listen events with a >= 30 minute gap in between). Feed those sessions into a collaborative filtering library like Surprise[1], as a CSV of `user, song, 1` (1 being a rating--in this case we only have positive ratings). Then make some UI that lets people create and export playlists, e.g. I pick a couple of seed songs from my listening history, then the app uses Surprise to suggest more songs. Present a list of 10 songs at a time. Click a song to add it, and have a "skip all" button that gets a new list of songs. Save these interactions as ratings--e.g. if I skip a song, that's a -1 rating for this playlist. For some percentage of the suggestions (20% by default? Make it configurable), use Last.fm's or Spotify's API to pick a new song not in your history, based on the songs in the current playlist. Also sometimes include songs that were added to the playlist previously--if you skip them, they get removed from the playlist. Then you can spend a couple of minutes every week refreshing your playlists. Export the playlists to Spotify/Apple Music/whatever.
As you get more users, you can do "regular" collaborative filtering (i.e. with different users) to recommend new songs instead of relying on external APIs. There are probably lots of other things you could do too--e.g. scrape wikipedia to figure out what artists have done collaborations or something. In general I think the right approach is to build a model for artist similarity rather than individual song similarity. At recommendation time, you pick an artist and then suggest their top songs (and sometimes pick an artist already in the user's history, and suggest songs they haven't heard yet--that's even easier).
This is the simplest thing I can think of that would solve my "I love music but I listen to the same old songs everyday because I'm busy and don't want to futz around with curating my music library" problem. You wouldn't have to waste time building a crappy custom music app, and users won't have to use said crappy custom music app (speaking from personal experience...). You wouldn't have to deal with music rights or integrating with Spotify/Apple Music since you're not actually playing any music.
If you want to go further with it, you could get traction first and then launch your own streaming service or something. (Reminds me a bit of Readwise starting with just highlights and then launching their own reader recently). I think it'd be neat to make an indie streaming service--kind of like Bandcamp but with an algorithm to help you find the good stuff. Let users upload and listen to their own MP3s so it can still work with popular music. Of course it'd be nicer for users in the short term if you just made deals with the big record labels, however this would help you not end up in Spotify's position of pivoting to podcasts so you can get out of paying record labels. And then maybe in a few decades all the good music won't be on the big labels anyway :).
Anyway if anyone is remotely interested in building something like this, I'll be your first user. I really need it. Otherwise I'll probably build it myself at some point in the next year or two as a side project.
[1] http://surpriselib.com/
- Show HN: The Sample – newsletters curated for you with machine learning
I'm planning to build a business on this, so probably won't open-source it--but I'm always looking for interesting things to write about! I write a weekly newsletter called Future of Discovery[1]; I might write up some more implementation details there in a week or two. In the meantime, most of the heavy lifting is done by the Surprise python lib[2]. It's pretty easy to play around with: just give it a CSV of `user, item, rating`, and then you can start making rating predictions. Also fastText[3] is easy to mess around with too. Most of the code I've written just layers things on top of that, e.g. to handle exploration-vs-exploitation as discussed in another thread here.
Recently I've been factoring out the ML code into a separate recommendation service so it can serve different kinds of apps (I just barely made this essay recommender system[4] start using it, for example).
I'm happy to chat about recommender systems also if you like, email's in my profile.
[1] https://findka.com
[2] http://surpriselib.com/
[3] https://fasttext.cc/
[4] https://essays.findka.com
What are some alternatives?
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.
MLP Classifier - A handwritten multilayer perceptron classifier using numpy.
scikit-learn - scikit-learn: machine learning in Python
tensorflow - An Open Source Machine Learning Framework for Everyone
Keras - Deep Learning for humans
python-recsys - A python library for implementing a recommender system
mlpack - mlpack: a fast, header-only C++ machine learning library
Crab - Crab is a flexible, fast recommender engine for Python that integrates classic information filtering recommendation algorithms in the world of scientific Python packages (numpy, scipy, matplotlib).
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
MLflow - Open source platform for the machine learning lifecycle