Vowpal_wabbit Alternatives
Similar projects and alternatives to vowpal_wabbit


countwords
Discontinued. Playing with counting word frequencies (and performance) in various languages.


xgboost
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow

catboost
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.


Recommender
A C library for product recommendations/suggestions using collaborative filtering (CF)




CCV
C-based/Cached/Core Computer Vision Library, a modern computer vision library


mxnet
Discontinued. Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more

napkinXC
Extremely simple and fast extreme multi-class and multi-label classifiers.

RLOSFest2022_vowpal_wabbit
Discontinued. Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.

vowpal_wabbit reviews and mentions

Data Science terminology can be wild
Let me introduce you to my friend Vowpal Wabbit. https://vowpalwabbit.org/

Microsoft Reinforcement Learning Open Source Fest 2022 – Native CSV Parser
My project here at the Reinforcement Learning Open Source Fest 2022 is to add native CSV parsing support to Vowpal Wabbit.

[Discussion] Support Vector Machines... in 2022
2) A lot of people have worked on making SVMs scalable. But you have to realize that scikit-learn is a library that only offers basic versions of basic algorithms, so if you think you'll find advanced approaches to SVM scalability there, you'll end up disappointed. There are, however, versions that use GPUs or a multi-threaded implementation for the computation of the kernel matrices (which is the most expensive part of training). The "solver" (= optimizer) part of an SVM can also be scaled. scikit-learn relies on libsvm for its solver, which by default is single-threaded. But nothing stops you from applying other solvers (e.g. gradient descent, with all of its niceties) to the problem. I'm not sure what the state of the art is today, but it used to be that e.g. Vowpal Wabbit had a bunch of really fast, scalable SVM algorithms.
 Solving problems by mapping them to other problems that we know how to solve
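The "other solvers" point above can be made concrete. Below is a minimal, hedged sketch (not VW's actual implementation) of training a linear SVM by stochastic subgradient descent on the regularized hinge loss, the Pegasos-style approach that scales to large datasets because each update touches only one example:

```python
import random

random.seed(0)  # deterministic shuffling for this toy run

def sgd_linear_svm(data, dim, lam=0.01, epochs=20, lr=0.1):
    """Train a linear SVM on (x, y) pairs with y in {-1, +1}
    by SGD on the L2-regularized hinge loss."""
    w = [0.0] * dim
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            for i in range(dim):
                grad = lam * w[i]          # gradient of the L2 penalty
                if margin < 1:             # hinge loss active: add its subgradient
                    grad -= y * x[i]
                w[i] -= lr * grad
    return w

# Toy linearly separable data: class +1 near (1,1), class -1 near (-1,-1).
data = [([1.0, 1.2], 1), ([0.8, 1.0], 1), ([-1.0, -0.9], -1), ([-1.1, -1.2], -1)]
w = sgd_linear_svm(data, dim=2)
print(w)
```

Because the update cost is independent of the dataset size per example, this kind of solver streams through data that would never fit in a kernel matrix.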

[Q] Is picking up a CS major worth it if it means having to take 5 STEM classes a semester for another two years?
It sounds like this may not even need to be distributed, but it's hard to tell. This may be a case where an out-of-core design is sufficient to get the job done. Usually this involves some multi-threaded programming, with one set of threads doing the IO work (reading/writing to disk) while another set of threads does the compute, connected by an input/output queue. Out-of-core is in many ways an extension of the blocking/tiling approach, extended all the way to disk as the slowest level of memory access, with some extra programming differences due to it being disk instead of RAM. A few tools have built-in support for this, like VW.
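The reader/compute/queue pattern described above can be sketched in a few lines; this is a toy illustration (an in-memory stream stands in for the disk file), not any particular tool's implementation:

```python
import threading, queue, io

def out_of_core_count(file_obj, chunk_lines=2, qsize=4):
    """Count words in a stream without loading it all into memory:
    an IO thread reads fixed-size chunks while a compute thread
    aggregates counts, connected by a bounded queue."""
    q = queue.Queue(maxsize=qsize)  # bounded: applies back-pressure to the reader
    counts = {}

    def reader():
        chunk = []
        for line in file_obj:
            chunk.append(line)
            if len(chunk) == chunk_lines:
                q.put(chunk)
                chunk = []
        if chunk:
            q.put(chunk)
        q.put(None)  # sentinel: no more data

    def worker():
        while (chunk := q.get()) is not None:
            for line in chunk:
                for word in line.split():
                    counts[word] = counts.get(word, 0) + 1

    t_io = threading.Thread(target=reader)
    t_cpu = threading.Thread(target=worker)
    t_io.start(); t_cpu.start()
    t_io.join(); t_cpu.join()
    return counts

data = io.StringIO("the quick brown fox\nthe lazy dog\nthe end\n")
print(out_of_core_count(data))
```

The bounded queue is the key design choice: it keeps memory use constant by stalling the IO thread whenever the compute thread falls behind.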

Predicting numerical values to a very high accuracy
If you only have 198 possible values, then extreme multi-class models might help here, with better precision and faster convergence. For example, probabilistic label trees might be relevant. Vowpal Wabbit also has specific reductions for extreme multi-class problems. Might be worth a try if the other alternatives still don't work out.
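To try VW on a problem like this, the data first needs to be in VW's multiclass text format (an integer label in 1..K, a pipe, then name:value feature pairs). A small helper to render that format, with an illustrative label and feature names:

```python
def to_vw_multiclass(label, features):
    """Render one example in Vowpal Wabbit's multiclass text format:
    '<label> | name:value name:value ...' with label in 1..K."""
    feats = " ".join(f"{name}:{value}" for name, value in features.items())
    return f"{label} | {feats}"

# e.g. a numerical target discretized into one of 198 classes
line = to_vw_multiclass(42, {"x1": 0.5, "x2": 1.25})
print(line)  # 42 | x1:0.5 x2:1.25
```

A file of such lines can then be trained with, for example, VW's one-against-all reduction (vw --oaa 198 train.vw); the tree-based reductions take the same input format.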

[Table] We are Microsoft researchers working on machine learning and reinforcement learning. Ask Dr. John Langford and Dr. Akshay Krishnamurthy anything about contextual bandits, RL agents, RL algorithms, Real-World RL, and more!
Q: AFAIK most model-based reinforcement learning algorithms are more data-efficient than model-free ones (which don't create an explicit model of the environment). However, all the model-based techniques I've seen eventually "throw away" data and stop using it for model training. Could we do better (lower sample complexity) if we didn't throw away old data? I imagine an algorithm that keeps track of all past observations as "paths" through perception space, and can use something akin to nearest neighbor to identify when it is seeing a similar "path" again in the future. I.e., what if the model learned a compression from perception space into a lower-dimensional representation (like the first 10 principal components); could we then record all data and make predictions about future states with nearest neighbor? This method would benefit from "immediate learning". Does this direction sound promising?
A: Definitely. This is highly related to the latent space discovery research direction, of which we've had several recent papers at ICLR, NeurIPS, and ICML. There are several challenging elements here. You need to learn nonlinear maps, you need to use partial learning to gather information for more learning, and it all needs to be scalable. (John)

Q: Hello, do you have any events in New York? I've been teaching myself for the last couple of years on ML and AI theory and practice but would love to accelerate my learning by working on stuff (could be for free). I have 7 years of professional programming experience and work as a lead for a large financial company.
A: Well, we have "Reinforcement Learning Day" each year. I'm really looking forward to the pandemic being over, because we have a beautiful new office at 300 Lafayette; more might start happening when we can open up. (John)

Q: RL seems more strategy-oriented/original than the papers I observe in other areas of ML and deep learning, which seem to be more about adding layers upon layers to get slightly better metrics. What is your opinion about it? Secondly, I would love to know the role of RL in real-world applications.
A: By strategy I guess you mean "algorithmic." I think both areas are fairly algorithmic in nature. There have been some very cool computational advancements involved in getting certain architectures (like transformers) to scale, and similarly there are many algorithmic advancements in domain adaptation, robustness, etc. RL is definitely fairly algorithmically focused, which I like =) RL problems are kind of ubiquitous, since optimizing for some value is a basic primitive. The question is whether "standard RL" methods should be used to solve these problems or not. I think this requires some trial-and-error and, at least with current capabilities, some deeper understanding of the specific problem you are interested in. (Akshay)

Q: Dr. Langford & Dr. Krishnamurthy, thank you for this AMA. My question: from what I understand about RL, there are trade-offs one must consider between computational complexity and sample efficiency for given RL algorithms. What do you both prioritize when developing your algorithms?
A: I tend to think first about statistical/sample efficiency. The basic observation is that computational complexity is gated by sample complexity, because minimally you have to read in all of your samples. Additionally, understanding what is possible statistically seems quite a bit easier than understanding this computationally (e.g., computational lower bounds are much harder to prove than statistical ones). Obviously both are important, but you can't have a computationally efficient algorithm that requires exponentially many samples to achieve near-optimality, while you can have the converse (a statistically efficient algorithm that requires exponential time to achieve near-optimality). This suggests you should go after the statistics first. (Akshay)

Q: Can you share some real examples of how your work has made its way into MS products? Is this a requirement for any work that happens at MSR, or is it more like an independent entity and not always required to tie back into something within Microsoft?
A: A simple answer is that Vowpal Wabbit (http://vowpalwabbit.org ) is used by the Personalizer service (http://aka.ms/personalizer ). Many individual research projects have impacted Microsoft in various ways as well. However, many research projects have not. In general, Microsoft Research exists to explore possibilities. Inherent in the exploration of possibilities is the discovery that many possibilities do not work. (John)

Q: What are some of the obstacles getting in the way of widespread applications of online and offline RL for real-world scenarios, and what research avenues look promising to you that could chip away at, or sidestep, the obstacles?
A: I suppose there are many obstacles, and the most notable one is that we don't have sample-efficient algorithms that can operate at scale. There are other issues like safety, stability, etc., that will matter depending on the application. The community is working on all of these issues, but in the meantime, I like all of the sidestepping ideas people are trying: leveraging strong inductive bias (via a pretrained representation, state abstraction, or prior), sim-to-real, imitation learning. These all seem very worthwhile to pursue. I am in favor of trying everything and seeing what sticks, because different problems might admit different structures, so it's important to have a suite of tools at our disposal. On sample efficiency, I like the model-based approach, as it has many advantages (obvious supervision signal, offline planning, zero-shot transfer to a new reward function, etc.). So (a) fitting accurate dynamics models, (b) efficient planning in such models, and (c) using them to explore all seem like good questions to study. We have some recent work on this approach (https://arxiv.org/abs/2006.10814). (Akshay)

Q: Hi! Thanks for doing this AMA. What is the status of real-world RL? What are the practical areas where RL is being applied in the real world right now?
A: There are certainly many deployments of real-world RL. This blog post covers a number related to work at Microsoft: https://blogs.microsoft.com/ai/reinforcementlearning/ . In terms of where we are, I'd say "at the beginning". There are many applications that haven't even been tried, a few that have, and lots of room for improvement. (John)

Q: With the Xbox Series X having hardware for machine learning, what kinds of applications of this apply to gaming?
A: An immediate answer is to use RL to control non-player characters. (Akshay)

Q: How can I prepare in order to be part of Microsoft Research in reinforcement learning?
A: This depends on the role you are interested in. We try to post new reqs here (http://aka.ms/rl_hiring ) and have hired in researcher, engineer, and applied/data scientist roles. For a researcher role, a PhD is typically required. The other roles each have their own reqs. (John)

Q: What is latent state discovery, and why do you think it is important in real-world RL?
A: Latent state discovery is an approach for getting reinforcement learning to provably scale to complex domains. The basic idea is to decouple the dynamics, which are determined by a simple latent state space, from an observation process, which could be arbitrarily complex. The natural example is a visual navigation task: there are far fewer locations in the world than visual inputs you might see at those locations. The "discovery" aspect is that we don't want to know this latent state space in advance, so we need to learn how to map observations to latent states if we want to plan and explore. Essentially this is a latent dynamics modeling approach, where we use the latent state to drive exploration (such ideas are also gaining favor in the deep RL literature). The latent state approach has enabled us to develop essentially the only provably efficient exploration methods for such complex environments (using arbitrary nonlinear function approximation). In this sense, it seems like a promising approach for real-world settings where exploration is essential. (Akshay)

[D] Is there a way to evaluate model during training?
Implemented in vowpal wabbit: https://github.com/VowpalWabbit/vowpal_wabbit
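The feature referred to above is progressive validation: an online learner scores each example with the current model before training on it, so the running average loss estimates held-out error at no extra cost (this is how VW computes the "average loss" it prints during training). A minimal sketch of the idea, with a toy running-mean model standing in for a real learner:

```python
def progressive_validation(stream, predict, learn):
    """Online evaluate-then-learn: score each example *before*
    training on it, so the running loss is an honest estimate
    of generalization error, computed during training."""
    total, n = 0.0, 0
    for x, y in stream:
        total += (predict(x) - y) ** 2  # squared loss on an unseen example
        learn(x, y)                     # only now update the model
        n += 1
    return total / n

# Toy model: a running-mean predictor for a constant target.
state = {"mean": 0.0, "n": 0}

def update(x, y):
    state["n"] += 1
    state["mean"] += (y - state["mean"]) / state["n"]

loss = progressive_validation([(None, 1.0)] * 5, lambda x: state["mean"], update)
print(round(loss, 3))  # 0.2: only the first, untrained prediction is wrong
```

The ordering (predict, then learn) is the whole trick: swapping the two lines would report training error instead.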


Performance comparison: counting words in Python, Go, C++, C, AWK, Forth, and Rust
You're likely correct, but I do recall attending a lecture by John Langford of https://vowpalwabbit.org/ running some form of an N-gram C++-based NLP model, including summary statistics on performance, in less time than wc -l took on the same data. Must have some neat hashing tricks, but it still was cool.
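The "hashing tricks" mentioned are likely the hashing trick VW popularized: n-gram features are hashed straight into a fixed-size weight vector, so no dictionary is ever built or looked up. A hedged sketch of the indexing step (md5 stands in here for VW's actual hash function, MurmurHash):

```python
import hashlib

def hashed_ngram_indices(tokens, n=2, num_bits=18):
    """Map each n-gram to an index in a fixed-size weight array of
    length 2**num_bits by hashing, with no vocabulary dictionary."""
    mask = (1 << num_bits) - 1  # keep only the low num_bits bits
    indices = []
    for i in range(len(tokens) - n + 1):
        gram = " ".join(tokens[i:i + n])
        h = int(hashlib.md5(gram.encode()).hexdigest(), 16)
        indices.append(h & mask)  # slot in the weight vector
    return indices

print(hashed_ngram_indices("the quick brown fox".split()))
```

Collisions are tolerated rather than avoided; with enough bits they barely hurt accuracy, and in exchange feature extraction becomes a single hash per n-gram, which is how a learner can approach the speed of a tool like wc.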

Stats
VowpalWabbit/vowpal_wabbit is an open source project licensed under GNU General Public License v3.0 or later, which is an OSI-approved license.
The primary programming language of vowpal_wabbit is C++.