wembedder
Wikidata embedding (by fnielsen)
marius
Large scale graph learning on a single machine. (by marius-team)
| | wembedder | marius |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 49 | 159 |
| Stars growth | - | 1.9% |
| Activity | 0.0 | 3.9 |
| Last commit | almost 3 years ago | 2 months ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wembedder
Posts with mentions or reviews of wembedder.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-05-28.
-
[D] Graph embeddings of Wikidata items
I have made Wembedder, which uses a simple RDF2Vec model, that you might try. You can download it from https://github.com/fnielsen/wembedder. The current pre-trained model running at https://wembedder.toolforge.org is pretty small, with only around 600,000 Wikidata items, to fit within the size limits of the Toolforge cloud service. It means that the Python programming language is in the model, but not the snake nor Django :/.
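RDF2Vec, mentioned above, works in two stages: generate random walks over the knowledge graph, then feed the walks as "sentences" to a word2vec-style model. A minimal sketch of the walk-generation stage, using a toy graph with illustrative (not real) Wikidata-style identifiers:

```python
import random

# Toy knowledge graph as adjacency lists of (predicate, object) pairs.
# Node and predicate names are illustrative, not real Wikidata IDs.
graph = {
    "Q1": [("P1", "Q2"), ("P2", "Q3")],
    "Q2": [("P1", "Q3")],
    "Q3": [("P2", "Q1")],
}

def random_walks(graph, start, num_walks=5, depth=3, seed=0):
    """Generate predicate-aware random walks (the RDF2Vec 'sentences')."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        walk = [start]
        node = start
        for _ in range(depth):
            edges = graph.get(node)
            if not edges:
                break
            pred, node = rng.choice(edges)
            walk += [pred, node]  # keep predicates in the walk
        walks.append(walk)
    return walks

walks = random_walks(graph, "Q1")
# Each walk alternates entities and predicates, e.g. ['Q1', 'P1', 'Q2', ...]
```

In a real pipeline these walks would then be passed to a word2vec implementation (e.g. gensim's `Word2Vec`) to produce one vector per entity.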
marius
Posts with mentions or reviews of marius.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-05-28.
-
[D] Graph embeddings of Wikidata items
I'm not aware of any other publicly available pre-trained embeddings on Wikidata. However, I'm the author of a recently published competing open-source system and one of my TODOs is to train DistMult embeddings on Wikidata and make them publicly available.
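DistMult, mentioned above, scores a (head, relation, tail) triple with a trilinear dot product over the three embedding vectors. A minimal sketch with random toy embeddings (in practice these would be trained on Wikidata triples; the entity/relation names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Toy embeddings standing in for trained ones.
entities = {name: rng.normal(size=dim) for name in ["Q1", "Q2"]}
relations = {name: rng.normal(size=dim) for name in ["P1"]}

def distmult_score(head, rel, tail):
    """DistMult: score(h, r, t) = sum_i h_i * r_i * t_i."""
    return float(np.sum(entities[head] * relations[rel] * entities[tail]))

score = distmult_score("Q1", "P1", "Q2")
```

One known design consequence: because elementwise multiplication is commutative, DistMult assigns the same score to (h, r, t) and (t, r, h), so it cannot distinguish the direction of asymmetric relations.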
What are some alternatives?
When comparing wembedder and marius you can also consider the following projects:
danker - Compute PageRank on >3 billion Wikipedia links on off-the-shelf hardware.
Fast_Sentence_Embeddings - Compute Sentence Embeddings Fast!