MultiEL
Multilingual entity linking model based on the BELA model (by PyThaiNLP)
rebel
REBEL is a seq2seq model that simplifies Relation Extraction (EMNLP 2021). (by Babelscape)
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MultiEL
Posts with mentions or reviews of MultiEL. We have used some of these posts to build our list of alternatives and similar projects.
rebel
Posts with mentions or reviews of rebel. We have used some of these posts to build our list of alternatives and similar projects.
train_epoch_end doesn't compute train metrics, while the same code works with val_epoch_end - any ideas?
Hi all, I am training a transformer based on public code in a GitHub repo (https://github.com/Babelscape/rebel) and noticed that it only collects the loss as a training metric. I would now like to visualize train-set performance alongside validation-set performance, as this would give me further clues about overfitting.
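A common way to get epoch-level training metrics in PyTorch Lightning is to accumulate a metric object batch by batch in `training_step` and then compute and reset it in the epoch-end hook. The sketch below shows that accumulate/compute/reset pattern in plain Python; the `EpochAccuracy` class and the sample batches are illustrative (not from the rebel repo), and with `torchmetrics` the same three-method shape applies.

```python
class EpochAccuracy:
    """Minimal accumulate/compute/reset metric, mirroring the torchmetrics pattern."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        # Accumulate batch statistics; call this once per training batch.
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        # Aggregate over everything seen since the last reset.
        return self.correct / self.total if self.total else 0.0

    def reset(self):
        self.correct = 0
        self.total = 0


train_acc = EpochAccuracy()

# Simulate two batches within one training epoch.
for preds, targets in [([1, 0], [1, 1]), ([1, 1], [1, 1])]:
    train_acc.update(preds, targets)   # in Lightning: inside training_step

epoch_acc = train_acc.compute()        # in Lightning: inside the epoch-end hook
train_acc.reset()                      # reset so the next epoch starts fresh
print(epoch_acc)                       # 0.75
```

If the epoch-end hook appears to see no metrics while the validation counterpart works, a likely cause is that the training path never calls `update` (or logs only per-batch values), so there is nothing left to aggregate when the hook runs.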
What are some alternatives?
When comparing MultiEL and rebel you can also consider the following projects:
this-word-does-not-exist - This Word Does Not Exist
DeepKE - [EMNLP 2022] An Open Toolkit for Knowledge Graph Extraction and Construction