|  | Giveme5W1H | datefinder |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 500 | 625 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 8 months ago | about 1 year ago |
| Language | HTML | HTML |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Giveme5W1H
-
Date extraction from text: code/APIs
https://github.com/fhamborg/Giveme5W1H (if you can get it running; I was unable to, so maybe try Python <3.8)
datefinder
-
Sneller Regex vs Ripgrep
That's with DFA minimization. Also, `\w` has 311 states while `(?-u)\w` has 5 states.
I don't have a precise definition of enormous or impractical. Does it matter? I suppose one obvious one is when DFA construction time starts having a significant impact on total search times.
> Additionally, the results are not the same: the number of matches is not equal to 7882. How could I make `\w` conform to other regex implementations in ripgrep?
By following UTS#18: https://unicode.org/reports/tr18/#word
Most regex engines make `\w` ASCII-only by default, but most also have a way to opt into Unicode-aware mode. RE2, Go's regexp, and ECMAScript are popular regex engines that have no way to change the interpretation of `\w`.
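Python's `re` module is an example of an engine that does let you choose: `\w` is Unicode-aware by default, and the `re.ASCII` flag restricts it to `[a-zA-Z0-9_]`. A minimal sketch (the example strings are mine, not from the thread):

```python
import re

text = "naïve café"

# Default: \w is Unicode-aware, so accented letters count as word characters.
print(re.findall(r"\w+", text))            # ['naïve', 'café']

# re.ASCII restricts \w to [a-zA-Z0-9_], so matches break at accented letters.
print(re.findall(r"\w+", text, re.ASCII))  # ['na', 've', 'caf']
```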
> It's a fair question how regex compilers handle nefarious regexes. Go does not handle NFAs with more than 1000 states, and, as you observed, we added some more restrictions when processing the NFA. It can be an interesting academic exercise to find monstrous regexes, but we haven't encountered useful regexes that hit these limits. But I guess you know some...
It's definitely not academic. People use regexes for lexers. People use big regexes to recognize certain things like email addresses and dates. Here's a real regex used in real software to identify dates in unstructured text for example: https://github.com/akoumjian/datefinder/blob/5376ece0a522c44...
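The linked datefinder pattern is far too large to reproduce here, but a toy version conveys the shape. The pattern below is my own heavily simplified sketch, not datefinder's actual regex:

```python
import re

# Hypothetical, simplified date matcher: abbreviated or full month name,
# day of month, optional ordinal suffix, optional 4-digit year.
DATE = re.compile(
    r"\b(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+"
    r"\d{1,2}(?:st|nd|rd|th)?(?:,\s*\d{4})?\b",
    re.IGNORECASE,
)

text = "Released on March 3rd, 2021 and patched Dec 25."
print([m.group(0) for m in DATE.finditer(text)])  # ['March 3rd, 2021', 'Dec 25']
```

A production pattern additionally has to handle numeric formats, month/day order, time zones, and delimiters, which is how these regexes grow to the size seen in datefinder.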
Otherwise, as I hinted at above, the thing that can make regexes very large very quickly is when you mix Unicode classes with counted repetitions. It doesn't take a lot to make them "big."
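To get a feel for why Unicode classes inflate things, you can count how many code points `\w` accepts in Python; a DFA-based engine has to account for a class of this size once per counted repetition. The exact totals below depend on the Unicode version and are my own measurement, not figures from the thread:

```python
import re
import sys

word = re.compile(r"\w")

# Count every code point that Unicode-aware \w accepts.
n_unicode = sum(1 for cp in range(sys.maxunicode + 1) if word.match(chr(cp)))
print(n_unicode)  # well over 100,000 code points on modern Unicode

# With the (?a) (ASCII) flag, \w is just [a-zA-Z0-9_]: 63 code points.
n_ascii = sum(1 for cp in range(128) if re.match(r"(?a)\w", chr(cp)))
print(n_ascii)  # 63
```

Multiply a class like that by a repetition such as `\w{1,100}` and the compiled automaton gets "big" very quickly, which is the effect described above.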
Is there a Python library for reading human-written times?
-
Tuesday Daily Thread: Advanced questions
Looking at this issue, it seems a recent pull request should fix the strict-mode problem. That said, the pull request is still open due to a failing test, so you can either build from source with the pull request applied or, as the comments in the issue suggest, look at dateparser; it might suit your needs.
What are some alternatives?
FARM - 🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
dateparser - python parser for human readable dates
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
timefhuman - Convert natural language date-like strings--dates, date ranges, and lists of dates--to Python objects
ctparse - Parse natural language time expressions in python
Sherlock - Natural-language event parser for Javascript
duckling - Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.
pyate - PYthon Automated Term Extraction
extractnet - A fork of Dragnet that also extracts the author, headline, date, and keywords from content, with built-in metadata extraction, all in one package
sneller - World's fastest log analysis: λ + SQL + JSON + S3
haxe.io - The home of the Haxe Roundups (work in progress)
Crafting Interpreters - Repository for the book "Crafting Interpreters"