examples vs mwparserfromhell

| | examples | mwparserfromhell |
|---|---|---|
| Mentions | 7 | 5 |
| Stars | 2,525 | 716 |
| Growth | 3.6% | - |
| Activity | 9.3 | 6.6 |
| Latest commit | 1 day ago | about 1 month ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
examples
- RAG with Groq and Llama 3
- Alternative Chunking Methods
- FLaNK AI Weekly for 29 April 2024
- I’m working on making a ChatGPT app with long term memory
- I gave GPT-4 persistent memory and the ability to self improve
- Cheating Is All You Need
https://github.com/openai/openai-cookbook/blob/main/examples...
https://github.com/pinecone-io/examples/blob/master/generati...
https://www.pinecone.io/learn/openai-gen-qa/
https://www.youtube.com/watch?v=tBJ-CTKG2dM&t=787s&ab_channe...
There are more out there but hopefully this gets you started.
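The links above all walk through the same retrieval-augmented generation loop: embed the question, pull the nearest stored chunks from a vector index, and stuff them into the prompt. A minimal sketch of that loop follows; `embed`, `search_index`, and `complete` are hypothetical stand-ins for whatever embedding model, vector database, and LLM client you actually use:

```python
# Sketch only: the three helpers passed in are hypothetical placeholders,
# not the API of any particular library.
def answer(question, embed, search_index, complete, top_k=3):
    query_vec = embed(question)                      # 1. embed the question
    passages = search_index(query_vec, top_k=top_k)  # 2. retrieve nearest chunks
    context = "\n\n".join(passages)                  # 3. build a grounded prompt
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return complete(prompt)                          # 4. let the model answer
```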
- Dev Diary #13 - Cloud Vector DB
mwparserfromhell
- FLaNK AI Weekly for 29 April 2024
- Processing Wikipedia Dumps With Python
There's also https://github.com/earwig/mwparserfromhell, if you don't want to roll your own.
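If you do roll your own, the usual shape is to stream the dump rather than load it whole. A rough sketch, assuming a bz2-compressed pages-articles dump (the file name is illustrative) and Python 3.8+ for the `{*}` namespace wildcard:

```python
import bz2
import xml.etree.ElementTree as ET

DUMP = "enwiki-latest-pages-articles.xml.bz2"  # illustrative file name

with bz2.open(DUMP, "rb") as f:
    # iterparse streams the XML so the multi-GB dump never sits in memory
    for _, elem in ET.iterparse(f, events=("end",)):
        if elem.tag.endswith("}page"):  # tags arrive namespace-qualified
            title = elem.findtext("{*}title")
            wikitext = elem.findtext("{*}revision/{*}text") or ""
            # hand `wikitext` to mwparserfromhell (or your own parser) here
            elem.clear()  # drop the processed page to keep memory flat
```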
- [Python] How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?
In particular, what you're looking at is not XML but wikitext. I found a discussion on Stack Overflow about solving the same problem of extracting plain text from wikitext. The most promising Python solution, since you already have the dump, is to run each page through mwparserfromhell; the top Stack Overflow answer suggests something along the lines of the sketch below.
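The snippet quoted from that answer didn't survive this capture, but the usual pattern is short. `Wikicode.strip_code()` is mwparserfromhell's real API; the word-tallying around it is my own illustration for the dictionary-building use case:

```python
import collections
import re

import mwparserfromhell  # pip install mwparserfromhell

def count_words(wikitext, counter=None):
    """Strip wiki markup from one page's wikitext and tally the plain words."""
    counter = counter if counter is not None else collections.Counter()
    plain = mwparserfromhell.parse(wikitext).strip_code()  # markup -> plain text
    counter.update(word.lower() for word in re.findall(r"\w+", plain))
    return counter

counts = count_words("'''Hello''' [[world]], hello again.")
print(counts.most_common(2))  # [('hello', 2), ('world', 1)]
```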
- How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?
Thank you so much! I was actually talking about the markup language within the text. Turns out it's MediaWiki's own markup (wikitext), and user lowerthansound kindly suggested I use this: https://github.com/earwig/mwparserfromhell
What are some alternatives?
- openai-cookbook - Examples and guides for using the OpenAI API
- wikitextparser - A Python library to parse MediaWiki WikiText
- AssistGPT - A GPT client with long-term memory
- archwiki - MediaWiki used on Arch Linux websites (read-only mirror)
- MiniCPM-V - MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone
- WiktionaryParser - A Python Wiktionary parser
- frawk - An efficient awk-like language
- wikiteam - Tools for downloading and preserving wikis, from Wikipedia to the tiniest ones. As of 2023, WikiTeam has preserved more than 350,000 wikis.
- gptchat - A GPT-4 client which gives your favourite AI a memory and tools for self-improvement
- pywikibot - A Python library that interfaces with the MediaWiki API. This is a mirror from gerrit.wikimedia.org; do not submit patches here. See https://www.mediawiki.org/wiki/Developer_account for contributing.
- isbntools - Python app/framework for 'all things ISBN' including metadata, descriptions, covers...
- wiki_dump - A library that assists in traversing and downloading Wikimedia data dumps and their mirrors.