marker vs LocalAI

| | marker | LocalAI |
|---|---|---|
| Mentions | 8 | 83 |
| Stars | 8,225 | 19,862 |
| Growth | - | 8.3% |
| Activity | 7.8 | 9.9 |
| Latest commit | 4 days ago | 5 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marker
-
LlamaCloud and LlamaParse
You may want to try https://github.com/VikParuchuri/surya (I'm the author). I've only benchmarked against tesseract, but it outperforms it by a lot (benchmarks in repo). Happy to discuss.
You could also try https://github.com/VikParuchuri/marker for general PDF parsing (I'm also the author) - it seems like you're more focused on tables.
-
Show HN: Texify – OCR math images to LaTeX and Markdown
Hi HN - I made texify to convert equations to Markdown/LaTeX for my project marker [1], then realized it could be generally useful.
Texify converts equations and surrounding text to Markdown, with embedded LaTeX (MathJax compatible).
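For illustration (this is hand-written, not actual texify output), MathJax-compatible Markdown with embedded LaTeX mixes inline and block math like this:

```markdown
The determinant of a $2 \times 2$ matrix is $ad - bc$, and the quadratic
formula is usually displayed as a block equation:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
```

Inline math sits in single dollar signs within running text; display equations get their own double-dollar block, which MathJax renders directly.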
You can either use a GUI to select equations (inline or block) from PDFs and images to convert, or use the CLI to batch convert images. It works on CPU, GPU, or MPS (Mac).
The closest open source comparisons are pix2tex and nougat - texify is more accurate than both of them for this task. However, nougat is more for entire pages, and pix2tex is more for block equations (not inline equations and text).
I trained texify for 2 days on 4x A6000 GPUs - I was pleasantly surprised how far I could get with limited GPU resources by reframing the problem to use small parameter counts/images.
Texify is licensed for commercial use, with the weights under CC-BY-SA 4.0. Find them here - https://huggingface.co/vikp/texify
See the texify repo for more details, benchmarks, how to install, etc.
[1] https://github.com/VikParuchuri/marker
-
Show HN: Talk to any ArXiv paper just by changing the URL
https://github.com/VikParuchuri/marker
Both are tools to convert PDFs into LaTeX or Markdown with LaTeX formulas. Maybe that helps.
- FLaNK Stack Weekly 11 Dec 2023
- Marker: Convert PDF to Markdown quickly with high accuracy
- FLaNK Stack for 04 December 2023
LocalAI
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
-
What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
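For instance, a minimal server setup might look like this (the Docker image tag, port, and endpoint are assumptions on my part; the LocalAI README has current instructions):

```shell
# Pull and start LocalAI's CPU-only image (tag is illustrative)
docker run -p 8080:8080 -ti localai/localai:latest-cpu

# It exposes OpenAI-compatible routes, e.g. listing loaded models:
curl http://localhost:8080/v1/models
```

From there, any OpenAI-compatible client can be pointed at `http://localhost:8080` instead of the OpenAI endpoint.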
-
OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
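The "drop-in" part works because LocalAI serves the same HTTP routes as OpenAI, so a client only needs a different base URL. A minimal stdlib-only sketch of building such a request (the local endpoint and model name are assumptions about a hypothetical setup, not values from the LocalAI docs):

```python
import json
from urllib import request

# LocalAI serves the same /v1/chat/completions route as OpenAI;
# only the base URL (and optionally the API key) changes.
BASE_URL = "http://localhost:8080"  # assumed local LocalAI instance


def build_chat_request(model, messages, base_url=BASE_URL):
    """Build an OpenAI-style chat completion request aimed at a LocalAI server."""
    payload = {"model": model, "messages": messages}
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request("mistral-7b", [{"role": "user", "content": "Hello"}])
# req is ready to send with urllib.request.urlopen(req) once a server is running
```

Because the request shape is identical, swapping back to OpenAI is just changing `base_url` and adding an `Authorization` header.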
-
"Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
-
Local LLM's to run on old iMac / Hardware
Your hardware should be fine for inference, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inference process to 2 or 3 threads. That should get it working; I've been able to run inference with even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
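The thread cap mentioned above can be set per model in LocalAI's YAML model config. A hedged sketch (the field names and model filename below are my assumptions based on LocalAI's documented config format; verify against the docs for your version):

```yaml
# models/tinyllama.yaml - illustrative LocalAI model definition
name: tinyllama-chat
threads: 2            # cap inference at 2 CPU threads
context_size: 2048
parameters:
  model: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
```

Clients then request the model by its `name` field, and the server keeps inference within the configured thread budget.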
What are some alternatives?
voyager - 🛰️ An approximate nearest-neighbor search library for Python and Java with a focus on ease of use, simplicity, and deployability.
gpt4all - gpt4all: run open-source LLMs anywhere
llmsherpa - Developer APIs to Accelerate LLM Projects
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
PyMuPDF - PyMuPDF is a high performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.
llama-cpp-python - Python bindings for llama.cpp
node-gtk - GTK+ bindings for NodeJS (via GObject introspection)
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
langchain4j - Java version of LangChain
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.