| | pymobiledevice3 | llama_index |
|---|---|---|
| Mentions | 4 | 75 |
| Stars | 1,027 | 31,184 |
| Growth | - | 4.7% |
| Activity | 9.7 | 10.0 |
| Last commit | 1 day ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
- Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
- Activity: a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
pymobiledevice3
- FLaNK Stack Weekly for 27 November 2023
- [$25][12.5.5] Modify MGSpoof to spoof the UDID reported by Lockdownd
  However, when checking with some basic Python on the computer (specifically using https://github.com/doronz88/pymobiledevice3 to communicate with lockdownd), it still reported the device's actual UDID. The tweak hooks _MGCopyAnswer, and lockdownd appears to get the UDID using that exact method. This can be seen by opening the lockdownd binary in the Ghidra disassembler: https://imgur.com/a/Hz6R5zT
- Sniffing syscalls on macOS and iOS made easy
- [TOOL] pymobiledevice3 is a better libimobiledevice purely in python
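The lockdownd check described above can be sketched in a few lines. This is an illustrative sketch, assuming a USB-connected device and the pymobiledevice3 package; the API names follow the library's lockdown module, so treat them as an approximation rather than a verified recipe:

```python
def get_device_udid():
    """Ask lockdownd for the UDID of the first USB-connected device.

    Querying lockdownd from the computer bypasses any on-device hooks
    of _MGCopyAnswer, which is why the tweak above failed this check.
    The import is deferred so the sketch reads standalone without the
    pymobiledevice3 package installed.
    """
    from pymobiledevice3.lockdown import create_using_usbmux

    lockdown = create_using_usbmux()  # first device found via usbmuxd
    return lockdown.get_value(key='UniqueDeviceID')
```

Running this against a device whose tweak only hooks userland _MGCopyAnswer callers should still print the real UDID, matching the Ghidra observation above.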
llama_index
- LlamaIndex: A data framework for your LLM applications
- FLaNK AI - 01 April 2024
- Show HN: Ragdoll Studio (fka Arthas.AI) is the FOSS alternative to character.ai
  For anyone curious about llamaindex's "prompt mixins": they're actually dead simple: https://github.com/run-llama/llama_index/blob/8a8324008764a7... - and may no longer be supported. I basically reinvented this wheel in ragdoll but made it more dynamic: https://github.com/bennyschmidt/ragdoll/blob/master/src/util...
- LlamaIndex is a data framework for your LLM applications
- How to verify that a snippet of Python code doesn't access protected members
- 🆓 Local & Open Source AI: a kind ollama & LlamaIndex intro
  Being able to plug in third-party frameworks (Langchain, LlamaIndex) lets you build complex projects.
- I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
  Mistral Instruct does use a system prompt. You can see the raw format here: https://www.promptingguide.ai/models/mistral-7b#chat-templat... and you can see how LlamaIndex uses it here (as an example): https://github.com/run-llama/llama_index/blob/1d861a9440cdc9...
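The Mistral Instruct chat template mentioned above can be sketched as a small formatter. This is an illustrative reimplementation based on the publicly documented `[INST] ... [/INST]` wrapping, not LlamaIndex's actual utility; the function name and message shape are assumptions:

```python
def format_mistral_instruct(messages, system_prompt=None):
    """Flatten (role, text) chat turns into one Mistral-Instruct prompt.

    Mistral Instruct has no dedicated system role, so a system prompt is
    conventionally prepended to the first user message. User turns are
    wrapped in [INST] ... [/INST]; assistant turns follow unwrapped and
    end with the </s> stop token.
    """
    parts = ["<s>"]
    first_user = True
    for role, text in messages:
        if role == "user":
            if first_user and system_prompt:
                text = f"{system_prompt}\n\n{text}"
            first_user = False
            parts.append(f"[INST] {text} [/INST]")
        else:  # assistant turn
            parts.append(f"{text}</s>")
    return "".join(parts)
```

For example, `format_mistral_instruct([("user", "Hi")], system_prompt="Be brief.")` folds the system prompt into the first (and only) user turn.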
- Top 5 Vector Database Videos of 2023 🎥
  Learn how to use Milvus as persistent vector storage with LlamaIndex in under 5 minutes.
- What's going on in the Zilliz Universe? December 2023
  ▶️ Read Blog 📷 Watch Demo 🦙 Notebook using Pipelines inside LlamaIndex
- First 15 Open Source Advent projects
  15. LlamaIndex | Github | tutorial
What are some alternatives?
KivyMD - KivyMD is a collection of Material Design compliant widgets for use with Kivy, a framework for cross-platform, touch-enabled graphical applications. https://youtube.com/c/KivyMD https://twitter.com/KivyMD https://habr.com/ru/users/kivymd https://stackoverflow.com/tags/kivymd
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
afc-gui - GUI for the asus-fan-control project
langchain - 🦜🔗 Build context-aware reasoning applications
MGSpoof - Hook MGCopyAnswer + custom helper so user can spoof some keys
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
chatgpt-retrieval-plugin - The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
kivy - Open source UI framework written in Python, running on Windows, Linux, macOS, Android and iOS
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama-recipes - Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods, covering single- and multi-node GPUs. Supports default and custom datasets for applications such as summarization and Q&A, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama3 for WhatsApp & Messenger.
gpt-llama.cpp - A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.