| | sketch | zillion |
|---|---|---|
| Mentions | 20 | 11 |
| Stars | 2,198 | 154 |
| Growth | 0.9% | - |
| Activity | 4.4 | 7.2 |
| Last commit | 3 months ago | 3 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sketch
-
Ask HN: What have you built with LLMs?
We've made a lot of data tooling things based on LLMs, and are in the process of rebranding and launching our main product.
1. sketch (in notebook, ai for pandas) https://github.com/approximatelabs/sketch
2. datadm (open source, "chat with data", with support for open-source LLMs) (https://github.com/approximatelabs/datadm)
3. Our main product: julyp. https://julyp.com/ (currently under very active rebrand and cleanup) -- a "chat with data" style app with a lot of specialized features. I'm also streaming myself using it (and sometimes building it) every weekday on Twitch to solve misc data problems (https://www.twitch.tv/bluecoconut)
For your next question, about the stack and deploy:
-
Pandas AI – The Future of Data Analysis
This morning I added a "Related Projects" [3] section to the Buckaroo docs. If Buckaroo doesn't solve your problem, look at one of the other linked projects (like Mito).
[1] https://github.com/approximatelabs/sketch
[2] https://github.com/paddymul/buckaroo
[3] https://buckaroo-data.readthedocs.io/en/latest/FAQ.html
-
Ask HN: What's your favorite GPT powered tool?
For GPT/Copilot style help for pandas in a notebook REPL flow (without needing to install plugins), I built sketch. I genuinely use it every time I'm working on pandas dataframes for a quick one-off analysis. It just makes the iteration loop so much faster. (Specifically `.sketch.howto`; anecdotally, I actually don't use `.sketch.ask` anymore.)
https://github.com/approximatelabs/sketch
-
RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
https://github.com/approximatelabs/lambdaprompt has served all of my personal use cases since I made it, including powering `sketch` (copilot for pandas): https://github.com/approximatelabs/sketch
Core things it does: it uses Jinja templates, supports sync and async, and most importantly treats LLM completion endpoints as "function calls", which you can compose and build structures around with simple Python. I also combined it with FastAPI so you can serve any templates you want directly as REST endpoints. It also offers callback hooks so you can log and trace execution graphs.
Altogether it's only ~600 lines of Python.
I haven't had a chance to really push examples of all the different "complex behaviors" out there, so there aren't many patterns to copy. But if you're comfortable in Python, I think it offers a pretty good interface.
I hope to get back to it sometime in the next week to introduce local mode (e.g. all the smaller open-source models are now available; I want to make those first-class).
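The "LLM completion endpoints as function calls" idea can be sketched in plain Python. Everything below (the `complete` stub, the `Prompt` class) is a hypothetical stand-in for illustration, not lambdaprompt's actual API:

```python
# Hypothetical sketch: wrap prompt templates so they compose like functions.
# `complete` is a stub backend; a real implementation would call an LLM API.

def complete(prompt: str) -> str:
    # Stand-in for an LLM completion endpoint.
    return f"<completion of: {prompt}>"

class Prompt:
    """Fill a template, send it to the backend, and return the completion,
    so a prompt behaves like an ordinary callable."""

    def __init__(self, template: str, backend=complete):
        self.template = template
        self.backend = backend

    def __call__(self, **kwargs) -> str:
        return self.backend(self.template.format(**kwargs))

# Composition: feed one prompt's output into another, just like functions.
translate = Prompt("Translate to French: {text}")
summarize = Prompt("Summarize: {text}")

result = summarize(text=translate(text="hello world"))
```

Because each prompt is just a callable, ordinary Python control flow (loops, retries, pipelines) is enough to build structures around them.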
-
[D] The best way to train an LLM on company data
Please look at sketch and the LangChain pandas/SQL plugins. I have seen excellent results with both approaches; note that both will require you to send metadata to OpenAI.
-
Meet Sketch: An AI code Writing Assistant For Pandas
• Understand your data through questions • Create code from plain text. Quick Read: https://www.marktechpost.com/2023/02/01/meet-sketch-an-ai-code-writing-assistant-for-pandas/ Github: https://github.com/approximatelabs/sketch
-
Replacing a SQL analyst with 26 recursive GPT prompts
(3) Asking for re-writes of failed queries (happens occasionally) also helps
I think the main challenge with a lot of these "look, it works" tools for data applications is getting an interface that will actually be easy to adopt. The chat-bot style shown here (Discord and Slack integration) I can see being really valuable, as there has been some traction recently with these kinds of integrations in data catalog systems. People like to ask data questions of other people in Slack; adding a bot that tries to answer might short-circuit a lot of this!
We built a prototype where we applied similar techniques to the pandas-code-writing part of the stack, trying to help keep data scientists / data analysts "in flow" by integrating the code answers in notebooks (similar to how Copilot puts suggestions in-line) -- and released https://github.com/approximatelabs/sketch a little while ago.
-
FLiP Stack Weekly for 21 Jan 2023
Python AI Helper https://github.com/approximatelabs/sketch
- LangChain: Build AI apps with LLMs through composability
- Show HN: Sketch – AI code-writing assistant that understands data content
zillion
-
Let's Talk about Joins
I've also been frustrated when testing out tools that kinda keep you locked into one predetermined view, table, or set of tables at a time. I made a semantic data modeling library that puts together queries (and of course joins) for you using a drill-across querying technique, and it can also join data across different data sources in a secondary execution layer.
https://github.com/totalhack/zillion
Disclaimer: this project is currently a one man show, though I use it in production at my own company.
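Drill-across querying (in the Kimball warehousing sense) means rolling each fact source up to a shared dimensional grain first, then joining the partial results on the common dimensions rather than joining raw rows. A toy illustration with in-memory rows (the data and helper names are invented for the example, not Zillion's API):

```python
# Toy drill-across: aggregate two independent "fact sources" to the same
# grain (here: region), then merge the rolled-up results on that dimension.
from collections import defaultdict

def aggregate(rows, dim, measure):
    # Roll one source up to the shared dimension before any joining happens.
    totals = defaultdict(float)
    for row in rows:
        totals[row[dim]] += row[measure]
    return dict(totals)

sales = [{"region": "east", "revenue": 100.0},
         {"region": "west", "revenue": 50.0},
         {"region": "east", "revenue": 25.0}]
costs = [{"region": "east", "cost": 40.0},
         {"region": "west", "cost": 30.0}]

rev = aggregate(sales, "region", "revenue")
cst = aggregate(costs, "region", "cost")

# Secondary execution layer: join the aggregates, not the raw fact rows,
# so sources with different grains (or different databases) can be combined.
report = {r: {"revenue": rev.get(r, 0.0), "cost": cst.get(r, 0.0)}
          for r in sorted(set(rev) | set(cst))}
```

Joining after aggregation is what avoids the fan-out problems you get when joining fact tables row-by-row, and it is also what makes cross-datasource joins possible at all.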
-
Ask HN: Show me your half baked project
https://github.com/totalhack/zillion
A semantic data warehousing and analytics tool written in Python. It has experimental/half-baked NLP features that let you query your warehouse by interacting with the semantic layer via AI, instead of the usual approach of having an LLM write SQL, which requires it to know your entire schema.
-
So I watched a few videos about Fabric, and started to cry a little...
-
Zillion - Semantic data modeling and analytics with a sprinkle of AI
Hey All, I wanted to share Zillion -- an open source Python data modeling and analytics library with experimental natural language features powered by OpenAI, LangChain, and Qdrant. Zillion acts as a semantic layer on top of your data, writes SQL so you don't have to, and easily bolts onto existing database infrastructure via SQLAlchemy Core.
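The core idea of such a semantic layer -- you ask for metrics and dimensions, it writes the SQL -- can be sketched with nothing but the standard library. The `SemanticLayer` class and field definitions below are invented for illustration and are not Zillion's actual API:

```python
import sqlite3

# Hypothetical minimal semantic layer: metric/dimension definitions in,
# GROUP BY SQL out, so callers never write SQL themselves.
class SemanticLayer:
    def __init__(self, table, dimensions, metrics):
        self.table = table
        self.dimensions = dimensions   # list of column names
        self.metrics = metrics         # metric name -> SQL aggregate expr

    def sql(self, metrics, dimensions):
        select = dimensions + [f"{self.metrics[m]} AS {m}" for m in metrics]
        query = f"SELECT {', '.join(select)} FROM {self.table}"
        if dimensions:
            query += f" GROUP BY {', '.join(dimensions)}"
        return query

layer = SemanticLayer(
    table="orders",
    dimensions=["region"],
    metrics={"revenue": "SUM(amount)", "orders": "COUNT(*)"},
)

# Bolt it onto an existing database (in-memory SQLite for the demo).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("east", 10.0), ("east", 5.0), ("west", 7.5)])

query = layer.sql(metrics=["revenue", "orders"], dimensions=["region"])
rows = sorted(conn.execute(query).fetchall())
```

A real semantic layer adds joins, multiple datasources, and formula fields on top of this, but the request shape (metrics + dimensions in, result grid out) is the same.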
-
Ask HN: Most interesting tech you built for just yourself?
Built it for me, but available to all -- Zillion: a python data modeling and analytics library.
https://github.com/totalhack/zillion
-
Zillion - Data modeling and analytics with a sprinkle of AI
More details/docs can be found in the GitHub repo: https://github.com/totalhack/zillion
-
BabyDS: An AI-powered Data Analysis pipeline
Nice work. I had considered implementing something similar in https://github.com/totalhack/zillion down the road, probably as a layer on top.
-
Ask HN: Those making $0/month or less on side projects β Show and tell
Zillion: https://github.com/totalhack/zillion
A python data warehousing / modeling / analytics library that can unify multiple datasources and writes SQL for you. It's alpha level at the moment and I just slowly chip away when time allows, though I'm using it in production in another project (which does make money).
-
Replacing a SQL analyst with 26 recursive GPT prompts
This seems fun, but certainly unnecessary. All of those questions could be answered in seconds using a warehouse tool like Looker or Metabase or https://github.com/totalhack/zillion (disclaimer: I'm the author and this is alpha-level stuff, though I use it regularly).
-
PRQL a simple, powerful, pipelined SQL replacement
At first glance this seems more confusing, particularly the grouping/aggregation syntax, though I suppose that's something I'd just get used to. Some of the syntactic sugar is nice, but some things are also unlike SQL for no apparent reason which just makes adoption harder than necessary (join syntax for example).
IMO the main selling point would be the "database agnostic" part, but I already achieve that through SQLAlchemy Core and/or a warehouse layer like https://github.com/totalhack/zillion (disclaimer: I'm the author and this is alpha-level stuff, though I use it regularly). It seems like many newer DB technologies/services I'd want to use either speak PostgreSQL or MySQL wire protocol anyway.
The roadmap is worth a read, as it notes some limitations and expected challenges supporting the wide variety of DBMS features and syntax. That said, I can see where this might be useful in the cases where I do have to jump into direct SQL, but want the flexibility to easily switch the back end DB for that code -- that's assuming it can cover the use cases that forced me to write direct SQL in the first place though.
What are some alternatives?
RasaGPT - RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
sqlglot - Python SQL Parser and Transpiler
lmql - A language for constraint-guided and efficient LLM programming.
endoflife.date - Informative site with EoL dates of everything
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
scikit-learn-intelex - Intel(R) Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
pandas-ai - Chat with your database (SQL, CSV, pandas, polars, mongodb, noSQL, etc). PandasAI makes data analysis conversational using LLMs (GPT 3.5 / 4, Anthropic, VertexAI) and RAG.
objectiv-analytics - Open-source product analytics infrastructure for data teams that want full control. Built for high quality data collection and ready to use for advanced analytics & ML.
langchain - Building applications with LLMs through composability [Moved to: https://github.com/langchain-ai/langchain]
nature - The Nature Programming Language, may you be able to experience the joy of programming.
rasa - Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants
Skytrax-Data-Warehouse - A full data warehouse infrastructure with ETL pipelines running inside docker on Apache Airflow for data orchestration, AWS Redshift for cloud data warehouse and Metabase to serve the needs of data visualizations such as analytical dashboards.