Jupyter Notebook gpt-4

Open-source Jupyter Notebook projects categorized as gpt-4

Top 16 Jupyter Notebook gpt-4 Projects

  • autogen

    A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap

  • Project mention: FLaNK AI Weekly 25 March 2024 | dev.to | 2024-03-25
  • Promptify

    Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output (see the sketch after this entry). Join our Discord for prompt engineering, LLMs, and the latest research

  • Project mention: Promptify 2.0: More Structured, More Powerful LLMs with Prompt-Optimization, Prompt-Engineering, and Structured Json Parsing with GPT-n Models! 🚀 | /r/ArtificialInteligence | 2023-07-31

    First up, a huge thank you for making Promptify a hit with over 2.3k stars on GitHub! 🌟
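    The structured-output idea above is easy to sketch even without Promptify itself: ask the model for JSON only, then parse the reply. This generic snippet uses the OpenAI Python client rather than Promptify's own API; the prompt and the `entities` schema are made up for illustration.

```python
# Generic prompt-based structured output (not Promptify's API): ask the
# model for JSON only, then parse it. Schema and prompt are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Extract named entities from the text below. Return ONLY JSON of the "
    'form {"entities": [{"text": "...", "type": "..."}]}.\n\n'
    "Text: Marie Curie won the Nobel Prize in Physics in 1903."
)
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

data = json.loads(reply)  # in practice, retry or repair on JSONDecodeError
print(data["entities"])
```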

  • awesome-generative-ai

    A curated list of Generative AI tools, works, models, and references (by filipecalegario)

  • Project mention: Generative AI – A curated list of Generative AI tools, works, models | news.ycombinator.com | 2023-07-14
  • chameleon-llm

    Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".

  • Project mention: Giving GPT “Infinite” Knowledge | news.ycombinator.com | 2023-05-08

    > Do you know any active research in this area? I briefly considered playing with this, but my back-of-the-envelope semi-educated feeling for now is that it won't scale.

    I am aware of a couple of potentially promising research directions. One formally academic called Chameleon [0], and one that's more like a grassroots organic effort that aims to build an actually functional Auto-GPT-like, called Agent-LLM [1]. I have read the Chameleon paper, and I must say I'm quite impressed with their architecture. It added a few bits and pieces that most of the early GPT-based agents didn't have, and I have a strong intuition that these will contribute to these things actually working.

    Auto-GPT is another, relatively famous piece of work in this area. However, at least as of v0.2.2, I found it relatively underwhelming. For any online knowledge retrieval+synthesis and retrieval+usage tasks it seemed to get stuck, though it did sort-of-kind-of OK on plain online knowledge retrieval. After having a look at the Auto-GPT source code, my intuition (yes, I know - "fuzzy feelings without a solid basis" - but I believe that's simply down to my not having an AI background to word this with crystal clarity) is that the poor performance of the current version of Auto-GPT comes down to insufficient skill in prompt-chain architecture and to the surprisingly low-quality, at times buggy, code.

    I think Auto-GPT has some potential. I think the implementation lets down the concept, but that's just a question of refactoring the prompts and the overall code - which it seems the upstream GitHub repo has been quite busy with, so I might give it another go in a couple of weeks to see how far it's moved forward.

    > Specifically, as task complexity grows, the amount of results to combine will quickly exceed the context window size of the "combiner" GPT-4. Sure, you can stuff another layer on top, turning it into a tree/DAG, but eventually, I think the partial result itself will be larger than 8k, or even 32k tokens - and I feel this "eventually" will be hit rather quickly. But maybe my feelings are wrong and there is some mileage in this approach.

    Auto-GPT uses an approach based on summarisation and something I'd term 'micro-agents'. For example, when Auto-GPT is searching for an answer to a particular question online, for each search result it finds it spins up a sub-chain that gets asked 'What does this page say about X?' or 'Based on the contents of this page, how can you do Y?'. Ultimately, intelligence is about lossy compression, and this is starkly exposed when it comes to LLMs, because you have no choice but to lose some information.
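    A minimal sketch of that 'micro-agent' pattern, assuming the modern OpenAI Python client; the helper name, prompts, and `pages` input are illustrative, not Auto-GPT's actual code:

```python
# Illustrative "micro-agent" pattern: one throwaway sub-chain per page,
# each producing a lossy summary relative to the original question.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask_page(page_text: str, question: str) -> str:
    """One-shot sub-chain that reads a single page and answers about it."""
    return client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the page provided."},
            {"role": "user", "content": f"Page:\n{page_text}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content

pages = ["<text of search result 1>", "<text of search result 2>"]
summaries = [ask_page(p, "What does this page say about X?") for p in pages]
```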

    > I think the partial result itself will be larger than 8k, or even 32k tokens - and I feel this "eventually" will be hit rather quickly. But maybe my feelings are wrong and there is some mileage in this approach.

    The solution to that would be to synthesize output section by section, or even as an "output stream" that can be captured and/or edited outside the LLM, in whole or in chunks. IMO there's some mileage to be exploited in a recursive "store, summarise, synthesise" approach, but the problem will be that of signal loss. Every time you pass a subtask to a sub-agent, or summarise the outcome of that sub-agent into your current knowledge base, some noise is introduced. It might be that the signal-to-noise ratio will dissipate as higher- and higher-order LLM chains are used - analogously to how impractical electricity and radio waves were before any amplification technology became available.
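    A toy sketch of that recursive "store, summarise, synthesise" tree; here `combine` stands in for a hypothetical LLM summarisation call, and the character budget is a crude proxy for a token limit:

```python
# Each pass folds partial results into batches that fit the context
# window, forming one layer of the tree/DAG; noise accrues per layer.
MAX_CHARS = 8_000  # crude stand-in for an 8k-token context budget

def synthesise(parts: list[str], combine) -> str:
    """combine(list_of_texts) -> text, called only on batches that fit."""
    while len(parts) > 1:
        next_level, batch, size = [], [], 0
        for part in parts:
            if batch and size + len(part) > MAX_CHARS:
                next_level.append(combine(batch))  # lossy per-layer step
                batch, size = [], 0
            batch.append(part)
            size += len(part)
        next_level.append(combine(batch))
        parts = next_level
    return parts[0]

# e.g. synthesise(summaries, lambda b: ask_llm("Merge these notes:\n" + "\n".join(b)))
# where ask_llm is a hypothetical single-call LLM helper.
```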

    One possible avenue for counteracting the decreasing SNR (based on my own original research, though I can also see some people disclosing online that they are exploring the same path) is to have a second LLM in the loop, double-checking the result of the first one. This has some limitations, but I have successfully used this approach to verify that, for example, the LLM does not outright refuse to carry out a task. This is currently cost-prohibitive to do in a way that would make me personally satisfied and confident enough in the output to run it full-auto, but I expect that the increasing ability to run AI locally will make people more willing to experiment with massive layering of cooperating LLM chains that check each other's work, cooperate, and/or even repeat work using different prompts to pick the best output, a la redundant avionics computers.
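    A sketch of that second-LLM check, again with the OpenAI client; the verifier prompt and the YES/NO protocol are illustrative choices, not a published recipe:

```python
# "Second LLM in the loop": a verifier model judges whether the first
# model's answer genuinely attempted the task (vs. refusing/deflecting).
from openai import OpenAI

client = OpenAI()

def verify(task: str, answer: str) -> bool:
    """Ask a second model whether the first one actually did the task."""
    verdict = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Task: {task}\nAnswer: {answer}\n"
                "Did the answer genuinely attempt the task rather than "
                "refuse or deflect? Reply with exactly YES or NO."
            ),
        }],
    ).choices[0].message.content
    return verdict.strip().upper().startswith("YES")
```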

    [0]: https://github.com/lupantech/chameleon-llm

  • Get-Things-Done-with-Prompt-Engineering-and-LangChain

    LangChain & Prompt Engineering tutorials on Large Language Models (LLMs) such as ChatGPT, with custom data. Jupyter notebooks on loading and indexing data, creating prompt templates, CSV agents, and using retrieval QA chains to query the custom data (a minimal retrieval-QA sketch follows this entry). Also includes projects for chatting with PDF files using a private LLM (Llama 2) and for tweet sentiment analysis.

  • Project mention: Get-Things-Done-with-Prompt-Engineering-and-LangChain: NEW Data - star count:617.0 | /r/algoprojects | 2023-12-10
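    The retrieval-QA sketch mentioned above, using LangChain's classic (pre-0.1) import paths, which may differ in newer LangChain releases; the data file name is a placeholder:

```python
# Classic LangChain retrieval QA: load -> split -> embed/index -> query.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

docs = TextLoader("my_data.txt").load()                       # load
chunks = RecursiveCharacterTextSplitter(chunk_size=1000).split_documents(docs)
index = FAISS.from_documents(chunks, OpenAIEmbeddings())      # index
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=index.as_retriever())
print(qa.run("What does my data say about X?"))               # query
```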
  • miyagi

    Sample to envision intelligent apps with Microsoft's Copilot stack for AI-infused product experiences.

  • Project mention: Project Miyagi – Financial Coach | news.ycombinator.com | 2023-05-09
  • sports

    Cool experiments at the intersection of Computer Vision and Sports ⚽🏃

  • Azure-Cognitive-Search-Azure-OpenAI-Accelerator

    Virtual Assistant - GPT Smart Search Engine - Bot Framework + Azure OpenAI + Azure AI Search + Azure SQL + Bing API + Azure Document Intelligence + LangChain + CosmosDB

  • Project mention: Microsoft AI chat bot inside MS Teams | /r/PowerPlatform | 2023-07-10

    This is pretty much it: https://github.com/MSUSAzureAccelerators/Azure-Cognitive-Search-Azure-OpenAI-Accelerator

  • generative-manim

    🎨 GPT-4 for video generation ⚡️

  • Project mention: Intuitive Guide to Convolution | news.ycombinator.com | 2023-12-04

    https://github.com/360macky/generative-manim :

    > Generative Manim is a prototype of a web app that uses GPT-4 to generate videos with Manim. The idea behind this project is to take advantage of GPT-4's power in programming and understanding of human language, together with the animation capabilities of Manim, to build a tool anyone could use to create videos, regardless of their programming or video-editing skills.

    "TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/wenhuchen/TheoremQA#leaderboard

    How do you score memory retention and video-watching comprehension? That's the classic educators' optimization challenge.

    "Khan Academy’s 7-Step Approach to Prompt Engineering for Khanmigo"

  • langforge

    A Toolkit for Creating and Deploying LangChain Apps

  • Smarty-GPT

    A wrapper for LLMs that biases their behaviour using prompts and contexts, in a manner transparent to end users

  • awesome-chatgpt-plugins

    A curated and categorized list of all the ChatGPT plugins available within ChatGPT Plus; includes detailed descriptions and usage docs, as well as unofficial sources of plugins (by HighwayofLife)

  • Project mention: The ChatGPT Plugin Descriptions are Terrible, so I fixed them | /r/coolaitools | 2023-07-21
  • OpenAI-Assistants-Template

    Build and deploy AI-driven assistants with our OpenAI Assistants Template. This tutorial provides a hands-on approach to using OpenAI's Assistants API, complete with code modules, interactive Jupyter Notebook examples, and best practices to get you started on creating intelligent conversational agents (a minimal flow sketch follows below).

  • Project mention: OpenAI Assistant Template | dev.to | 2024-03-15
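    For orientation, here is a minimal pass through the (beta) Assistants API flow such a template wraps; the endpoint names follow the openai-python beta namespace at the time of writing, and the model name and instructions are placeholders:

```python
# Minimal Assistants API round trip: create assistant -> thread ->
# message -> run, then poll and read the reply. Beta API; names may change.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Tutor", instructions="Answer concisely.", model="gpt-4-turbo")
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Explain embeddings in one line.")
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id)

while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)  # simple polling loop
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)  # newest message first
```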
  • TinyStories

    Code to train a GPT-2 model on the TinyStories dataset, following the TinyStories paper (a minimal training sketch follows below)

  • Project mention: [P] Code to config a model similar to TinyStories paper | /r/MachineLearning | 2023-05-19

    Take a look: https://github.com/sleepingcat4/TinyStories
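    A hedged sketch of that training setup with Hugging Face; the dataset id "roneneldan/TinyStories", the hyperparameters, and the 1% split (to keep the demo cheap) are all assumptions, not taken from the repo:

```python
# Fine-tune GPT-2 on TinyStories: tokenize the text column, then run
# a causal-LM Trainer with a standard data collator.
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

ds = load_dataset("roneneldan/TinyStories", split="train[:1%]")  # assumed id
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("tinystories-gpt2",
                           per_device_train_batch_size=8, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```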

  • ai_book

    AI book for everyone

  • staplechain

    Signed, In-band Annotations for Language Model Outputs

  • Project mention: Show HN: StapleChain – Signed, in-band annotations for language model outputs | news.ycombinator.com | 2023-05-01
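    As a toy illustration of the "signed, in-band annotations" idea (not StapleChain's actual format or key handling), one can staple an HMAC over the model output into the text itself:

```python
# Toy signed in-band annotation: append an HMAC tag to the output and
# verify it later. Key, marker format, and layout are made up for demo.
import hmac, hashlib

SECRET = b"demo-key"  # hypothetical signing key

def staple(output: str) -> str:
    tag = hmac.new(SECRET, output.encode(), hashlib.sha256).hexdigest()
    return f"{output}\n[annotation sig={tag}]"

def verify(stapled: str) -> bool:
    output, _, note = stapled.rpartition("\n[annotation sig=")
    expected = hmac.new(SECRET, output.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(note.rstrip("]"), expected)

signed = staple("The capital of France is Paris.")
assert verify(signed)
```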
NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months, or since we started tracking (Dec 2020).

Index

What are some of the best open-source gpt-4 projects in Jupyter Notebook? This list will help you:

#  Project  Stars
1 autogen 24,917
2 Promptify 3,020
3 awesome-generative-ai 1,971
4 chameleon-llm 1,017
5 Get-Things-Done-with-Prompt-Engineering-and-LangChain 943
6 miyagi 616
7 sports 438
8 Azure-Cognitive-Search-Azure-OpenAI-Accelerator 278
9 generative-manim 201
10 langforge 163
11 Smarty-GPT 142
12 awesome-chatgpt-plugins 126
13 OpenAI-Assistants-Template 67
14 TinyStories 27
15 ai_book 18
16 staplechain 10
