Jupyter Notebook chatgpt

Open-source Jupyter Notebook projects categorized as chatgpt

Top 18 Jupyter Notebook chatgpt Projects

  • openai-cookbook

    Examples and guides for using the OpenAI API

    Project mention: Using GPT-4 as “reference-free” evaluators | news.ycombinator.com | 2023-08-17
  • FinGPT

    Data-Centric FinGPT. Open-source for open finance! 🔥 We release the trained model on HuggingFace.

    Project mention: FLaNK Stack Weekly for 20 June 2023 | dev.to | 2023-06-20

  • awesome-generative-ai

    A curated list of Generative AI tools, works, models, and references (by filipecalegario)

    Project mention: Generative AI – A curated list of Generative AI tools, works, models | news.ycombinator.com | 2023-07-14
  • chameleon-llm

    Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".

    Project mention: Giving GPT “Infinite” Knowledge | news.ycombinator.com | 2023-05-08

    > Do you know any active research in this area? I briefly considered playing with this, but my back-of-the-envelope semi-educated feeling for now is that it won't scale.

    I am aware of a couple of potentially promising research directions. One is a formal academic effort called Chameleon [0], and one is more of a grassroots organic effort that aims to build an actually functional Auto-GPT-like agent, called Agent-LLM [1]. I have read the Chameleon paper, and I must say I'm quite impressed with their architecture. It added a few bits and pieces that most of the early GPT-based agents didn't have, and I have a strong intuition that these will contribute to such agents actually working.

    Auto-GPT is another, relatively famous piece of work in this area. However, at least as of v0.2.2, I found it relatively underwhelming. For any online knowledge retrieval+synthesis and retrieval+usage tasks it seemed to get stuck, though it did sort-of-kind-of OK on plain online knowledge retrieval. After having a look at the Auto-GPT source code, my intuition (yes, I know - "fuzzy feelings without a solid basis" - but I don't have the AI background to put this in crystal-clear wording) is that the current version's poor performance stems from insufficient skill in prompt-chain architecture, combined with surprisingly low-quality and at times buggy code.

    I think Auto-GPT has some potential. The implementation lets down the concept, but that's just a question of refactoring the prompts and the overall code - which the upstream GitHub repo seems to have been quite busy with, so I might give it another go in a couple of weeks to see how far it's moved forward.

    > Specifically, as task complexity grows, the amount of results to combine will quickly exceed the context window size of the "combiner" GPT-4. Sure, you can stuff another layer on top, turning it into a tree/DAG, but eventually, I think the partial result itself will be larger than 8k, or even 32k tokens - and I feel this "eventually" will be hit rather quickly. But maybe my feelings are wrong and there is some mileage in this approach.

    Auto-GPT uses an approach based on summarisation and something I'd term 'micro-agents'. For example, when Auto-GPT is searching for an answer to a particular question online, for each search result it finds, it spins up a sub-chain that gets asked 'What does this page say about X?' or 'Based on the contents of this page, how can you do Y?'. Ultimately, intelligence is about lossy compression, and this is starkly exposed when it comes to LLMs, because you have no choice but to lose some information.
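
    A minimal sketch of that "micro-agent" pattern. The helper names, the hypothetical search() function, and the use of the pre-1.0 openai Python client are illustrative assumptions, not Auto-GPT's actual code:

        import openai

        def ask_micro_agent(page_text: str, question: str) -> str:
            """Throwaway sub-chain: answers one narrow question about one page."""
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": f"Based on the page below, {question}\n\n{page_text[:8000]}",
                }],
            )
            return resp["choices"][0]["message"]["content"]

        def research(query: str, question: str) -> list[str]:
            # search() is a hypothetical stand-in for the agent's web-search
            # tool; only the compressed per-page answers flow back to the parent.
            return [ask_micro_agent(page, question) for page in search(query)]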

    > I think the partial result itself will be larger than 8k, or even 32k tokens - and I feel this "eventually" will be hit rather quickly. But maybe my feelings are wrong and there is some mileage in this approach.

    The solution to that would be to synthesize output section by section, or even as an "output stream" that can be captured and/or edited outside the LLM, in whole or in chunks. I do think there's some mileage to be exploited in a recursive "store, summarise, synthesise" approach, but the problem will be signal loss. Every time you pass a subtask to a sub-agent, or summarise the outcome of that sub-agent into your current knowledge base, some noise is introduced. The signal-to-noise ratio may degrade as higher and higher-order LLM chains are used - analogously to how impractical electricity and radio waves were before any amplification technology became available.
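
    A sketch of that recursive "store, summarise, synthesise" idea, again assuming the pre-1.0 openai client; the fan-in value and prompt wording are arbitrary. Each fold is lossy, which is exactly the signal-loss concern above:

        import openai

        def summarise(text: str) -> str:
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user",
                           "content": f"Summarise, preserving key facts:\n\n{text}"}],
            )
            return resp["choices"][0]["message"]["content"]

        def synthesise(chunks: list[str], fan_in: int = 8) -> str:
            # Fold the chunk list level by level (a tree of summaries) until
            # a single summary remains; noise accumulates at every level.
            while len(chunks) > 1:
                chunks = [summarise("\n\n".join(chunks[i:i + fan_in]))
                          for i in range(0, len(chunks), fan_in)]
            return chunks[0]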

    One possible avenue for counteracting the decreasing SNR (based on my own original research, though I've seen others disclose online that they are exploring the same path) is to have a second LLM in the loop, double-checking the result of the first one. This has some limitations, but I have successfully used this approach to verify that, for example, the LLM does not outright refuse to carry out a task. It is currently cost-prohibitive to do this in a way that would make me personally satisfied and confident enough in the output to run it full-auto, but I expect that the increasing ability to run AI locally will make people more willing to experiment with massive layering of cooperating LLM chains that check each other's work, cooperate, and/or even repeat work using different prompts to pick the best output, a la redundant avionics computers.
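
    A sketch of that "second LLM in the loop" verifier; the yes/no protocol and model choice are assumptions, not the commenter's actual setup:

        import openai

        def chat(prompt: str) -> str:
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp["choices"][0]["message"]["content"]

        def answer_with_verification(task: str) -> str:
            answer = chat(task)
            # A second, independent call judges the first model's output.
            verdict = chat(
                "Does the response below refuse or fail to carry out the task? "
                f"Reply YES or NO only.\n\nTask: {task}\n\nResponse: {answer}"
            )
            if verdict.strip().upper().startswith("YES"):
                raise RuntimeError("Verifier flagged a refusal; retry with a different prompt.")
            return answer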

    [0]: https://github.com/lupantech/chameleon-llm

  • hackGPT

    I leverage OpenAI and ChatGPT to do hackerish things

    Project mention: This is a Ghidra script that calls OPENAI to give meaning to decompiled functions. Another level of reverse engineering. | /r/redteamsec | 2023-05-09

    Also check https://github.com/NoDataFound/hackGPT

  • LongNet

    Implementation of plug-and-play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"

    Project mention: LongLlama | /r/LocalLLaMA | 2023-07-07

    If you want to talk immature-looking, LongNet wouldn't even compile. That's a big oof, considering it's Python, where even non-working code is usually enough to generate bytecode. (It also has a hard-coded dtype and device.)

  • Get-Things-Done-with-Prompt-Engineering-and-LangChain

    LangChain & Prompt Engineering tutorials on Large Language Models (LLMs) such as ChatGPT with custom data. Jupyter notebooks cover loading and indexing data, creating prompt templates, CSV agents, and using retrieval QA chains to query custom data, plus projects using a private LLM (Llama 2) for chat with PDF files and tweet sentiment analysis. (A minimal retrieval-QA sketch follows the project mention below.)

    Project mention: Get-Things-Done-with-Prompt-Engineering-and-LangChain: NEW Data - star count:383.0 | /r/algoprojects | 2023-09-28
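
    As referenced above, a minimal retrieval-QA sketch in the spirit of these tutorials, using the 2023-era LangChain API; the file name and model are placeholders, and the repo's own notebooks differ in detail:

        from langchain.chains import RetrievalQA
        from langchain.chat_models import ChatOpenAI
        from langchain.document_loaders import PyPDFLoader
        from langchain.embeddings import OpenAIEmbeddings
        from langchain.text_splitter import RecursiveCharacterTextSplitter
        from langchain.vectorstores import Chroma

        # Load a PDF, split it into chunks, and index the chunks in a vector store.
        docs = PyPDFLoader("paper.pdf").load()
        chunks = RecursiveCharacterTextSplitter(
            chunk_size=1000, chunk_overlap=100).split_documents(docs)
        store = Chroma.from_documents(chunks, OpenAIEmbeddings())

        # A retrieval QA chain answers questions against the indexed chunks.
        qa = RetrievalQA.from_chain_type(
            llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
            retriever=store.as_retriever())
        print(qa.run("What is the main contribution of the paper?"))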

  • ClassGPT

    ChatGPT for lecture slides

    Project mention: looking for a free chatbot with long term memory! | /r/OpenAI | 2023-03-06
  • voice-assistant-whisper-chatgpt

    This repository guides you through creating your own smart virtual assistant, like Google Assistant, using OpenAI's ChatGPT and Whisper. The entire solution is created using Python & Gradio.

    Project mention: Voice Assistant Chat bot? | /r/GPT3 | 2022-12-29

    First of all, credit goes to this guy for the speech>text>ChatGPT code. The rest is just example code from revChatGPT/OpenAIAuth/GoogleCloud docs.
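
    The core of that speech>text>ChatGPT pipeline, sketched with the official pre-1.0 openai client rather than the revChatGPT wrapper the commenter mentions; the file name and models are placeholders:

        import openai

        # 1. Transcribe recorded speech with Whisper.
        with open("question.wav", "rb") as audio_file:
            text = openai.Audio.transcribe("whisper-1", audio_file)["text"]

        # 2. Feed the transcript to the chat model and print its reply.
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": text}],
        )["choices"][0]["message"]["content"]
        print(reply)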

  • langforge

    A Toolkit for Creating and Deploying LangChain Apps

    Project mention: Langforge: A Toolkit for Creating and Deploying LangChain Apps | news.ycombinator.com | 2023-04-27
  • FastLoRAChat

    Instruct-tune LLaMA on consumer hardware with shareGPT data

    Project mention: [P] FastLoRAChat Instruct-tune LLaMA on consumer hardware with shareGPT data | /r/MachineLearning | 2023-04-18

    Announcing FastLoRAChat: training a ChatGPT-style model without an A100.

  • autogen

    Enable Next-Gen Large Language Model Applications. Join our Discord: https://discord.gg/pAbnFJrkgZ

    Project mention: AutoGen: Enabling next-generation large language model applications | news.ycombinator.com | 2023-09-26
  • ChatGPT-Python-Applications

    ChatGPT Python Applications integrated with third party libraries and modules

    Project mention: IBM-er shared 50 Python Projects (Trending GitHub) | dev.to | 2023-04-16

    ⭐ custom-chatbot: ask the chatbot to do custom work based on the task (e.g., script writer)

  • ChatLog

    ⏳ ChatLog: Recording and Analysing ChatGPT Across Time (by THU-KEG)

    Project mention: ChatLog: Recording and Analyzing ChatGPT Across Time | /r/BotNews | 2023-04-28

    While there is abundant research on evaluating ChatGPT on natural language understanding and generation tasks, few studies have investigated how ChatGPT's behavior changes over time. In this paper, we collect a coarse-to-fine temporal dataset called ChatLog, consisting of two parts that update monthly and daily: ChatLog-Monthly is a dataset of 38,730 question-answer pairs collected every month, including questions from both reasoning and classification tasks. ChatLog-Daily, on the other hand, consists of ChatGPT's responses to 1,000 identical questions for long-form generation every day. We conduct comprehensive automatic and human evaluations to provide evidence for the existence of ChatGPT's evolving patterns. We further analyze the unchanged characteristics of ChatGPT over time by extracting its knowledge and linguistic features. We find some stable features that improve the robustness of a RoBERTa-based detector on new versions of ChatGPT. We will continuously maintain our project at https://github.com/THU-KEG/ChatLog.

  • MusicWithChatGPT

    Tips and tools for writing music with the aid of ChatGPT

    Project mention: Writing music with ChatGPT | /r/ChatGPT | 2023-02-22

    I have created a GitHub repository with some tips and tools, including a Colab notebook to quickly copy-paste any ABC notation from ChatGPT and instantly download it as a MIDI file. I'm planning to collect other good tips and tools there as I figure them out or as they come along elsewhere.
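
    The ABC-to-MIDI step that notebook automates, sketched here with music21 (an assumption; the notebook may use a different converter), with a trivial hard-coded tune standing in for ChatGPT output:

        import textwrap
        from music21 import converter

        # A hard-coded tune stands in for ABC notation pasted from ChatGPT.
        abc = textwrap.dedent("""\
            X:1
            T:ChatGPT sketch
            M:4/4
            K:C
            C D E F | G A B c |""")

        score = converter.parse(abc, format="abc")  # parse the ABC text
        score.write("midi", fp="chatgpt_tune.mid")  # export as a MIDI file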

  • awesome-chatgpt-plugins

    A curated list of all the ChatGPT plugins available within ChatGPT Plus; includes detailed descriptions and usage docs, as well as unofficial sources of plugins (by HighwayofLife)

    Project mention: The ChatGPT Plugin Descriptions are Terrible, so I fixed them | /r/coolaitools | 2023-07-21
  • BLOOM-fine-tuning

    Finetune BLOOM

    Project mention: Bloom-fine-tuning with Stanford Alpaca | news.ycombinator.com | 2023-03-20
  • chatlab

    Bringing ChatGPT Plugins to your notebooks

    Project mention: ChatGPT Plugins for Jupyter Notebook | news.ycombinator.com | 2023-08-06

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2023-09-28.

Index

What are some of the best open-source chatgpt projects in Jupyter Notebook? This list will help you:

Project Stars
1 openai-cookbook 48,583
2 FinGPT 8,516
3 awesome-generative-ai 1,349
4 chameleon-llm 889
5 hackGPT 583
6 LongNet 579
7 Get-Things-Done-with-Prompt-Engineering-and-LangChain 400
8 ClassGPT 194
9 voice-assistant-whisper-chatgpt 192
10 langforge 153
11 FastLoRAChat 117
12 autogen 97
13 ChatGPT-Python-Applications 96
14 ChatLog 87
15 MusicWithChatGPT 75
16 awesome-chatgpt-plugins 72
17 BLOOM-fine-tuning 37
18 chatlab 34