Whitebox-Code-GPT
E2B
| | Whitebox-Code-GPT | E2B |
| --- | --- | --- |
| Mentions | 4 | 35 |
| Stars | 196 | 6,256 |
| Growth | - | 2.4% |
| Activity | 8.7 | 9.9 |
| Latest commit | 7 months ago | 5 days ago |
| Language | Dart | TypeScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Whitebox-Code-GPT
-
Open-source programming assistants
I’ve opened a repo to help accelerate AI programming assistants by open sourcing all of my instructions, knowledge files, and notes.
- Announcing Whitebox: The open-source community accelerating free programming assistants with GPT builder.
-
How To Make Money with ChatGPT
Speaking of which, this post was sponsored by the Whitebox project ;) https://github.com/Decron/Whitebox-Code-GPT
-
Introduction
Our current goals are to identify the largest blind spots in the default GPT models and write guides that can be used to improve functionality in those domains. If you would like a new GPT to be created, or would like to take custody of one, please open an issue with the title "New GPT request: " or "New GPT custody: ".
Existing models:
Git assistant (Decron): https://chat.openai.com/g/g-8z4fiuUqu-git-assistant
Flutter GPT (Decron): https://chat.openai.com/g/g-u27ZCAhaF-flutter-gpt
Python GPT (Decron): https://chat.openai.com/g/g-c188mmoYi-python-gpt
C# (PrimeEagle): Coming soon
Requesting custodians for: Python (data science), Rust, Go, Unity.
This project is very new, so please excuse the clutter. This is an exciting new opportunity, and we're working as fast as possible to accelerate the capabilities of these models.
How does it work?
- Background
AI models can accelerate a developer's abilities by suggesting improvements and providing context about technical details. A key flaw, however, is that they are not continuously up to date on best practices for every domain. Because of this, all models have blind spots that limit their full potential. This project aims to combat those flaws by creating knowledge files and instructions that are purpose-designed to fill the gaps in a model's knowledge.
Purpose and Functionality
Expanded context: The latest generation of multimodal LLMs can parse massive files that would typically overwhelm their context windows. If information is structured correctly, this can vastly increase the amount of knowledge available to a model when working in a known field.
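To illustrate what "structured correctly" could mean in practice, here is a minimal sketch (a hypothetical helper, not part of this project) that splits a large markdown knowledge file into header-keyed sections, so only the relevant chunk needs to enter the context window:

```python
import re

def split_knowledge_file(text: str) -> dict[str, str]:
    """Split a markdown knowledge file into sections keyed by header,
    so a model (or a retrieval step) can pull in only the relevant chunk."""
    sections: dict[str, str] = {}
    current = "preamble"
    lines: list[str] = []
    for line in text.splitlines():
        m = re.match(r"^#+\s+(.*)", line)
        if m:  # a new header starts a new section
            sections[current] = "\n".join(lines).strip()
            current, lines = m.group(1).strip(), []
        else:
            lines.append(line)
    sections[current] = "\n".join(lines).strip()
    return sections

doc = "# Widgets\nStateless vs stateful.\n# Routing\nUse Navigator 2.0."
parts = split_knowledge_file(doc)  # {'preamble': '', 'Widgets': ..., 'Routing': ...}
```

The same idea scales to any sectioned reference material: the file stays large, but each lookup stays small.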
Specialization: Each knowledge file is dedicated to a particular entity or topic, providing in-depth information about it. This could include historical data, technical specifications, or any relevant details.
Integration with GPT: These files are designed to be integrated into the GPT model's existing knowledge base, augmenting its ability to generate accurate and contextually relevant responses about the specific entities.
Content Organization: Information within these files is usually organized in a hierarchical or relational manner, allowing the model to understand the connections between different pieces of data.
Creation and Maintenance
Data Sourcing: The information in these files is compiled from reliable sources, ensuring accuracy and relevancy. Experts in a given framework are welcome to contribute new knowledge files or improvements to how the models operate.
Regular Updates: To maintain the relevance of the information, these knowledge files are regularly updated with the latest data.
Quality Assurance: Rigorous checks are conducted to ensure the accuracy of the information. A secondary goal of this project is to develop automated testing so that widespread functionality can be guaranteed for all models.
Impact on GPT Performance
Enhanced Accuracy: By having direct access to detailed information, the GPT model can provide better and more accurate responses.
Efficiency in Data Retrieval: Since the data is structured and tailored for quick retrieval, the response time can be faster for queries related to these entities.
Customization: This approach allows for customization of the GPT model’s responses based on the specific requirements of the application or domain.
Challenges and Considerations
Bias and Reliability: Care must be taken to avoid introducing biases into the GPT model through these knowledge files.
Scalability: As the number of entities increases, maintaining and updating these files can become challenging. We will rely on members of the community to support our growing ecosystem by taking custody of new models if additional specialization is required.
Applications
General: Integrating enhanced GPT capabilities will significantly improve the user experience, especially in applications where specialized knowledge is a key component of user interactions. The design should ensure seamless integration of knowledge files.
Industry-Specific Uses: For industries like healthcare, finance, or law, where specialized knowledge is vital, these files can greatly enhance the model's performance.
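To make the hierarchical knowledge-file idea concrete, here is a minimal sketch of a structured store with path-based lookup. The entries and names are hypothetical illustrations, not actual Whitebox files:

```python
# Minimal sketch of a hierarchical knowledge store (entries are hypothetical).
knowledge = {
    "flutter": {
        "state_management": {
            "riverpod": "Compile-safe rework of Provider.",
            "bloc": "Event-driven streams; suits large teams.",
        },
        "null_safety": "Sound null safety arrived with Dart 2.12.",
    },
}

def lookup(store: dict, path: str):
    """Retrieve an entry by a slash-separated path,
    e.g. 'flutter/null_safety' or 'flutter/state_management'."""
    node = store
    for key in path.split("/"):
        node = node[key]  # raises KeyError for unknown topics
    return node

lookup(knowledge, "flutter/null_safety")
```

Because each topic lives at a predictable path, retrieval is a handful of dictionary hops rather than a scan of the whole file, which is what makes the structured approach cheap at query time.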
Custodial process:
Each bot is assigned a custodian to manage its state and field questions. Custodians are considered the subject matter experts for their given technology and are the sole deciders of what content is included in the official model.
admin: The admin will assess possible candidates and grant ownership to the most qualified one. The admin is the sole decider of who the official custodian of a bot is, but should seek out the opinions of the community before adding or revoking custodianship.
custodian: If you are interested in becoming a custodian, open an issue for the language or framework you wish to claim, and begin preparing your bot. Once you are granted access, duplicate the template folder and configure the files within to reflect the state of your bot.
admin: Once the bot is complete and a link is provided, the admin will update the directory in this file to include the new bot. The admin will then issue and close a pull request to update the main branch with the new model.
revoking custodianship: If a custodian wishes to forfeit custodianship of a bot, we ask that they participate in finding a suitable replacement. Once found, we will grant them access and update the directory to reflect the change of ownership.
revoking adminship: we'll cross that bridge when we come to it 😧
Making and maintaining bots:
Activity: Once custodianship is granted, you're free to update your bot however you see fit. We just ask that you make a reasonable effort to aggregate user requests and improve your model, especially during periods of high activity, such as when the underlying model changes or a major revision of a language is released.
Standards: The custodian has the final say on the name and description of a bot, but we ask that both be descriptive and that the description feature a link to this repo. For instance: "Flutter development made easy. Maintained by The Hadrio Group at https://github.com/Decron/FlutterGPT"
Experimentation: It may be beneficial to create a backup bot to experiment with.
"I don't like reading; isn't there just a GPT that will spoonfeed this to me?"
Yes: https://chat.openai.com/g/g-cwigWCh11-code-gpt-gpt
E2B
-
Ask HN: Who is hiring? (May 2024)
E2B | https://e2b.dev | San Francisco, CA | Full-time | In-person
[E2B](https://e2b.dev) is building a secure open-source runtime that will power the next billion AI apps & agents.
We found early traction by making it easy for developers to add [code interpreting](https://github.com/e2b-dev/code-interpreter) to their AI apps with our SDK built on top of our [agentic runtime](https://github.com/e2b-dev/e2b). We have paying customers ranging from seed-stage to enterprise companies.
We're hiring:
- Frontend/Product Engineer
- Infrastructure Engineer
Check the roles here https://e2b.dev/careers
-
Show HN: Add AI code interpreter to any LLM via SDK
Hi, I'm the CEO of the company that built this SDK.
We're a company called E2B [0]. We're building open-source [1] secure environments for running untrusted AI-generated code and AI agents. We call these environments sandboxes, and they are built on top of a microVM technology called Firecracker [2].
You can think of us as giving small cloud computers to LLMs.
We recently created a dedicated SDK for building custom code interpreters in Python or JS/TS. We saw this need after many of our users added code execution capabilities to their AI apps with our core SDK [3]. These use cases were often centered on AI data analysis, so code-interpreter-like behavior made sense.
The way our code interpreter SDK works is by spawning an E2B sandbox running a Jupyter server. We then communicate with this Jupyter server through the Jupyter kernel messaging protocol [4].
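For readers unfamiliar with it, an `execute_request` in the Jupyter kernel messaging protocol is a JSON message with `header`, `parent_header`, `metadata`, and `content` blocks. Here is a sketch of building one; this shows the public protocol shape (transport framing and HMAC signing omitted), not E2B's internal code:

```python
import uuid
from datetime import datetime, timezone

def execute_request(code: str, session: str) -> dict:
    """Build an 'execute_request' message in the shape defined by the
    Jupyter kernel messaging protocol (spec version 5.x)."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "session": session,
            "username": "user",
            "date": datetime.now(timezone.utc).isoformat(),
            "msg_type": "execute_request",
            "version": "5.3",
        },
        "parent_header": {},  # empty: this message is not a reply
        "metadata": {},
        "content": {
            "code": code,              # the code the kernel should run
            "silent": False,
            "store_history": True,
            "user_expressions": {},
            "allow_stdin": False,
            "stop_on_error": True,
        },
    }

msg = execute_request("print(1 + 1)", session=uuid.uuid4().hex)
```

The kernel answers on separate channels: an `execute_reply` with a status, plus `stream`/`execute_result` messages on the IOPub channel carrying the actual output.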
We don't do any wrapping around the LLM, any prompting, or any agent-like framework. We leave all of that to users. We're really just a boring code execution layer that sits at the bottom, built specifically for the future software that will be building other software. We work with any LLM. Here's how we added a code interpreter to Claude [5].
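The "boring execution layer" split might look like the sketch below. `FakeSandbox` is a stand-in for a real isolated runtime (e.g. a Firecracker microVM); calling `exec()` in-process is NOT sandboxing and is for illustration only:

```python
import io
import contextlib

class FakeSandbox:
    """Stand-in for an isolated runtime. Running exec() locally is NOT
    secure; a real sandbox would isolate this step in a separate VM."""
    def run_code(self, code: str) -> str:
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # the step a real sandbox isolates
        return buf.getvalue()

def interpret(llm_output: str, sandbox: FakeSandbox) -> str:
    """The layer below the LLM: take model-generated code, execute it,
    hand back stdout. No prompting, no agent framework."""
    return sandbox.run_code(llm_output)

result = interpret("print(sum(range(5)))", FakeSandbox())  # "10\n"
```

The point of the design is the narrow interface: any LLM that emits code can sit on top, and the execution layer never needs to know which model produced the input.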
Our long-term plan is to build an automated AWS for AI apps and agents.
Happy to answer any questions and hear feedback!
[0] https://e2b.dev/
[1] https://github.com/e2b-dev
[2] https://github.com/firecracker-microvm/firecracker
[3] https://e2b.dev/docs
[4] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
[5] https://github.com/e2b-dev/e2b-cookbook/blob/main/examples/c...
- Open Source Python Code Interpreter for Any LLM
- Show HN: Open-Source Infrastructure for AI Code Interpreters
-
We're building cloud runtime for AI agents and gradually open-sourcing everything
Hey folks, we're building an open source runtime for AI agents at E2B.
- Show HN: Run LLM-generated code in sandboxed envs
- Sandboxed cloud environments for AI agents & apps with a single line of code
- We're building a cloud for AI agents & AI apps, It's free and we're gradually open-sourcing the infra. Would love to hear your feedback!
- [P] We're building a cloud for AI agents & AI apps, It's free and we're gradually open-sourcing the infra. Would love to hear your feedback!
What are some alternatives?
Flutter-AI-Rubik-cube-Solver - Flutter-Python Rubik's cube solver.
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
chatgpt-shell - ChatGPT and DALL-E Emacs shells + Org babel 🦄 + a shell maker for other providers
magentic - Seamlessly integrate LLMs as Python functions
IncognitoPilot - An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
Selefra - The open-source policy-as-code software that provides analysis for Multi-Cloud and SaaS environments, you can get insight with natural language (powered by OpenAI).
JARVIS - JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
rapidpages - Generate React and Tailwind components using AI
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
awesome-chatgpt - 🧠 A curated list of awesome ChatGPT resources, including libraries, SDKs, APIs, and more. 🌟 Please consider supporting this project by giving it a star.