| | openai-node | aiconfig |
|---|---|---|
| Mentions | 22 | 29 |
| Stars | 7,017 | 862 |
| Growth | 3.7% | 5.6% |
| Activity | 9.5 | 9.7 |
| Latest commit | 3 days ago | 7 days ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openai-node
-
Website Optimization Using Strapi, Astro.js and OpenAI
Okay, now that we've confirmed the API endpoint is working, let's connect it to OpenAI. First, install the OpenAI package: navigate to the route directory and run the command below in our terminal
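The install-and-call flow described above comes down to two steps. Here is a minimal sketch assuming the openai-node v4 SDK and an `OPENAI_API_KEY` environment variable; the model name is only an example:

```typescript
// Install the SDK first:  npm install openai
// Minimal first-request sketch (assumes openai-node v4 and OPENAI_API_KEY).
// The request parameters are plain data, shown separately so the shape is clear:
const params = {
  model: "gpt-3.5-turbo", // example model name
  messages: [{ role: "user" as const, content: "Hello, OpenAI!" }],
};

// The actual call, commented out so this snippet runs without a key:
// import OpenAI from "openai";
// const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
// const completion = await client.chat.completions.create(params);
// console.log(completion.choices[0].message.content);

console.log(params.messages[0].content); // Hello, OpenAI!
```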
-
JSON {} With OpenAI 🤖✨
For my setup, I am using the Node version of the OpenAI SDK.
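For reference, JSON mode in the Node SDK is enabled by passing `response_format: { type: "json_object" }` to the chat completions call. The sketch below shows the parameter shape and the parse step, with the network call itself commented out; the model name is an example, and json_mode requires the prompt to mention JSON:

```typescript
// Chat completion parameters with JSON mode enabled (sketch, openai-node v4).
const params = {
  model: "gpt-3.5-turbo-1106", // example model that supports json_mode
  response_format: { type: "json_object" as const },
  messages: [
    // json_mode requires the word "JSON" to appear somewhere in the prompt
    { role: "user" as const, content: "Reply in JSON with a \"colors\" array of two color names." },
  ],
};

// const completion = await client.chat.completions.create(params);
// const raw = completion.choices[0].message.content ?? "{}";

// Parsing works the same either way; here with a stand-in response:
const raw = '{"colors": ["red", "blue"]}';
const parsed = JSON.parse(raw) as { colors: string[] };
console.log(parsed.colors.length); // 2
```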
-
The Stainless SDK Generator
We try to keep it to a minimum, especially in JS (though we have some nice improvements coming soon when we deprecate node-fetch in favor of built-in fetch). The package sizes aren't tiny because we include thorough types and sourcemaps, but the bundle sizes are fairly tidy.
Here's an example of a typical RESTful endpoint (Lithic's `client.cards.create()`):
https://github.com/lithic-com/lithic-node/blob/36d4a6a70597e...
Here are some example repos produced by Stainless:
1. https://github.com/openai/openai-node
-
OpenAI: Streaming is now available in the Assistants API
Have you seen/tried the `.runTools()` helper?
Docs: https://github.com/openai/openai-node?tab=readme-ov-file#aut...
Example: https://github.com/openai/openai-node/blob/bb4bce30ff1bfb06d...
(if what you're fundamentally trying to do is really just get JSON out, then I can see how json_mode is still easier).
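As a rough sketch of what `.runTools()` usage looks like, based on the openai-node README: the helper calls your local function with the model's arguments, feeds the result back, and loops until the model produces a final answer. The tool function and its schema here are hypothetical examples, and the SDK call is commented out so the snippet runs offline:

```typescript
// A local function the model can call (hypothetical example tool).
function getWeather(args: { city: string }): { city: string; tempC: number } {
  return { city: args.city, tempC: 21 }; // stand-in data
}

// Sketch of wiring it up with the SDK's runTools helper (commented out;
// requires openai-node v4 and an API key):
// const runner = client.beta.chat.completions.runTools({
//   model: "gpt-3.5-turbo",
//   messages: [{ role: "user", content: "What's the weather in Paris?" }],
//   tools: [{
//     type: "function",
//     function: {
//       function: getWeather,
//       parse: JSON.parse, // parses the model's JSON arguments
//       parameters: {
//         type: "object",
//         properties: { city: { type: "string" } },
//         required: ["city"],
//       },
//     },
//   }],
// });
// console.log(await runner.finalContent());

console.log(getWeather({ city: "Paris" }).tempC); // 21
```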
-
OpenAI has Text to Speech Support now!
And so, I impulsively upgraded to the latest version of openai (I guess not anymore) without the fear of getting cut by cutting edge 😝 and got it working for some random text
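The text-to-speech call in the Node SDK is a single request that returns binary audio. A minimal sketch following the shape of the v4 SDK, with the call and file write commented out so it runs without a key:

```typescript
// Parameters for a text-to-speech request (sketch, openai-node v4).
const speechParams = {
  model: "tts-1",          // example TTS model name
  voice: "alloy" as const, // one of the built-in voices
  input: "Hello from the speech API!",
};

// import fs from "node:fs";
// const response = await client.audio.speech.create(speechParams);
// fs.writeFileSync("speech.mp3", Buffer.from(await response.arrayBuffer()));

console.log(speechParams.voice); // alloy
```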
-
AI for Web Devs: Faster Responses with HTTP Streaming
UPDATE 2023/11/15: I used fetch and custom streams because at the time of writing, the openai module on NPM did not properly support streaming responses. That issue has been fixed, and I think a better solution would be to use that module and pipe their data through a TransformStream to send to the client. That version is not reflected here.
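The approach the update describes, piping the SDK's stream through a TransformStream before sending it to the client, can be sketched as below. The chunk shape follows the chat-completions streaming delta format; the chunks themselves are mocked here so the example runs locally without the SDK:

```typescript
import { ReadableStream, TransformStream } from "node:stream/web";

// Simplified shape of a chat-completions streaming chunk.
type Chunk = { choices: { delta: { content?: string } }[] };

// TransformStream that turns SDK stream chunks into plain text for the client.
function makeTextTransform(): TransformStream<Chunk, string> {
  return new TransformStream({
    transform(chunk, controller) {
      const delta = chunk.choices[0]?.delta.content;
      if (delta) controller.enqueue(delta);
    },
  });
}

// With the real SDK you would pipe the result of
// `client.chat.completions.create({ stream: true, ... })` through this;
// here the chunks are mocked so the example runs locally.
function makeMockChunks(): ReadableStream<Chunk> {
  return new ReadableStream({
    start(controller) {
      for (const text of ["Hel", "lo ", "world"]) {
        controller.enqueue({ choices: [{ delta: { content: text } }] });
      }
      controller.close();
    },
  });
}

async function collect(): Promise<string> {
  const reader = makeMockChunks().pipeThrough(makeTextTransform()).getReader();
  let out = "";
  for (let r = await reader.read(); !r.done; r = await reader.read()) out += r.value ?? "";
  return out;
}

collect().then((text) => console.log(text)); // Hello world
```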
-
AI for Web Devs: Your First API Request to OpenAI
You may notice the JavaScript package available on NPM called openai. We will not be using this, as it doesn’t quite support some things we’ll want to do that fetch can.
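Calling the API with plain `fetch`, as the article does, is just a POST to the chat completions endpoint. A minimal sketch, with the endpoint and payload shape per the public REST API and the network call guarded so the snippet runs without a key (model name is an example):

```typescript
// Calling the chat completions endpoint with plain fetch (no SDK; Node 18+).
const body = {
  model: "gpt-3.5-turbo", // example model name
  messages: [{ role: "user", content: "Hello!" }],
};

async function callOpenAI(apiKey: string): Promise<unknown> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  return res.json();
}

// Only hit the network when a key is actually present:
const key = process.env.OPENAI_API_KEY;
if (key) callOpenAI(key).then((json) => console.log(JSON.stringify(json)));
else console.log("OPENAI_API_KEY not set; skipping request");
```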
-
Building and deploying AI agents with E2B
openai - For using the GPT-3.5-turbo model to answer the questions
-
Aiconfig – source control format for gen AI prompts, models and settings
We have a bit of context about this in the readme: https://github.com/lastmile-ai/aiconfig#what-problem-it-solv.... The main issue with keeping it in code is that it tangles application code with prompts and model-specific logic.
That makes it hard to evaluate the genAI parts of the application, and iterating on the prompts is not as straightforward as opening up a playground.
Having the config be the source of truth lets you connect it to your application code (while keeping it source controlled), lets you evaluate the config as the AI artifact, and also lets you open the config in a playground to edit and iterate.
For example, compare how much simpler openai function calling becomes when the prompts and settings are stored as a config: https://github.com/lastmile-ai/aiconfig/blob/main/cookbooks/... vs using vanilla openai directly (https://github.com/openai/openai-node/blob/v4/examples/funct...)
-
Build a Chatbot With OpenAI, Vercel AI and Xata
In your preferred serverless environment, make sure you install the OpenAI API Library and Vercel AI library to get started.
aiconfig
-
VS Code: Prompt Editor for LLMs (GPT4, Llama, Mistral, etc.)
doesn't collect prompts and there's a way to disable telemetry as well - https://github.com/lastmile-ai/aiconfig/blob/8a5a59d47cef474...
-
Show HN: Gradio Notebook– Notebook UX for Any Generative AI in Hugging Face
You can! We artificially limited that in the gradio notebook component right now, but if you check out the core package it’s built on (aiconfig), it shows how to use OpenAI, Claude, and other model providers:
https://github.com/lastmile-ai/aiconfig
Here’s the currently supported list of models: https://aiconfig.lastmileai.dev/docs/overview/model-parsers
Let me know if we should enable non-HF models in the gradio notebook component by default as well — it should be a simple change
- Gradio Notebook – Generative AI Notebook Interface for Hugging Face Spaces
-
Prompt Routing with Zeroshot Technique - AiConfig
AiConfig Web | AiConfig Open Source
-
Trend Detection and Analysis with the AiConfig
One of the primary reasons for using AiConfig is its ease of use and the highly configurable nature of its prompts and their settings, including configurable LLM usage.
-
Playground for Generative AI
Check out prompt templates and the AIConfig framework here: https://github.com/lastmile-ai/aiconfig
-
Master Prompt Engineering with OpenAI 🧠
Please give our AIConfig repo a star to support the project! https://github.com/lastmile-ai/aiconfig
-
Show HN: Microagents: Agents Capable of Self-Editing Their Prompts / Python Code
Take a look at https://github.com/lastmile-ai/aiconfig for refactoring your prompt management - https://github.com/aymenfurter/microagents/tree/main/prompt_...
-
Master LLM Hallucinations 💭
Show your support by starring our project on GitHub! ⭐️ https://github.com/lastmile-ai/aiconfig
-
Harness the power of multiple LLMs
⭐️ We recently launched AIConfig as our first open-source project. Please support us with a star ⭐️ on Github! https://github.com/lastmile-ai/aiconfig/tree/main
What are some alternatives?
liboai - A C++17 library to access the entire OpenAI API.
speech-ai - A Python package that generates conversational speech from text using a combination of Generative AI models and various text-to-speech engines, enabling applications to produce dynamic and contextually aware spoken responses.
openai-python - The official Python library for the OpenAI API
LLMStack - No-code platform to build LLM Agents, workflows and applications with your data
fern - 🌿 Stripe-level SDKs and Docs for your API
Stable-Diffusion-Latent-Space-Explorer - Codebase for performing various experiments with Stable Diffusion, supported by the diffusers library.
vrite - Open-source developer content platform
microagents - Agents Capable of Self-Editing Their Prompts / Python Code
tiptap - The headless rich text editor framework for web artisans.
SlothAI - A simple, but deceptively fast document pipeline manager for AI. Runs Python on AppEngine.
ai - Build AI-powered applications with React, Svelte, Vue, and Solid [Moved to: https://github.com/vercel/ai]
llm-client-sdk - SDK for using LLM