1. OpenAI has done a good job of adding guardrails for its models. LLaMA Guard helped confirm this.
2. What makes this really cool is that I may have a very specific set of policies I want to enforce ON TOP of the standard guardrails a model ships with. LLaMA Guard makes this possible.
3. This kind of model chaining (passing responses from OpenAI models to LLaMA Guard) is becoming increasingly common, and I think we'll see even more complex pipelines in the near future. It helped to have a consistent interface to store this multi-model pipeline as a config, especially because that same config also contains my safety taxonomy.
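The chaining in (3) boils down to: run the chat model, feed both the user message and the model's response to LLaMA Guard, and only return the response if the verdict is "safe". Here's a minimal sketch of that pattern; the helper names and the callable-based wiring are illustrative assumptions, not the cookbook's actual API (in the repo, the pipeline is driven from the AIConfig file instead):

```python
# Sketch of the chat-model -> LLaMA Guard chaining pattern. The model calls
# are injected as plain callables so any client (OpenAI SDK, llama.cpp, an
# AIConfig runtime) can be plugged in. Names here are hypothetical.

def parse_guard_verdict(guard_output: str) -> tuple[bool, list[str]]:
    """Parse LLaMA Guard's output: 'safe', or 'unsafe' followed on the
    next line by comma-separated violated category codes (e.g. 'O3')."""
    lines = guard_output.strip().splitlines()
    if not lines or lines[0].strip().lower() != "unsafe":
        return True, []
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]

def guarded_completion(user_message, call_chat_model, call_llama_guard,
                       refusal="Sorry, I can't help with that."):
    """Run the chat model, then screen its response with LLaMA Guard.

    call_chat_model(user_message) -> model response (str)
    call_llama_guard(user_message, response) -> raw guard verdict (str)
    """
    response = call_chat_model(user_message)
    is_safe, _violated = parse_guard_verdict(
        call_llama_guard(user_message, response)
    )
    return response if is_safe else refusal
```

The nice part of keeping this in a single config, as the post describes, is that swapping the chat model or editing the safety taxonomy doesn't require touching the chaining code at all.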
Try it out yourself:
GitHub: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/LLaMA-Guard
Colab: https://colab.research.google.com/drive/1CfF0Bzzkd5VETmhsniksSpekpS-LKYtX
YouTube: https://www.youtube.com/watch?v=XxggqoqIVdg
Would love the community's feedback on the overall approach.