Marvin Alternatives
Similar projects and alternatives to marvin
- cointop: A fast and lightweight interactive terminal-based UI application for tracking cryptocurrencies 🚀
- glances: Glances, an eye on your system. A top/htop alternative for GNU/Linux, BSD, macOS, and Windows.
- aria2: A lightweight multi-protocol, multi-source, cross-platform, command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent, and Metalink.
- textual: Textual is a Rapid Application Development framework for Python. Build sophisticated user interfaces with a simple Python API. Run your apps in the terminal and (coming soon) a web browser!
marvin reviews and mentions
- 4-Apr-2023
Marvin: a batteries-included library for building AI-powered software. Marvin's job is to integrate AI directly into your codebase by making it look and feel like any other function (https://github.com/PrefectHQ/marvin)
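To make the "look and feel like any other function" idea concrete, here is a toy reconstruction of the AI-function pattern, not Marvin's actual implementation: the decorator turns a function's signature and docstring into a prompt, and a (stubbed) model call produces the return value. `fake_llm`, `ai_fn_sketch`, and `list_fruits` are all hypothetical names; the stub lets the sketch run offline, whereas Marvin's real decorator calls OpenAI and validates the response.

```python
# Toy sketch of the "AI function" pattern: signature + docstring become a
# prompt, and the model's reply becomes the return value. fake_llm is a
# stand-in so this runs offline; a real implementation would call an LLM
# API and parse/validate its output.
import inspect
from ast import literal_eval
from functools import wraps


def fake_llm(prompt: str) -> str:
    # Hypothetical stub: a real version would send `prompt` to a model.
    return "['apple', 'banana', 'cherry']"


def ai_fn_sketch(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        bound = inspect.signature(fn).bind(*args, **kwargs)
        prompt = (
            f"You are the function {fn.__name__}{inspect.signature(fn)}.\n"
            f"Docstring: {fn.__doc__}\n"
            f"Arguments: {dict(bound.arguments)}\n"
            "Respond with only the Python literal for the return value."
        )
        # Parse the model's reply as a Python literal (safer than eval).
        return literal_eval(fake_llm(prompt))

    return wrapper


@ai_fn_sketch
def list_fruits(n: int) -> list:
    """Generate a list of n fruit names."""


print(list_fruits(3))  # → ['apple', 'banana', 'cherry'] (from the stub)
```

The function body is just a docstring: the "implementation" is the prompt, which is the shape of API the post describes.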
- Magic - AI functions for Typescript
Sure! I was inspired by this Python library: https://github.com/PrefectHQ/marvin
- Show HN: A ChatGPT TUI with custom bots
I see Langchain has support for Azure chat models, and Marvin is built on Langchain so it may not be so difficult! Tracking issue here: https://github.com/PrefectHQ/marvin/issues/189
- FLaNK Stack Weekly 3 April 2023
- Show HN: Marvin – build AI functions that use an LLM as a runtime
Check out this example from the docs to see how to take a URL as argument and then pass content to the LLM: https://www.askmarvin.ai/guide/concepts/ai_functions/#sugges...
(The previous example is also good)
A few things you could consider:
1. We have a utility for getting content out of HTML at marvin.utilities.strings.html_to_content. That would probably significantly compress it.
2. Chunk the HTML into batches that fit in context, send each over with an AI function that summarizes it (you could instruct the AI function to optimize the summary to help with title generation), then send all the resulting summaries to a title generator
3. We have a suite of HTML loader classes that will probably be ready for production in a couple releases (see https://github.com/PrefectHQ/marvin/blob/main/src/marvin/loa...) but you could try them out now (note: these use parts of Marvin beyond just AI functions, so I'm not recommending it as a drop-in right now). Our loader classes are (ideally) designed to do more than just chunk the input; depending on the nature of the input we do different preprocessing steps to help with insight.
4. Experiment and let us know what you learn - we can incorporate it into a loader class if it's effective
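Step 2 above can be sketched in plain Python. Everything here is illustrative: `summarize` is a placeholder for an AI summarizer (truncation stands in for the model call), and the chunking is a simple character-based split, not whatever Marvin's loaders actually do (real chunking would size batches by tokens).

```python
# Illustrative chunk-then-summarize pipeline for step 2. A real version
# would replace summarize() with an AI-function summarizer and split on
# tokens rather than characters.
def chunk_text(text: str, max_chars: int) -> list[str]:
    """Greedy character-based chunking."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def summarize(chunk: str) -> str:
    # Placeholder for an AI summarizer: just keep the first 40 characters.
    return chunk[:40]


def condense_for_title(text: str, max_chars: int = 1000) -> str:
    # Summarize each chunk, then join the summaries so the combined text
    # fits in one context window for a final title-generation pass.
    return " ".join(summarize(c) for c in chunk_text(text, max_chars))
```

The joined summaries would then be handed to a title-generator AI function as step 2 describes.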
Here https://github.com/PrefectHQ/marvin/blob/main/examples/end-t... the prompt says `instructions=(` …
Hi!
This example was produced using GPT-3.5 Turbo, and yes, that model does not always follow instructions perfectly. I used 3.5 for the example since that's Marvin's default and I knew many people wouldn't have GPT-4 access yet (GPT-4 is significantly better at following instructions); I didn't want to set a misleading expectation.
That said, my instructions for the bot in this example certainly could have been more precise :) For a more realistic example that works pretty well on 3.5, check out https://github.com/PrefectHQ/marvin/blob/main/examples/load_...
Thanks!
Caching is highly requested! We have an issue open (https://github.com/PrefectHQ/marvin/issues/102) and expect to tackle it soon.
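Until built-in caching lands, one hypothetical stopgap is memoizing at the call site with `functools.lru_cache`. This is a sketch under a strong assumption: identical arguments should always yield the same answer, which only really holds at temperature 0 or when staleness is acceptable. `expensive_fn` stands in for an AI function.

```python
# Hypothetical stopgap while built-in caching (issue #102) is open:
# memoize calls with functools.lru_cache. Only sensible when identical
# inputs should map to identical outputs.
from functools import lru_cache

calls = 0  # count underlying invocations to show the cache working


@lru_cache(maxsize=128)
def expensive_fn(topic: str) -> str:
    # Stand-in for an AI function; a real one would call the model here.
    global calls
    calls += 1
    return f"summary of {topic}"


expensive_fn("python")
expensive_fn("python")  # second call is served from the cache
print(calls)  # → 1
```

Note `lru_cache` requires hashable arguments, so this wrapper would not work on AI functions that take lists or dicts without converting them first.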
You can set the temperature today (sorry, we haven't documented all the settings yet) via the env var `MARVIN_OPENAI_MODEL_TEMPERATURE=0.2` or at runtime with `marvin.settings.openai_model_temperature = 0.2`. Note that the temperature is fixed when a bot / ai_fn is created, not when it's called, so you need to set it early.
We have a related issue open (https://github.com/PrefectHQ/marvin/issues/64) but haven't designed anything yet.
Stats
PrefectHQ/marvin is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of marvin is Python.