marvin
the-algorithm-ml
| | marvin | the-algorithm-ml |
|---|---|---|
| Mentions | 16 | 36 |
| Stars | 4,601 | 9,863 |
| Growth | 6.3% | 0.4% |
| Activity | 9.9 | 10.0 |
| Latest commit | about 6 hours ago | 6 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marvin
-
Show HN: Magentic – Use LLMs as simple Python functions
Seems a lot like https://github.com/PrefectHQ/marvin?
The prompting you do seems an awful lot like:
Yes, similar ideas. Marvin [asks the LLM to mimic the Python function](https://github.com/PrefectHQ/marvin/blob/f37ad5b15e2e77dd998...), whereas in magentic the function signature just represents the inputs/outputs of the prompt-template/LLM, so the LLM “doesn’t know” that it is pretending to be a Python function: you specify all the prompts.
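The distinction above can be sketched with a small, entirely hypothetical decorator (this is not the real API of either Marvin or magentic): the function body is never shown to the LLM; the signature only binds call arguments into a user-written prompt template.

```python
# Hypothetical sketch of the "signature binds inputs to a prompt template"
# approach. No LLM is called here; we just build the prompt that *would*
# be sent, to show the function body plays no role.

def prompt_template(template: str):
    """Hypothetical decorator: format the template with the call's arguments."""
    def wrap(fn):
        def inner(**kwargs):
            # In a real library this string would be sent to an LLM and the
            # response parsed into the annotated return type.
            return template.format(**kwargs)
        return inner
    return wrap

@prompt_template("Translate {text} to {language}.")
def translate(text: str, language: str) -> str: ...

# translate(text="hello", language="French") builds the explicit prompt
# "Translate hello to French." — the LLM never sees a Python function.
```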
-
4-Apr-2023
Marvin: a batteries-included library for building AI-powered software. Marvin's job is to integrate AI directly into your codebase by making it look and feel like any other function (https://github.com/PrefectHQ/marvin)
-
Magic - AI functions for Typescript
Sure! I was inspired by this Python library: https://github.com/PrefectHQ/marvin
-
Show HN: A ChatGPT TUI with custom bots
I see Langchain has support for Azure chat models, and Marvin is built on Langchain so it may not be so difficult! Tracking issue here: https://github.com/PrefectHQ/marvin/issues/189
- FLaNK Stack Weekly 3 April 2023
-
Show HN: Marvin – build AI functions that use an LLM as a runtime
Check out this example from the docs to see how to take a URL as argument and then pass content to the LLM: https://www.askmarvin.ai/guide/concepts/ai_functions/#sugges...
(The previous example is also good)
A few things you could consider:
1. We have a utility for getting content out of HTML at marvin.utilities.strings.html_to_content. That would probably significantly compress it.
2. Chunk the HTML into batches that fit in context, send each over with an AI function that summarizes it (you could instruct the AI function to optimize the summary to help with title generation), then send all the resulting summaries to a title generator
3. We have a suite of HTML loader classes that will probably be ready for production in a couple releases (see https://github.com/PrefectHQ/marvin/blob/main/src/marvin/loa...) but you could try them out now (note: these use parts of Marvin beyond just AI functions, so I'm not recommending it as a drop-in right now). Our loader classes are (ideally) designed to do more than just chunk the input; depending on the nature of the input we do different preprocessing steps to help with insight.
4. Experiment and let us know what you learn - we can incorporate it into a loader class if its effective
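Suggestion 2 above can be sketched as a simple map-reduce over chunks. The `summarize` and `make_title` stubs here are placeholders for AI functions (hypothetical names, not Marvin's API); only the chunking and orchestration are real code.

```python
# Sketch: split content so each piece fits in the model context, summarize
# each chunk, then pass the summaries to a title generator.

def chunk_text(text: str, max_chars: int = 3000) -> list[str]:
    """Split text into consecutive pieces no longer than max_chars."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(chunk: str) -> str:
    # Placeholder: a real AI function would return an LLM-written summary,
    # optionally instructed to optimize the summary for title generation.
    return chunk[:100]

def make_title(summaries: list[str]) -> str:
    # Placeholder: a real AI function would combine the summaries into a title.
    return " ".join(summaries)

def title_for(content: str) -> str:
    chunks = chunk_text(content)
    summaries = [summarize(c) for c in chunks]
    return make_title(summaries)
```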
Here https://github.com/PrefectHQ/marvin/blob/main/examples/end-t... the prompt says `instructions=(`
Hi!
This example was produced using GPT-3.5-turbo, and yes, the LLM does not always follow instructions ideally. I used 3.5 for the example since that's Marvin's default and I knew many people wouldn't have GPT-4 access yet (it's significantly better at following instructions); I didn't want to set a misleading expectation.
That said, my instructions for the bot in this example certainly could have been more precise :) For a more realistic example, check out the other one (which works pretty well on 3.5): https://github.com/PrefectHQ/marvin/blob/main/examples/load_...
Thanks!
Caching is highly requested! We have an issue open (https://github.com/PrefectHQ/marvin/issues/102) and expect to tackle it soon.
You can set temperature as a setting today (sorry we haven't documented all the settings yet) by setting the env var `MARVIN_OPENAI_MODEL_TEMPERATURE=0.2` or at runtime with `marvin.settings.openai_model_temperature=0.2`. Note the temperature is set when a bot / ai_fn is created, not when it's called, so you need to do this early.
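Per the comment above, the temperature is read when a bot / ai_fn is created, so the setting must be in place before construction. A minimal sketch (it deliberately avoids importing `marvin` and only shows the env-var form and how a float-valued setting would be parsed):

```python
import os

# Set the env var before any bot / ai_fn is created, since the temperature
# is captured at creation time, not at call time.
os.environ["MARVIN_OPENAI_MODEL_TEMPERATURE"] = "0.2"

# Equivalent runtime form, per the comment above (not executed here):
#   marvin.settings.openai_model_temperature = 0.2

# The settings layer would parse the env var as a float:
temperature = float(os.environ["MARVIN_OPENAI_MODEL_TEMPERATURE"])
```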
the-algorithm-ml
-
AOC said Elon Musk put his 'finger on the scale' during Turkey's presidential election and is 'concerned' it will set a precedent for the 2024 US election
Blog summarising the change: https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm
-
Twitter's For You Recommendation Algorithm
Twitter's announcement | Main GitHub Repo | ML GitHub Repo | Engineering Blog Post
- FLaNK Stack Weekly 3 April 2023
-
Analysis of Twitter algorithm code reveals social medium down-ranks tweets about Ukraine
They have recently made a major part of the source code available: the algorithm. However, there are at least three issues with calling this an "open source Twitter":
-
Something tells me Twitter isn’t going to get anything useful from their GitHub issues
-
Twitter released the source code for the algorithm that recommends tweets
Yeah, that's a basic problem. Plus, the neural network in the middle is just a recipe for a neural network, without any of the training weights, because the data it would embody, people's personal data, is the source of wealth for the network.
- Twitter's Recommendation Algorithm
-
[News] Twitter algorithm now open source
Repo for their recommendation-engine: https://github.com/twitter/the-algorithm-ml
What are some alternatives?
the-algorithm
Finagle - A fault tolerant, protocol-agnostic RPC system
LocalAI - The free, open source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and has voice-cloning capabilities.
cointop - A fast and lightweight interactive terminal based UI application for tracking cryptocurrencies 🚀
bpytop - Linux/OSX/FreeBSD resource monitor
Apollo-11 - Original Apollo 11 Guidance Computer (AGC) source code for the command and lunar modules.
aide - LLM shell and document interrogator
lazydocker - The lazier way to manage everything docker
ctop - Top-like interface for container metrics
use_gpt_as_programming_lang - use GPT as a programming language
zoxide - A smarter cd command. Supports all major shells.