marvin vs tmate
| | marvin | tmate |
|---|---|---|
| Mentions | 16 | 37 |
| Stars | 4,661 | 5,505 |
| Growth | 4.8% | 0.8% |
| Activity | 9.9 | 0.0 |
| Latest commit | 7 days ago | 6 months ago |
| Language | Python | C |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marvin
- Show HN: Magentic – Use LLMs as simple Python functions
Seems a lot like https://github.com/PrefectHQ/marvin?
The prompting you do seems an awful lot like:
Yes, similar ideas. Marvin [asks the LLM to mimic the python function](https://github.com/PrefectHQ/marvin/blob/f37ad5b15e2e77dd998...), whereas in magentic the function signature just represents the inputs/outputs to the prompt-template/LLM, so the LLM “doesn’t know” that it is pretending to be a python function - you specify all the prompts.
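To make the contrast concrete, here is a minimal sketch of the two styles, based on how each project's README presented them around this time (the decorators `ai_fn` and `prompt` come from Marvin and magentic respectively; the `summarize` functions below are made-up examples, and both libraries still need an OpenAI API key configured to actually run):
```python
from magentic import prompt
from marvin import ai_fn


# Marvin style: the LLM is asked to act as if it *were* this function,
# inferring behavior from the signature and docstring.
@ai_fn
def summarize(text: str) -> str:
    """Summarize `text` in one sentence."""


# magentic style: the signature only declares inputs and outputs;
# the prompt template is written out explicitly, so the LLM never
# "knows" it is pretending to be a Python function.
@prompt("Summarize the following text in one sentence:\n\n{text}")
def summarize_explicitly(text: str) -> str: ...
```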
- 4-Apr-2023
Marvin: a batteries-included library for building AI-powered software. Marvin's job is to integrate AI directly into your codebase by making it look and feel like any other function (https://github.com/PrefectHQ/marvin)
- Magic - AI functions for Typescript
Sure! I was inspired by this Python library: https://github.com/PrefectHQ/marvin
- Show HN: A ChatGPT TUI with custom bots
I see Langchain has support for Azure chat models, and Marvin is built on Langchain so it may not be so difficult! Tracking issue here: https://github.com/PrefectHQ/marvin/issues/189
- FLaNK Stack Weekly 3 April 2023
- Show HN: Marvin – build AI functions that use an LLM as a runtime
Check out this example from the docs to see how to take a URL as argument and then pass content to the LLM: https://www.askmarvin.ai/guide/concepts/ai_functions/#sugges...
(The previous example is also good)
A few things you could consider:
1. We have a utility for getting content out of HTML at marvin.utilities.strings.html_to_content. That would probably significantly compress it.
2. Chunk the HTML into batches that fit in context, send each over with an AI function that summarizes it (you could instruct the AI function to optimize the summary to help with title generation), then send all the resulting summaries to a title generator (see the sketch after this list)
3. We have a suite of HTML loader classes that will probably be ready for production in a couple releases (see https://github.com/PrefectHQ/marvin/blob/main/src/marvin/loa...) but you could try them out now (note: these use parts of Marvin beyond just AI functions, so I'm not recommending it as a drop-in right now). Our loader classes are (ideally) designed to do more than just chunk the input; depending on the nature of the input we do different preprocessing steps to help with insight.
4. Experiment and let us know what you learn - we can incorporate it into a loader class if it's effective
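A rough sketch of what point 2 could look like with AI functions, assuming the `html_to_content` utility from point 1 (its exact signature may differ); the helpers `summarize_chunk`, `suggest_title`, and `title_from_html`, along with the naive fixed-size chunking, are illustrative rather than anything Marvin ships:
```python
from marvin import ai_fn
from marvin.utilities.strings import html_to_content


@ai_fn
def summarize_chunk(text: str) -> str:
    """Summarize `text` in 2-3 sentences, keeping details useful for choosing a page title."""


@ai_fn
def suggest_title(summaries: list[str]) -> str:
    """Given summaries of consecutive sections of one web page, suggest a concise title."""


def title_from_html(html: str, chunk_size: int = 4000) -> str:
    # Strip markup first so each chunk is mostly real content.
    text = html_to_content(html)
    # Naive fixed-size chunking; Marvin's loader classes do smarter preprocessing.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    summaries = [summarize_chunk(chunk) for chunk in chunks]
    return suggest_title(summaries)
```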
Here https://github.com/PrefectHQ/marvin/blob/main/examples/end-t... the prompt says
instructions=(
Hi!
This example was produced using GPT-3.5 Turbo, where, yes, the LLM does not always follow the instructions perfectly. I used 3.5 for the example since that's Marvin's default and I know many people don't have GPT-4 access yet (it is significantly better at following instructions) - I didn't want to set a misleading expectation.
That said, my instructions for the bot in this example certainly could have been more precise :) For a more realistic example, check out this one (which works pretty well on 3.5): https://github.com/PrefectHQ/marvin/blob/main/examples/load_...
Thanks!
Caching is highly requested! We have an issue open (https://github.com/PrefectHQ/marvin/issues/102) and expect to tackle it soon.
You can set temperature as a setting today (sorry we haven't documented all the settings yet) by setting the env var `MARVIN_OPENAI_MODEL_TEMPERATURE=0.2` or at runtime with `marvin.settings.openai_model_temperature=0.2`. Note the temperature is set when a bot / ai_fn is created, not when it's called, so you need to do this early.
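A minimal sketch of the runtime route, using the `marvin.settings.openai_model_temperature` setting named above (the `keywords` function is just an illustrative example, and an OpenAI API key still needs to be configured separately):
```python
import marvin
from marvin import ai_fn

# Temperature is read when a bot / ai_fn is created, so set it before
# defining any AI functions (alternatively, export
# MARVIN_OPENAI_MODEL_TEMPERATURE=0.2 before starting the process).
marvin.settings.openai_model_temperature = 0.2


@ai_fn
def keywords(text: str) -> list[str]:
    """Extract the five most important keywords from `text`."""
```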
tmate
- FLaNK Stack Weekly 3 April 2023
- ttyd - Share your terminal over the web
- Show HN: Quick tunnels to localhost with one command and no binary download
- Is Live Share collaboration impossible?
Check out https://tmate.io/
- Zellij - a terminal workspace and multiplexer - releases new version with Sixel support and much more
So I did, here and on tmate's repo
- Tmate and Zellij can be the future!
Issue Link
The multiplexing tool is Zellij and the ssh joining service is Tmate.io
- Termius (YC W19) – Share your terminal session like Google Docs
I was looking for something like this recently, and found tmate [0], which is an open source solution to the terminal-sharing problem.
I have no idea what Termius is, but I have some feedback for you — your landing page takes over 4 seconds (!) to show anything for me on Firefox, even if I have visited the page before. This doesn't seem to be related to loading resources alone, since that happens fairly fast.
(I am not on a 56kbps connection.)
Not that I can think of. I've been a happy tmate user for several years, and the feature set is more than I need. The bad parts have just been reliability at some points over the past few years. In my case, I can only recall having bumped into this[0] one.
- $ sudo rm -rf / === NPM install
Some of this is solved with locked dependencies and an understanding of how "npm install" actually works, but I find this point really takes away any credibility from the article:
> This is just one of the reasons why I think by 2023 working with ephemeral cloud-based dev environments will be the standard. Just like CI/CD is today.
Over my dead body will I ever use a cloud environment to write code. tmate[0] might be the closest I ever get.
I get some developers don't care about what they throw aimlessly onto a cloud, but I don't know what's being logged, what's being stored, how long it's being stored, who has access to it, what third-parties have access to it, and so on. The business I own or the one I write code for might care if those files were exposed like that.
Use a local VM. Or a development proxy. The cloud isn't a solution for everything.
What are some alternatives?
Sshwifty - Web SSH & Telnet (WebSSH & WebTelnet client) 🔮
vim-dadbod-ui - Simple UI for https://github.com/tpope/vim-dadbod
tty-share - Share your linux or osx terminal over the Internet.
termpair - View and control terminals from your browser with end-to-end encryption 🔒
asciinema - Platform for hosting and sharing terminal session recordings
Neko - A self hosted virtual browser (rabb.it clone) that runs in docker.
instant.nvim - collaborative editing in Neovim using built-in capabilities
CoVim - Collaborative Editing for Vim
emacs-edbi - Database Interface for Emacs Lisp
Tabby - A terminal for a more modern age
vim-smoothie - Smooth scrolling for Vim done right🥤
wishlist - A public catalogue of Lua plugins Neovim users would like to see exist