shell_gpt vs zsh-bench

| | shell_gpt | zsh-bench |
|---|---|---|
| Mentions | 38 | 24 |
| Stars | 8,341 | 499 |
| Growth | - | - |
| Activity | 8.0 | 4.1 |
| Last Commit | 7 days ago | 6 months ago |
| Language | Python | Shell |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
shell_gpt
-
Oh My Zsh
https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#shell...
-
Is there a better way to feed my codebase to GPT than using this bash script? How could I bundle the source code more intelligently?
I would like to stay in the terminal, and am using https://github.com/TheR1D/shell_gpt, my format is simply to send GPT a file to discuss:
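One way to bundle a codebase more intelligently than a raw `cat` is to wrap each file with a header so the model can tell where one file ends and the next begins. A minimal sketch (the `bundle_src` helper name is made up here; the piped usage assumes shell_gpt's `sgpt` command is installed):

```shell
#!/bin/sh
# Hypothetical helper: bundle source files into one annotated stream
# so an LLM can tell the files apart.
bundle_src() {
  for f in "$@"; do
    printf '===== FILE: %s =====\n' "$f"
    cat "$f"
    printf '===== END: %s =====\n\n' "$f"
  done
}

# Example usage (requires shell_gpt):
#   bundle_src src/*.py | sgpt "Review this code and point out likely bugs"
```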
-
Ask HN: How are you using LLMs in your command-line?
ShellGPT https://github.com/TheR1D/shell_gpt does pretty well for a lot of use cases. I mostly use it in REPL mode, switching topics as needed. I have wrappers around the `sgpt` command to, say, start a REPL with a particular topic, say, Python, which loads my previous history on that topic as part of the prompt.
I also have an alias to save existing chats as text files so I can go back and review history.
Finally, there is an alias to load a question up in an editor if I need to enter multiline text, e.g. to discuss code fragments, etc.
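Wrappers like the ones described above might look roughly like this (function names are hypothetical; the `--repl` and `--show-chat` flags are taken from shell_gpt's README, so check them against your installed version):

```shell
#!/bin/sh
# Hypothetical wrappers around shell_gpt's `sgpt` command.

# Start (or resume) a REPL tied to a topic; sgpt persists the session by id.
sgpt_topic() {
  sgpt --repl "${1:?usage: sgpt_topic <topic>}"
}

# Save a chat session's transcript to a text file for later review.
sgpt_save() {
  sgpt --show-chat "${1:?usage: sgpt_save <topic> <outfile>}" > "${2:?missing outfile}"
}

# Compose a multiline question in $EDITOR, then send it.
sgpt_edit() {
  tmp=$(mktemp) || return 1
  "${EDITOR:-vi}" "$tmp"
  sgpt "$(cat "$tmp")"
  rm -f "$tmp"
}
```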
I expect command-line workflows to be pretty individualized and I'm curious what others do. For me (old programmer), using a command line REPL feels much more natural (and blissfully noise-free) than going to a Web page to talk to, say, ChatGPT.
-
ChatGPT web and mobile UIs unavailable
The API still works. I've been using https://github.com/TheR1D/shell_gpt/ in my workflow.
-
Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs
I recommend shell-gpt[1] for anyone with access to the OpenAI API. It works surprisingly well considering how simple it is. Be sure to browse the examples in the README.
[1] https://github.com/TheR1D/shell_gpt
-
englizsh: Zsh plugin to interface command-line GPT programs intuitively through keybindings
Nice idea. You may want to mention in the docs that it requires https://github.com/TheR1D/shell_gpt ;)
-
Sideloaded my app on an old second-hand smartwatch, opened a shell and talked to AI on my wrist
I honestly bought the device just to play with the wearable version of my Android app: a voice-controlled SSH client that also lets me pass voice input to shell_gpt. Other pre-installed apps won't really work on this device, including Google Assistant, so I had to switch to the Google Cloud Speech-to-Text API for speech recognition. I can still sideload compatible apps via Bluetooth debugging, though.
-
Can I integrate my local LLM to enable it to run system commands and execute local code?
Maybe this can help: https://github.com/TheR1D/shell_gpt
-
CLI to convert natural language to terminal commands
I'm not really following these but they tend to get released every other day, like https://github.com/nlml/YoCLI/ and https://github.com/TheR1D/shell_gpt
- LLM, ttok and strip-tags – CLI tools for working with ChatGPT and other LLMs
zsh-bench
-
Oh My Zsh
Someone's made a benchmarking system for zsh: https://github.com/romkatv/zsh-bench#premade-configs
Of course, their config is the best according to the benchmark (and ohmyzsh is the slowest option), but DIY configs are also covered, particularly possible performance optimizations.
-
Faster Shell Startup with Shell Switching
Unfortunately, timing `exit` is not a great strategy for benchmarking. For zsh specifically, plugin managers are optimized for fast exit.
romkatv did a great write-up and benchmark within the context of zsh[0]. It's a great read.
[0] https://github.com/romkatv/zsh-bench#how-not-to-benchmark
- Dynamic Aliases and Functions in Zsh
- Benchmark for interactive zsh – plugins, frameworks and plugin managers
- zsh-smartcache: another evalcache but can update the cache
-
Announcing Spaceship v4.0 — a customizable Zsh prompt with asynchronous rendering
Given the addition of async rendering in the latest release of Spaceship, I wasn't sure whether I should include performance in the list of features found in powerlevel10k but not in Spaceship. I used zsh-bench to benchmark powerlevel10k on my laptop running on battery (I'm writing this on a train) with a config that makes powerlevel10k look similar to Spaceship. I simply ran `p10k configure` and chose what looked most similar: Lean Style, Unicode, 256 colors, two lines, etc. Here are the benchmark results:
-
7x slowdown when modifying $fpath and adding a completion script
Obligatory link since you are engaging in profiling interactive zsh: https://github.com/romkatv/zsh-bench.
-
What is the best plugin manager in your opinion?
1. It's fast. Like, really fast.
2. It supports deferred loading via zsh-defer.
3. It supports local plugins as well as ones hosted via a git provider (e.g. GitHub, GitLab, Bitbucket, etc.).
4. The codebase is simple and easy to understand and contribute to.
5. It supports git branches (with tags/SHAs on the roadmap).
6. It supports partial plugin loading, such as loading Oh My Zsh plugins and Prezto modules without loading the whole framework.
7. There's an easy migration path from legacy plugin managers like Antigen/Antibody.
8. Plugins are managed via a simple plugins file that makes it easy to share your config with others.
9. And lots more.
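Deferred loading via zsh-defer, mentioned above, typically looks like this in a .zshrc (a sketch only; the clone locations and plugin paths are illustrative):

```shell
# .zshrc fragment -- assumes romkatv/zsh-defer has been cloned locally.
source ~/zsh-defer/zsh-defer.plugin.zsh

# Heavy plugins are sourced after the first prompt is drawn,
# so they don't add to perceived startup time.
zsh-defer source ~/plugins/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
zsh-defer source ~/plugins/zsh-autosuggestions/zsh-autosuggestions.zsh
```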
-
Zsh significantly faster when sourced from bash with bash as default shell
In any case, slow zsh startup is always caused by whatever you put in zsh startup files and it's always possible to reduce zsh startup to imperceptible levels without sacrificing any functionality by editing said startup files. There is a bit of info on interactive zsh performance at https://github.com/romkatv/zsh-bench.
-
Zpy is a simple zsh plugin manager written in Python that doesn't add to shell startup time. What do y'all think?
Why is this a good thing? Is this a proxy for performance? If so, you can measure performance directly with zsh-bench. This way you can describe the advantage in terms that have real value to end users. For example, you can say that the first prompt appears N milliseconds faster when using Zpy than if you were using something-else.
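zsh-bench reports its results as key=value lines, so a tiny filter (the helper name is hypothetical, and the `first_prompt_lag_ms` key is assumed from zsh-bench's README — verify it against your actual output) could pull out the first-prompt latency being discussed:

```shell
#!/bin/sh
# Hypothetical filter: extract the first-prompt latency from zsh-bench's
# key=value output (format assumed from the zsh-bench README).
first_prompt_lag() {
  grep '^first_prompt_lag_ms=' | cut -d= -f2
}

# Example usage:
#   ~/zsh-bench/zsh-bench | first_prompt_lag
```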
What are some alternatives?
gorilla-cli - LLMs for your CLI
fisher - A plugin manager for Fish
ai-shell - A CLI that converts natural language to shell commands.
zinit - 🌻 Flexible and fast ZSH plugin manager
GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
sheldon - :bowtie: Fast, configurable, shell plugin manager
YoCLI - yo lets you find the CLI command you are looking for by asking in natural language
powerlevel10k - A Zsh theme
butterfish - A shell with AI superpowers
zsh4humans - A turnkey configuration for Zsh
mods - AI on the command line
oh-my-fish - The Fish Shell Framework