THELEMA vs gpt-3-experiments
| | THELEMA | gpt-3-experiments |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 14 | 709 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | about 8 years ago | almost 4 years ago |
| Language | Prolog | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
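The site doesn't publish its exact formula, but the stated idea of weighting recent commits more heavily can be illustrated with a simple exponential decay over commit ages. This is only a sketch of that idea, not the actual metric; `half_life_days` is an assumed parameter:

```python
import math
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that halves every `half_life_days`, so recent commits count
    for much more than older ones."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for when in commit_dates:
        age_days = (now - when).total_seconds() / 86400.0
        score += math.exp(-math.log(2) * age_days / half_life_days)
    return round(score, 1)

# A repo whose last commit is years old scores ~0.0, matching the
# 0.0 shown for both projects in the table above.
print(activity_score([datetime(2017, 1, 1, tzinfo=timezone.utc)]))
```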
THELEMA
-
The Computers Are Getting Better at Writing
Representing costs in a meaningful manner is a constant problem in every M:tG generator I've seen.
The problems I highlight above are not with grammaticality, which is certainly a big step forward compared to past generators. But many of the abilities still don't make much sense, or don't make sense together on the same card, or have weird costs, etc.
My intuition is that it would take a lot more than language modelling to generate M:tG cards that make enough sense that it's more fun to generate them than create them yourself. I think it would be necessary to have background knowledge of the game, at least its rules, if not some concept of a metagame.
Also, I note that the new online version of the game is capable of parsing cards as scripts in a programming language, using a hand-crafted grammar rather than a machine-learned model [4] [5]. So it seems to me that the state of the art for M:tG language modelling is still a hand-crafted grammar (a rough sketch of what such a parser can look like follows the footnotes below).
__________________
[1] https://github.com/stassa/Gleemin - unfortunately, this no longer runs after multiple changes to the Prolog interpreters used to create, and then port, the project.
[2] https://github.com/stassa/THELEMA - should work with older versions of SWI-Prolog; unfortunately, this is not documented in the README.
[3] https://link.springer.com/article/10.1007/s10994-020-05945-w - see Section 3.3 "Experiment 3: M:tG fragment".
[4] https://www.reddit.com/r/magicTCG/comments/74hw1z/magic_aren...
[5] https://www.reddit.com/r/magicTCG/comments/9kxid9/mtgadisper...
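To make the hand-crafted-grammar point concrete, here is a minimal sketch of what parsing card text "as a script" can look like. This is entirely hypothetical and heavily simplified, not MTG Arena's actual parser; it covers a single ability template of the form "<cost>: <effect>":

```python
import re

# Hypothetical, heavily simplified grammar for one family of activated
# abilities, e.g. "{2}{W}: You gain 3 life." A real parser (as described
# in the threads at [4] and [5]) covers the full rules text of the game.
COST = re.compile(r"\{([0-9WUBRG])\}")
GAIN_LIFE = re.compile(r"You gain (\d+) life\.")

def parse_ability(text):
    cost_part, _, effect_part = text.partition(": ")
    cost = COST.findall(cost_part)
    if not cost:
        raise ValueError(f"unparseable cost: {cost_part!r}")
    match = GAIN_LIFE.fullmatch(effect_part)
    if match is None:
        raise ValueError(f"unparseable effect: {effect_part!r}")
    # The "script" a game engine could then execute:
    return {"cost": cost, "effect": ("gain_life", int(match.group(1)))}

print(parse_ability("{2}{W}: You gain 3 life."))
# {'cost': ['2', 'W'], 'effect': ('gain_life', 3)}
```

Note that such a grammar doubles as a validity filter: generated card text that fails to parse can be rejected outright, which a pure language model cannot do by itself.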
gpt-3-experiments
-
AI chatbots are not a replacement for search engines
The problem with ChatGPT as a replacement for Google is that it was not designed to produce accurate facts, and it shows. This model cut its teeth writing articles about the discovery of unicorns in the Andes [0], for goodness' sake! It's a language model, and a very impressive one at that, but language is used to express falsehoods and fiction just as regularly as it is used to express truth.
This doesn't mean that it can't produce accurate facts; most of the time it does! But when it does produce nonsense, it does so in exactly the same tone of authority, so if you don't already know the answer you may well walk away believing an AI hallucination.
And the trouble is, it doesn't really matter if everyone here thinks "well, I would follow up each request with research to verify the answer", because most people won't! This is like Google's answer extracts, which fairly frequently mislead by quoting out of context, except that here there's no way to get the original context, and there may in fact be no original context! That makes follow-up research much more complicated than with Google, and therefore unlikely to happen. If ChatGPT replaces Google, the amount of nonsense on the internet will get even worse, which is something that until 2022 I never thought was possible.
[0] https://github.com/minimaxir/gpt-3-experiments/blob/master/e...
-
Artificial Intelligence writes
-
The Computers Are Getting Better at Writing
See also my experiments with GPT-3 on sane prompts, which have wildly varying quality even after generating them in bulk: https://github.com/minimaxir/gpt-3-experiments
Surprisingly, creative writing hasn't been one of the use cases OpenAI has hyped for the OpenAI API, outside of AI Dungeon. For purely random generation, the necessary curation can eat into the time-savings advantage. (As an aside, the API is also extremely expensive for long-form content, to the point that I'm not sure how the economics work for these startups even with monthly fees; a back-of-the-envelope sketch follows below.)
I'm more bullish on small bespoke models for a given use case, which is what I spend my time researching.
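To put a number on "extremely expensive": a rough sketch, where the price is an assumption (davinci-class completions were priced on the order of $0.02-$0.06 per 1,000 tokens in this period) and tokens-per-word is the usual rough estimate for English:

```python
PRICE_PER_1K_TOKENS = 0.06   # USD; assumed davinci-era price
TOKENS_PER_WORD = 1.33       # rough average for English text

def generation_cost(words, drafts=3):
    """Estimated API cost of producing `drafts` candidate versions of a
    piece, since curation means generating more than you keep."""
    tokens = words * TOKENS_PER_WORD * drafts
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# A 5,000-word long-form piece, keeping one of three drafts:
print(f"${generation_cost(5000):.2f}")  # -> $1.20
```

Over a dollar per article before any heavier curation, multiplied across subscribers generating freely, is where flat monthly fees start to look tight.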
-
Does GPT-2 Know Your Phone Number?
Thanks, I didn't twig that you'd linked a subtree of the whole repo. Weird that even with a nonzero temperature the AskReddit prompt went a bit loopy.
> https://github.com/minimaxir/gpt-3-experiments/blob/master/e...
Oh my goodness that is absurd in the most delightful way. Thanks for sharing that.
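For context on the "nonzero temp" remark: sampling divides the model's logits by the temperature before the softmax, flattening the distribution so the single most likely continuation isn't chosen at every step. A minimal standalone sketch (not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_temperature(logits, temperature=0.7):
    """Pick a token index from raw logits. Higher temperature flattens
    the distribution (more variety); temperature -> 0 approaches greedy
    decoding, which is what tends to produce repetitive loops."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, 0.1]  # toy 4-token vocabulary
print([sample_with_temperature(logits) for _ in range(10)])
```

Even so, if the model piles nearly all of its probability mass onto the repeating continuation, sampling at moderate temperature will still loop, which is presumably what happened with that prompt.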
What are some alternatives?
vim-LanguageTool - A vim plugin for the LanguageTool grammar checker
languagetool - Style and Grammar Checker for 25+ Languages
chatgpt-google-extension - A browser extension that enhances search engines with ChatGPT
Gleemin - A Magic: the Gathering™ expert system
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
chatgpt-raycast - ChatGPT raycast extension
Lobsters - Computing-focused community centered around link aggregation and discussion
newsboat - An RSS/Atom feed reader for text terminals