workers-oauth-provider vs codex

| | workers-oauth-provider | codex |
|---|---|---|
| Mentions | 20 | 27 |
| Stars | 1,478 | 30,279 |
| Growth | 2.5% | 9.8% |
| Activity | 9.0 | 9.8 |
| Last commit | 27 days ago | 7 days ago |
| Language | TypeScript | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
workers-oauth-provider
-
Everything around LLMs is still magical and wishful thinking
> So it kinda worked, but I would not use that for anything "mission critical" (whatever this means).
It means projects like Cloudflare's new OAuth provider library. https://github.com/cloudflare/workers-oauth-provider
> This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.
-
(Experiment) Colocating agent instructions with eng docs
I get that a lot of folks wouldn't want to keep a log, but it makes me so sad that the wonderful aider 'ai peer' recommends adding aider logs of all sorts to the gitignore on startup. This feels bad for humans, and bad for AI sense-making too. If you are having this dialog, of course you'd want to be able to reflect on that, I'd think.
It'd be neat to go further: keeping the agent instructions alongside engineering docs feels like it makes sense. It'd also be neat to see what one could do with Backstage-like integration, to build out that existing wonderful corporate knowledge base.
Are there MCP servers yet that can reflect on chat history? Now I want to see a Backstage MCP server even more, one that's extensible by the many Backstage plugins!
Shout out to Kenton Varda & Cloudflare for doing a nice job keeping a good commit history of AI use on this project, where Kenton was testing the waters. I'm not sure what other good write-ups we have for enshrining & promoting the agent instructions as good reference material. https://github.com/cloudflare/workers-oauth-provider/ https://news.ycombinator.com/item?id=44159166
-
Writing Code Was Never the Bottleneck
To be fair, there was a pretty dumb CVE (which had already been found and fixed by the time the project made the rounds on HN):
https://github.com/cloudflare/workers-oauth-provider/securit...
You can certainly make the argument that this demonstrates risks of AI.
But I kind of feel like the same bug could very easily have been made by a human coder too, and this is why we have code reviews and security reviews. This exact bug was actually on my list of things to check for in review, I even feel like I remember checking for it, and yet, evidently, I did not, which is pretty embarrassing for me.
-
QEMU: Define policy forbidding use of AI code generators
We'll have to see how it pans out for Cloudflare. They published an oauth thing and all the prompts used to create it.
https://github.com/cloudflare/workers-oauth-provider/
-
Agentic Coding Recommendations
There are many examples of exactly what you're asking for, such as Kenton Varda's Cloudflare OAuth provider [1] and Simon Willison's tools [2]. I see new blog posts like this pretty frequently, with detailed explanations of what people did - like Steve Klabnik's recent post [3], which isn't as detailed but has a lot of very concrete facts. There are even more posts from prominent devs like antirez about other things they're doing with AI, like rubber-ducking [4], if you're curious how some of the people who say "I used Sonnet last week and it was great" are actually working - because not everyone uses it to write code. I personally don't, because I care a lot about code style.
[1]: https://github.com/cloudflare/workers-oauth-provider/
[2]: https://tools.simonwillison.net/
[3]: https://steveklabnik.com/writing/a-tale-of-two-claudes/
[4]: https://antirez.com/news/153
-
A look at Cloudflare's AI-coded OAuth library
> A very good piece that clearly illustrates one of the dangers with LLMs: responsibility for code quality is blindly offloaded onto the automatic system
It does not illustrate that at all.
> Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards.
> To emphasize, *this is not "vibe coded"*. Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.
— https://github.com/cloudflare/workers-oauth-provider
The humans who worked on it very, very clearly took responsibility for code quality. That they didn’t get it 100% right does not mean that they “blindly offloaded responsibility”.
Perhaps you can level that accusation at other people doing different things, but Cloudflare explicitly placed the responsibility for this on the humans.
-
I think I'm done thinking about GenAI for now
The author goes into great detail about how he looked at my commit log[0] where I used AI, and he found it "nauseating" and concluded he'd never want to work that way.
I'm certainly not going to tell anyone that they're wrong if they try AI and don't like it! But this guy... did not try it? He looked at a commit log, tried to imagine what my experience was like, and then decided he didn't like that? And then he wrote about it?
Folks, it's really not that hard to actually try it. There is no learning curve. You just run the terminal app in your repo and you ask it to do things. Please, I beg you, before you go write walls of text about how much you hate the thing, actually try it, so that you actually have some idea what you're talking about.
Six months ago, I myself imagined that I would hate AI-assisted coding! Then I tried it. I found out a lot of things that surprised me, and it turns out I don't hate it as much as I thought.
[0] https://github.com/cloudflare/workers-oauth-provider/commits... (link to oldest commits so you can browse in order; newer commits are not as interesting)
-
My AI Skeptic Friends Are All Nuts
What exactly do you want to see put up?
I ask this because it reads like you have a specific challenge in mind when it comes to generative AI and it sounds like anything short of "proof of the unlimited powers" will fall short.
It's almost as if you've set the bar for finding LLMs useful at "proof of unlimited powers."
Here's the deal: Reasonable people aren't claiming this stuff is a panacea. It's useful when used by people who understand its limitations.
If you want to see how it's been used by someone who was happy with the results, and is willing to share their results, you can scroll down a few stories on the front-page and check the commit history of this project:
https://github.com/cloudflare/workers-oauth-provider/commits...
Now here's the deal: These people aren't trying to prove anything to you. They're just sharing the results of an experiment where a very talented developer used these tools to build something useful.
So let me ask you this: Did they put up? Or is it not magical enough for you to deem it useful?
-
Cloudflare builds OAuth with Claude and publishes all the prompts
> did he save any time though
Yes:
> It took me a few days to build the library with AI.
> I estimate it would have taken a few weeks, maybe months to write by hand.
– https://news.ycombinator.com/item?id=44160208
> or just tried to prove a point that if you actually already know all details of impl you can guide llm to do it?
No:
> I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.
— https://github.com/cloudflare/workers-oauth-provider/?tab=re...
codex
-
OpenAI Is Ditching TypeScript to Rebuild Codex CLI with Rust
you can read https://github.com/openai/codex/discussions/1174 directly
-
How do LLMs and AI coding tools solve new problems when Stack Overflow is dead?
Stack Overflow was declining for years, long before ChatGPT. That's clear in the graph.
https://www.theregister.com/2019/10/01/stack_exchange_contro...
It's completely within their rights to eject people based on political opinions.
I don't understand why a gender-neutral website of anonymous people talking about gender-neutral tech has any reason to inject gender, or even be concerned about it.
According to the graph, the peak was in 2017, and by the time ChatGPT launched they had already declined by 50%. After ChatGPT the rate of decline increased, but it was hardly the cause.
>Reddit and other forums
Reddit is declining just like Stack Overflow, for the same reasons.
>A new product similar to Stack Overflow but tailored to the AI era with better user experience might emerge to bridge the gaps. Developers could ask questions in a centralized place and LLM providers could access the difficult coding questions
This is the one thing I recommended to GitHub.
Let's take a random issue as an example:
https://github.com/openai/codex/issues/1330
Why couldn't various AIs have commented with some answer? Perhaps Copilot could read it and produce a fully tested agentic patch that maintainers could look at?
-
Will OpenAI Train on Your Data with Codex CLI and Custom Provider?
-
OpenAI Codex as a native agent in your TypeScript (Node.js) app
OpenAI Codex CLI is an open‑source command‑line tool that brings the power of our latest reasoning models directly to your terminal. It acts as a lightweight coding agent that can read, modify, and run code on your local machine to help you build features faster, squash bugs, and understand unfamiliar code. Because the CLI runs locally, your source code never leaves your environment unless you choose to share it.
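As a rough sketch of what "a native agent in your TypeScript (Node.js) app" can look like without relying on any published programmatic API, here's one way to shell out to the CLI from Node. This assumes `codex` is installed and on PATH and that a prompt can be passed as a positional argument (its basic documented usage); the helper name and output handling are illustrative, and flags vary across Codex CLI versions.

```typescript
import { spawn } from "node:child_process";

// Illustrative helper: run the Codex CLI against a repo with a single prompt
// and resolve with its exit code. Assumes `codex` is on PATH; treat this as a
// sketch rather than the official integration.
function runCodex(prompt: string, cwd: string = process.cwd()): Promise<number> {
  return new Promise((resolve, reject) => {
    const child = spawn("codex", [prompt], { cwd, stdio: "inherit" });
    child.on("error", reject); // e.g. codex not installed
    child.on("close", (code) => resolve(code ?? 1));
  });
}

// Example: ask the agent to explain the codebase it was launched in.
runCodex("explain the structure of this repository").then((code) => {
  console.log(`codex exited with code ${code}`);
});
```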
-
In 2025, how easy is it for a developer to "sandbox" a program?
The situation on macOS is so frustrating. sandbox-exec / seatbelt has been marked as deprecated for nearly a decade now (since macOS Sierra in 2016) but it's still what everyone uses - here's OpenAI using it for their new Codex CLI: https://github.com/openai/codex/issues/215
Maybe the new "containers" stuff in macOS 26 is going to be a good replacement for that? It seems like that's a different solution though.
All I want is an easy, documented, supported way to run a binary on my computer and say "it can only access these files, use this much RAM and it's not allowed to make any outbound network requests". It always surprises me how hard this is!
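For anyone who hasn't seen the Seatbelt policy language the comment refers to, here is a minimal sketch of driving sandbox-exec from Node. The profile, project path, and the `/usr/bin/true` target are all illustrative; real profiles need many more allowances (dyld, system libraries, /dev/null, etc.) before most binaries will even start, and a RAM cap isn't expressible in Seatbelt at all (that would need something like setrlimit/ulimit).

```typescript
import { spawnSync } from "node:child_process";

// A deliberately strict Seatbelt profile: deny everything by default, allow
// reads under one project directory, and deny all network access. This is
// only a sketch of the policy language, not a working confinement policy.
const profile = `
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
(allow file-read* (subpath "/Users/me/project"))
(deny network*)
`;

// sandbox-exec is deprecated but still ships with macOS; -p takes an inline
// profile string, followed by the command to confine.
const result = spawnSync("sandbox-exec", ["-p", profile, "/usr/bin/true"], {
  stdio: "inherit",
});
console.log("sandboxed command exit status:", result.status);
```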
-
Beyond the Hype: A Look at 5+ AI Coding Agents for Your Terminal
Alright, let's kick things off with the heavyweights. OpenAI and Anthropic, the multi-billion dollar giants, are throwing serious manpower and cash at these coding assistants. If you're not a hardcore terminal nerd and just want something that works and boosts your productivity ASAP, these are your first stop. OpenAI's gone open-source with Codex CLI, while Anthropic's keeping Claude Code under wraps. The community's been vibing more with Claude Code lately, but don't count Codex CLI out – that open-source muscle could mean big things as the community jumps in.
-
Claude Code Is My Computer
You can use OpenAI's Codex, or any of the Claude Code clones available on GitHub.
At the end of the day they all do the same thing: a CLI tool that can call tools and use them as needed.
https://github.com/openai/codex
-
Cloudflare builds OAuth with Claude and publishes all the prompts
> I think we are in desperate need of safe vibe coding environments where code runs in a sandbox with security policies that make it impossible to screw up.
OpenAI's new Rust version of Codex might be of interest, haven't dived deeper into the codebase but seems they're thinking about sandboxing from the get-go: https://github.com/openai/codex/blob/7896b1089dbf702dd079299...
-
Codex CLI is going native
For all this "AI is the future," how in the world does OpenAI's own codebase still have "TODO" comments for the most trivial thing I can possibly imagine? <https://github.com/openai/codex/blob/rust-v0.0.2505302325/co...> made extra wtf by a comment at the top of the file laying out that requirement, so no "?" required <https://github.com/openai/codex/blob/rust-v0.0.2505302325/co...>
I would bet it took more wall-clock time to type out that comment than it would have for any number of AI agents to snap the required equivalent of `if not re.match(...): continue`
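For what it's worth, the fix the comment describes is just a guard at the top of a loop. Here is a hypothetical TypeScript sketch; the pattern and the sample entries are invented, since the actual requirement sits behind the truncated links above.

```typescript
// Hypothetical sketch of the one-line guard described above: skip any entry
// that doesn't match the required format. The pattern is made up here; the
// real requirement is documented in the file linked in the comment.
const requiredFormat = /^[a-z][a-z0-9_-]*$/;

for (const entry of ["valid-name", "Not Valid!", "another_ok_one"]) {
  if (!requiredFormat.test(entry)) continue; // the `if not re.match(...): continue` equivalent
  console.log(`processing ${entry}`);
}
```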
-
Jules: An Asynchronous Coding Agent
What are some alternatives?
windsurf.vim - Free, ultrafast Copilot alternative for Vim and Neovim
daemonite - The open-source Bismuth CLI
gopool - GoPool is a high-performance, feature-rich, and easy-to-use worker pool library for Golang.
aider - aider is AI pair programming in your terminal
mpac-ui-improved
CodeSage - CodeSage uses AI to generate detailed comments and documentation for your codebase, making it easier to understand and maintain. With support for the Go programming language and an in-memory vector database, it ensures fast and efficient codebase indexing and search functionality.
