| | pytest-codspeed | code-interpreter |
|---|---|---|
| Mentions | 6 | 37 |
| Stars | 100 | 1,974 |
| Growth | 2.0% | 3.4% |
| Activity | 8.2 | 9.3 |
| Latest commit | about 1 month ago | 4 days ago |
| Language | Python | MDX |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytest-codspeed
- Show HN: P99.chat – Chat for Performance Measurement
- Show HN: P99.chat – the assistant for software performance optimization
- Ask HN: Who is hiring? (March 2025)
CodSpeed | Founding AI Engineer | On-site (Paris) / Remote (Europe) | Full-time | https://codspeed.io
We're building software performance tools that measure and optimize code performance before it is deployed to production. We help teams avoid regressions that impact UX and help developers solve their performance issues faster. We're already live and trusted by top-tier open-source project teams such as Pydantic, Ruff, and Prisma.
We're at an exciting early stage and looking for talented engineers who share our passion for enhancing the performance of software used by billions, improving the software development lifecycle, and building tools we love to use ourselves.
Apply at https://codspeed.notion.site/Founding-AI-Engineer-cd1bf4fd73...
- CodSpeed – integrated CI tool for performance testing
- Pinpoint performance regressions with CI-integrated differential profiling
pytest-codspeed, a plugin for pytest
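As a rough sketch of what the plugin looks like in practice: it mirrors the pytest-benchmark interface, so a test only needs the `@pytest.mark.benchmark` marker or the `benchmark` fixture. This assumes pytest-codspeed is installed (`pip install pytest-codspeed`) and the suite is run with `pytest --codspeed`; the `fibonacci` function is just an illustrative workload.

```python
import pytest

def fibonacci(n: int) -> int:
    # Deliberately slow recursive implementation, used only as a workload.
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

@pytest.mark.benchmark
def test_fibonacci_marker():
    # The whole test body is measured when run with `pytest --codspeed`.
    fibonacci(15)

def test_fibonacci_fixture(benchmark):
    # pytest-benchmark-style fixture: only the wrapped call is measured.
    benchmark(fibonacci, 15)
```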
code-interpreter
- Open Lovable
I didn't know about https://e2b.dev/, but I was looking for something exactly like that. Does anyone know of any self-hostable alternatives?
- Sandboxing AI - Extending AI Responsibly
Sandboxing through platforms like E2B.dev
- Show HN: A MCP server to evaluate Python code in WASM VM using RustPython
I work on E2B; we're open-source and provide sandboxes for Perplexity, Manus, and Hugging Face, among others.
Check it out: https://e2b.dev
- Show HN: Browser-based MCP Chat – run MCPs without local setup
We built a tiny web app that lets you connect to any MCP server and chat with it right from the browser.
It spins up MCP servers in E2B sandboxes and converts their stdio to SSE. The chat app runs entirely client-side, so your API keys are sent directly and only to their respective services (E2B, model providers) - never to our server.
Since everything is client-side, you need an OpenAI/Anthropic API key and an E2B API key to try it [https://e2b.dev].
Try it here: https://netglade.github.io/mcp-chat/
We built it as a hackathon project and won first prize with it, though we're not really sure what a meaningful way to move it forward would be, so we welcome all types of feedback.
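The stdio-to-SSE conversion the post describes could look roughly like the sketch below: each line a stdio MCP server writes to stdout is forwarded as one SSE event. This is not the project's actual code; the server command and the aiohttp-based endpoint are assumptions for illustration, and the client-to-server direction (writing to the process's stdin) is omitted.

```python
import asyncio
from aiohttp import web  # assumed web framework; not from the post

MCP_CMD = ["python", "-m", "some_mcp_server"]  # hypothetical stdio MCP server

async def sse_handler(request: web.Request) -> web.StreamResponse:
    resp = web.StreamResponse(
        headers={"Content-Type": "text/event-stream", "Cache-Control": "no-cache"}
    )
    await resp.prepare(request)
    proc = await asyncio.create_subprocess_exec(
        *MCP_CMD, stdout=asyncio.subprocess.PIPE
    )
    try:
        # Forward each JSON-RPC line from the server's stdout as an SSE event.
        # (Messages from the client back to proc.stdin are omitted for brevity.)
        async for line in proc.stdout:
            await resp.write(b"data: " + line.rstrip() + b"\n\n")
    finally:
        proc.terminate()
    return resp

app = web.Application()
app.router.add_get("/sse", sse_handler)

if __name__ == "__main__":
    web.run_app(app, port=8080)
```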
- Show HN: Online Python Compiler with Libraries
You might check out https://e2b.dev, they already have a really robust sandbox system with nice SDKs.
- Ask HN: Who is hiring? (March 2025)
E2B | Distributed Systems/AI/DevRel Engineers | Full-time | San Francisco/Prague (in-person only) | $150k - $300k + equity (0.1% - 1%) | https://e2b.dev
Hi, I'm Vasek. CEO of E2B (https://e2b.dev). We're building an open-source infrastructure for AI code interpreting/code execution in our Sandboxes.
We have customers like Perplexity, Vercel, Hugging Face, and You.com; we have revenue, we're growing, and we've raised over $11M. We're hiring engineers across the board to work on our infrastructure, SDKs, AI projects, and user dashboard.
We're a team of 15; the core team is immigrants from Europe who moved to San Francisco. We work from our office in SF and from a newly opened office in Prague - both are in-person. The two technical co-founders have known each other for 15 years.
We're looking for:
- Distributed systems engineer to work on our infrastructure that's powered by Firecracker and Nomad.
- AI engineer to help us build more opinionated AI codegen tools on top of our sandboxes (think our version of Next.js)
- DevRel engineer to help us inspire more developers by creating mini projects, examples, integrations, and help us with our Discord community.
If this sounds interesting to you, shoot me an email at vasek at e2b.dev. We've found some great people on HN before so looking forward to hearing from you!
- This Week in Docker: AI, AI, AI!
My friends at E2B are currently hiring for a Distributed Systems Engineer role. Probably a very demanding role, but it's a cracked team and you get to work on open source!
- Generative AI Powered QnA & Visualization Chatbot
LLM Code Execution: Running LLM-generated code on the same server as the application is not recommended; this risk can be avoided by running the code in sandbox environments such as Modal and E2B.
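As a minimal sketch of that advice using E2B's e2b-code-interpreter Python SDK: the "LLM-generated" snippet here is a stand-in, and the exact interface may differ across SDK versions, so treat this as illustrative rather than definitive.

```python
from e2b_code_interpreter import Sandbox

# Stand-in for untrusted, LLM-generated code; never exec() this in-process.
llm_generated_code = "import os; print(sorted(os.listdir('/')))"

sandbox = Sandbox()  # isolated cloud sandbox; reads E2B_API_KEY from the env
try:
    execution = sandbox.run_code(llm_generated_code)
    print("".join(execution.logs.stdout))
finally:
    sandbox.kill()  # always tear the sandbox down
```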
- Show HN: Open Computer Use
Hi everyone, I'm Vasek (https://x.com/mlejva), the CEO of the company behind this - https://e2b.dev. The company is called E2B. We're an open-source (https://github.com/e2b-dev) devtool that makes it easy to run untrusted AI-generated code in our secure sandboxes. You can think of us as a coding runtime for LLMs.
This repo is one of our open-source projects that we're releasing to show developers what they can build with E2B. We used our sandboxes, which are powered by AWS's Firecracker, and gave them a Linux GUI. We also made it easy to control this cloud computer with our Desktop SDK (https://github.com/e2b-dev/desktop). Essentially, we built a virtual desktop computer for AI and gave LLMs control of it.
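Driving such a cloud desktop with the Desktop SDK linked above might look like the hedged sketch below; the method names (move_mouse, left_click, write, screenshot) are assumptions based on the repo's README and may differ in current releases.

```python
from e2b_desktop import Sandbox  # the Desktop SDK linked above

desktop = Sandbox()  # boots a sandboxed Linux machine with a GUI
try:
    # Move the mouse, click, and type, as an LLM-driven agent would.
    # NOTE: method names are assumptions; check the SDK docs.
    desktop.move_mouse(100, 200)
    desktop.left_click()
    desktop.write("hello from an AI-controlled desktop")
    # Capture the screen so the model can "see" the result.
    image = desktop.screenshot()
    with open("screen.png", "wb") as f:
        f.write(image)
finally:
    desktop.kill()
```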
- Show HN: AgentScript AI – Build Agents that think in code
> do not even need to know all the data to perform operations on it or make decisions
If I know code generation is going to be possible without any contextual information, I might as well generate the code using Copilot or Cursor and commit it. Why do I need a runtime agent to do it?
What if the control flow has to change based on a result it receives?
What if the plan up front is wrong and needs to change halfway? Do I run the entire thing again with a new plan? What if my tools are not idempotent?
What if it generates a recursive loop?
Also, if I really want to do this, and if my tools are safe, why don't I just do a raw OpenAI / Claude call and get Deno Subhosting [1] or E2B [2] to run it?
[1] https://deno.com/subhosting
[2] https://e2b.dev/
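The commenter's "raw OpenAI / Claude call plus E2B" suggestion might look like the sketch below, assuming the openai and e2b-code-interpreter Python SDKs; the model name and prompt are purely illustrative.

```python
from openai import OpenAI
from e2b_code_interpreter import Sandbox

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One raw completion call to generate the code (model and prompt illustrative).
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write Python that prints the first 10 primes. "
                   "Reply with code only, no backticks.",
    }],
)
code = response.choices[0].message.content

# ...and an E2B sandbox to run it, instead of executing it locally.
sandbox = Sandbox()  # reads E2B_API_KEY from the environment
try:
    execution = sandbox.run_code(code)
    print("".join(execution.logs.stdout))
finally:
    sandbox.kill()
```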
What are some alternatives?
pytest-benchmark - pytest fixture for benchmarking code
vortex - An extensible, state of the art columnar file format. Formerly at @spiraldb, now a Linux Foundation project.
less_slow.py - Playing around "Less Slow" coding practices in Python, from numerical micro-kernels to coroutines, ranges, and polymorphic state machines
e2b-cookbook - Examples of using E2B
pyperf - Toolkit to run Python benchmarks
E2B - Open-source, secure environment with real-world tools for enterprise-grade agents.