opentofu vs awesome-ai-safety

| | opentofu | awesome-ai-safety |
|---|---|---|
| Mentions | 41 | 5 |
| Stars | 20,847 | 138 |
| Star growth (month over month) | 7.9% | 8.0% |
| Activity | 9.8 | 5.6 |
| Last commit | about 19 hours ago | 7 months ago |
| Language | Go | - |
| License | Mozilla Public License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
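The recency weighting described above can be sketched in code. This is a toy illustration assuming exponential decay with a 30-day half-life; the site does not publish its exact formula, so the function and parameters here are hypothetical.

```python
# Toy sketch of a recency-weighted activity score. Assumes exponential decay
# with a 30-day half-life; the real formula used by the site is not published.
def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum commit weights, halving each commit's weight every half_life_days."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Five recent commits outscore five older ones, matching the description above.
recent = activity_score([1, 2, 3, 5, 8])          # commits from this week
stale = activity_score([90, 120, 150, 200, 300])  # commits from months ago
```

Under any such scheme, a project with the same commit count but fresher commits ranks higher, which is the behavior the explanation above describes.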
opentofu
- OpenTofu v1.7: Enhanced Security with State File Encryption, and more
- OpenTofu 1.7.0 is out with State Encryption, Dynamic Provider-defined Functions
Hey!
> With OpenTofu exclusive features making such an early debut, is the intention to remain a superset of upstream Terraform functionality and spec, or allow OpenTofu to diverge and move in its own direction?
The intention is to let it diverge. There will surely be some amount of shared new features, but we're generally going our own way.
> Will you aim to stick to compatibility with Terraform providers/modules?
Yes.
Regarding providers, we might introduce some kind of superset protocol for providers at some point, for tofu-exclusive functionality, but we'll make sure to design it in a way where providers keep working with both Terraform and OpenTofu.
Regarding modules, this one will be more tricky, as there might be Terraform language features that aren't supported in OpenTofu and vice versa. We have a proposal[0] to tackle this and enable module authors to easily create modules that support both, even when using exclusive features of either one.
> Is the potential impact of community fragmentation on your mind, as many commercial users who don't care about open-source ideology stick to the tried-and-true HashiCorp Terraform?
We've talked to a lot of people, and we've met many who see the license changes as a risk for them, while OpenTofu, with its open-source nature, is the less-risky choice. That includes large enterprises.
> Is there any intention to try and supplement the tooling around the core product to provide an answer to features like Terraform Cloud dashboard, sentinel policies and other things companies may want out of the product outside of the command line tool itself?
That's mostly covered by the companies sponsoring OpenTofu's development: Spacelift (I work here), env0, Scalr, Harness, Gruntworks.
[0]: https://github.com/opentofu/opentofu/issues/1328
- IBM to Acquire HashiCorp, Inc
- IBM Planning to Acquire HashiCorp
Please remember to file in a calm and orderly fashion toward the exits, and remember: IBM killed CentOS for profit.
Terraform users can pick up their new alternative here:
https://opentofu.org/
and for those of you with Vault, you can find your new alternative here:
https://openbao.org/
- Grant Kubernetes Pods Access to AWS Services Using OpenID Connect
- OpenTofu v1.6
- Terraform vs. AWS CloudFormation
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open source. OpenTofu is an open-source fork of Terraform that will expand on Terraform's existing concepts and offerings. It is a viable alternative to HashiCorp's Terraform, forked from Terraform version 1.5.6. OpenTofu retained all the features and functionality that made Terraform popular among developers, while also introducing improvements and enhancements. OpenTofu will not have its own providers and modules, but it will use its own registry for them.
- Why CISA Is Warning CISOs About a Breach at Sisense
OpenTofu is solving this with proper state encryption support: https://github.com/opentofu/opentofu/issues/874
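For context, OpenTofu's state encryption is configured inside the terraform block. The sketch below is illustrative, not authoritative: the key-provider, method, and passphrase names are placeholders, and the exact schema should be checked against the linked issue and the OpenTofu documentation.

```hcl
terraform {
  encryption {
    # Derive an encryption key from a passphrase (illustrative values).
    key_provider "pbkdf2" "passphrase_key" {
      passphrase = "example-passphrase-at-least-16-chars"
    }

    # Encrypt with AES-GCM using the derived key.
    method "aes_gcm" "secure" {
      keys = key_provider.pbkdf2.passphrase_key
    }

    # Apply the method to the state file.
    state {
      method = method.aes_gcm.secure
    }
  }
}
```

In practice a passphrase would come from an environment variable or an external key-management provider rather than being hard-coded.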
- OpenTofu Response to HashiCorp's Cease and Desist Letter
- Ask HN: What's better Terraform or AWS CDK?
- OpenTofu: The Open Source Terraform Alternative
As with all other Linux Foundation and CNCF projects, OpenTofu is guided by the Technical Steering Committee (TSC), which works in open collaboration with the community on the development of new features, upgrades, bug fixes, etc. The current TSC consists of representatives from Harness, Spacelift, Scalr, Gruntworks, and env0.
awesome-ai-safety
- Ask HN: Who is hiring? (October 2023)
Giskard - Testing framework for ML models | Multiple roles | Full-time | France | https://giskard.ai/
We are building the first collaborative & open-source Quality Assurance platform for all ML models - including Large Language Models.
Founded in 2021 in Paris by ex-Dataiku engineers, we are an emerging player in the fast-growing market of AI Quality & Safety.
Giskard helps Data Scientists & ML Engineering teams collaborate to evaluate, test & monitor AI models. We help organizations increase the efficiency of their AI development workflow, eliminate risks of AI biases and ensure robust, reliable & ethical AI models. Our open-source platform is used by dozens of ML teams across industries, both at enterprise companies & startups.
In 2022, we raised our first round of 1.5 million euros, led by Elaia, with participation from Bessemer and notable angel investors including the CTO of Hugging Face. To read more about this fundraising and how it will accelerate our growth, you can read this announcement: https://www.giskard.ai/knowledge/news-fundraising-2022
In 2023, we received a strategic investment from the European Commission to build a SaaS platform to automate compliance with the upcoming EU AI regulation. You can read more here: https://www.giskard.ai/knowledge/1-000-github-stars-3meu-and...
We are assembling a team of champions: Software Engineers, Machine Learning researchers, and Data Scientists to build our AI Quality platform and expand it to new types of AI models and industries. We have a culture of continuous learning & quality, and we help each other achieve high standards & goals!
We aim to grow from 15 to 25 people in the next 12 months. We're hiring the following roles:
- Ask HN: Who is hiring? (August 2023)
Giskard - Testing framework for ML models | Multiple roles | Full-time | France | https://giskard.ai/
We are building the first collaborative & open-source Quality Assurance platform for all ML models - including Large Language Models.
Founded in 2021 in Paris by ex-Dataiku engineers, we are an emerging player in the fast-growing market of AI Safety & Security.
Giskard helps Data Scientists & ML Engineering teams collaborate to evaluate, test & monitor AI models. We help organizations increase the efficiency of their AI development workflow, eliminate risks of AI biases and ensure robust, reliable & ethical AI models. Our open-source platform is used by dozens of ML teams across industries, both at enterprise companies & startups.
In 2022, we raised our first round of 1.5 million euros, led by Elaia, with participation from Bessemer and notable angel investors including the CTO of Hugging Face. To read more about this fundraising and how it will accelerate our growth, you can read this announcement: https://www.giskard.ai/knowledge/news-fundraising-2022
In 2023, we received a strategic investment from the European Commission to build a SaaS platform to automate compliance with the upcoming EU AI regulation. You can read more here: https://www.giskard.ai/knowledge/1-000-github-stars-3meu-and...
We are assembling a team of champions: Software Engineers, Machine Learning researchers, and Data Scientists to build our AI Quality platform and expand it to new types of AI models and industries. We have a culture of continuous learning & quality, and we help each other achieve high standards & goals!
We aim to grow from 15 to 25 people in the next 12 months. We're hiring the following roles:
* Software Engineer - https://apply.workable.com/giskard/j/AD2C90B581/ (Python, Java, Typescript, Vue.js, Cloud skills)
* Machine Learning Researcher - https://apply.workable.com/giskard/j/E89FE8E310/ (post-PhD)
* Data Science lead - https://apply.workable.com/giskard/j/E89FE8E310/ (ML + consulting experience required)
* Growth marketing intern - https://apply.workable.com/giskard/j/C8635E9B0C/
* Data Science intern - https://apply.workable.com/giskard/j/7F0B341852/
- Show HN: Python library to scan ML models for vulnerabilities
Hi! I've been working on this automatic scanner for ML models to detect issues like underperforming data slices, overconfidence in predictions, robustness problems, and others. It supports all main Python ML frameworks (sklearn, torch, xgboost, …) and integrates with the quality assurance solution we are building at Giskard AI (https://giskard.ai) to systematically test models before putting them in production.
It is still a beta and I would love to hear your feedback if you have the time to try it out.
We have quite a few tutorials in the docs with ready-made colab notebooks to make it easy to get started.
If you are interested in the code:
https://github.com/Giskard-AI/giskard/tree/main/python-clien...
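The "underperforming data slice" check described above can be illustrated with a small, dependency-free sketch: compute accuracy on each named slice of the data and flag slices that trail overall accuracy by more than some gap. This is a conceptual illustration only; the function names are made up here and the real library's API differs.

```python
# Conceptual sketch of an underperforming-data-slice check (not the
# library's actual API): flag slices whose accuracy trails overall
# accuracy by more than `gap`.
def slice_accuracy(rows, predict, slice_fn):
    """Accuracy of `predict` on the rows where slice_fn(row) is True."""
    subset = [r for r in rows if slice_fn(r)]
    if not subset:
        return None  # empty slice: nothing to measure
    return sum(predict(r) == r["label"] for r in subset) / len(subset)

def find_weak_slices(rows, predict, slices, gap=0.1):
    """Return {slice_name: accuracy} for slices underperforming by > gap."""
    overall = slice_accuracy(rows, predict, lambda r: True)
    weak = {}
    for name, fn in slices.items():
        acc = slice_accuracy(rows, predict, fn)
        if acc is not None and overall - acc > gap:
            weak[name] = acc
    return weak
```

A scanner built on this idea would generate candidate slices automatically (by feature values, text length, metadata, and so on) instead of requiring hand-written slice functions.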
- [R] Awesome AI Safety – A curated list of papers & technical articles on AI Quality & Safety
Repository: https://github.com/Giskard-AI/awesome-ai-safety
- AI Safety – curated papers for safer, ethical, and reliable AI
What are some alternatives?
datadog-static-analyzer - Datadog Static Analyzer
tabby - Self-hosted AI coding assistant
adoptium
awesome-langchain - Awesome list of tools and projects with the awesome LangChain framework
hnrss - Custom, realtime RSS feeds for Hacker News
giskard - Open-Source Evaluation & Testing framework for LLMs and ML models
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
Cap'n Proto - Cap'n Proto serialization/RPC system - core tools and C++ library
nl-wallet - NL Public Reference Wallet
langchain - Build context-aware reasoning applications
mentat - Mentat - The AI Coding Assistant