peerreview VS matano

Compare peerreview vs matano and see how they differ.

peerreview

A diamond open access (free to access, free to publish), open source scientific and academic publishing platform. (by danielBingham)
                peerreview                              matano
Mentions        7                                       38
Stars           51                                      1,359
Stars growth    -                                       1.3%
Activity        8.8                                     7.0
Latest commit   17 days ago                             3 months ago
Language        JavaScript                              Rust
License         GNU Affero General Public License v3.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

peerreview

Posts with mentions or reviews of peerreview. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-28.
  • Request for Feedback: An open-source, open-access, community governed academic publishing platform that crowdsources review using reputation
    2 projects | /r/AskAcademia | 28 Jun 2023
    Hey everyone, I'm an experienced software engineer from an academic family. I've been aware of the problems in academic publishing for most of my life, and for the last several years I've been running headlong into the paywalls as I work on municipal policy advocacy. I've been pondering software solutions to this problem for a long time. This is exactly the sort of problem internet-based software is, in theory, best suited to solving: sharing and discussing information. It should be possible to build a web platform that allows academia to share work, collect feedback, organize review that maintains quality, and find relevant papers without relying on private, for-profit journal publishers. It should be possible to build and run a web platform that handles all of academic publishing for 1% of the current cost of for-profit publishing or less - which would (in theory) allow the universities to keep it funded while allowing it to be free to publish and free to access. Hell, it could probably be run lean enough that individual academics could fund it through small-dollar donations. There's really no good reason to allow the private publishers to charge academia $11 billion a year while keeping 80% of the work locked behind paywalls.

    I've had several ideas for how to approach the problem, and I spent the last year building out a beta of one of them as a side project. Software development is experimental and iterative. It only works when the developers are able to get active feedback from the people most affected by the problems they are trying to solve. So I'm reaching out for feedback on the beta, and on possible paths forward.

    The web platform that I've built enables crowdsourced peer review of academic papers. It uses a reputation system (similar to StackExchange) and ties reputation to a field/concept tagging system. Submitted papers must be tagged with 1 - n fields, and only peers who have passed a reputation threshold in one of the tagged fields may offer review. Review is also split into two phases: pre-publish and post-publish. Pre-publish review is author-driven. It's focused on collaborative, constructive feedback and uses an interface heavily inspired by both Github Pull Requests and Google Docs. Post-publish review is much closer to traditional review, and is focused on maintaining the integrity of the literature by filtering out spam, misinformation, fraud, and poorly done work. Reputation is mostly gained and lost through voting that happens during post-publish review. Reputation can also be gained by offering particularly constructive pre-publish reviews. All reviews are open and published alongside the papers. Post-publish review is ongoing.

    That's iteration one. As much as I believe review could be crowdsourced, it seems pretty clear that going straight from what we have to this platform would be a huge leap. So I have ideas for how to build a journal overlay on top of the crowdsourced review system that would allow editors to manage teams of reviewers and run their journals through the platform. This would allow them to take advantage of the review interface, and would still give authors the benefit of being able to have a conversation with their reviewers. Authors would then be able to choose to submit their papers to one or more journals, crowdsourced review, or both. Building that out is the next project.

    Right now I'm working on this as a side project and an experiment -- could a web platform like this work? Would people even use it? If the answer turns out to be yes, I'd love for it to become a non-profit, multi-stakeholder cooperative. Essentially independent public infrastructure similar to Wikipedia, only more transparent and more clearly democratically governed.

    I would love feedback on all aspects of this project - both the current crowdsourcing iteration and the thought to build a generic, open platform for diamond open access journals to run their operations through. Could you ever see yourself using something like this to publish? What about to collect pre-print review? Could you see yourself reviewing through it? What about submitting to journals through it? Are there other approaches to building a web platform that might work better? Am I barking up the wrong tree? Should I press forward, abandon, or is there a better tree?

    You can find the beta platform here: https://peer-review.io
    The source here: https://github.com/danielbingham/peerreview
    And more details about exactly how it works (in its current iteration) here: https://peer-review.io/about
    Maintaining an open roadmap here: https://github.com/users/danielBingham/projects/6/views/1
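
    A minimal sketch of the review-gating rule the post describes (papers tagged with one or more fields, review open only to peers above a reputation threshold in at least one tagged field). The function, field names, and threshold below are illustrative assumptions, not code from the peerreview repository.

    ```python
    # Illustrative sketch of the gating rule described above; the threshold
    # value and field names are hypothetical, not from the peerreview codebase.
    REVIEW_THRESHOLD = 100  # assumed reputation required in a field to review

    def can_review(reviewer_reputation: dict[str, int], paper_fields: list[str]) -> bool:
        """A peer may review a paper if they meet the reputation threshold
        in at least one of the fields the paper is tagged with."""
        return any(
            reviewer_reputation.get(field, 0) >= REVIEW_THRESHOLD
            for field in paper_fields
        )

    # Example: a reviewer strong in ecology but not statistics can still
    # review a paper tagged with both fields.
    print(can_review({"ecology": 250, "statistics": 40}, ["statistics", "ecology"]))  # True
    ```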
  • Show HN: Scientific publishing platform to crowdsource review using reputation
    2 projects | news.ycombinator.com | 28 Jun 2023
  • Millions of dollars in time wasted making papers fit journal guidelines
    5 projects | news.ycombinator.com | 8 Jun 2023
  • Request for Feedback: Peer Review - Open Source, Open Access Scientific Publishing Platform drawing on Github and StackExchange
    2 projects | /r/Open_Science | 5 Jun 2023
    And the source code here: https://github.com/danielbingham/peerreview
  • Open-Source Science (OSSci) to launch interest group on reproducible science
    1 project | /r/Open_Science | 5 Jun 2023
    Last summer I finally saved up enough runway to take some time off work and put a significant amount of time into building an MVP beta of it ( https://peer-review.io, https://github.com/danielbingham/peerreview ). I've been trying to find folks interested in trying it out and exploring whether it could work.
  • Show HN: Peer Review Beta – A universal preprint+ platform
    1 project | news.ycombinator.com | 25 Apr 2023
    Hey HN,

    I've been working on Peer Review for the past year. It's still in early beta (pre-0.1) but I'm looking for some early adopters to start putting it through its paces and help highlight areas I should focus on.

    Peer Review is an idea I've had for years. You're probably well aware of the problems involved in academic, scientific, and scholarly publishing - HN certainly discusses them enough. Peer Review is my attempt to solve them (or a subset of them).

    Peer Review combines features of Github and StackExchange to allow scholarly review to be crowdsourced to a trusted pool of peers. It does this by tying reputation to a hierarchical field tagging system. Reputation gained in child fields is also gained in their parent fields. Authors tag their papers with any fields they feel are relevant.

    This means authors can tag their papers with fields higher up the hierarchy to cast a wider review net, or go lower down the hierarchy to cast a narrower one. It also enables cross-discipline review and collaboration very easily - authors simply tag their papers with the fields of both disciplines.

    The review interface combines aspects of Github PRs and Google docs.

    Review is split into two phases: pre-publish "review", focused on giving authors constructive critical feedback to help them improve their work, and post-publish "refereeing", which looks more like traditional peer review and is the primary mechanism through which new authors gain reputation.

    The whole site is built around the idea that scholars are working to collectively build the body of human knowledge and make it the best they can.

    You can see the production site here: https://peer-review.io

    You're welcome to explore the staging site and treat it as a sandbox, if you'd like: https://staging.peer-review.io

    It's open source: https://github.com/danielbingham/peerreview

    I'm doing all the development in the open as much as possible. If it gains traction, the plan is to form a non-profit around it and explore whether a web platform can be governed democratically as a multi-stakeholder cooperative and if we can solve some of the issues around large centralized platforms through that governance approach.
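
    The hierarchical field/reputation idea from this post (reputation earned in a child field also accrues in its parents, so tagging a paper higher in the tree casts a wider review net) can be sketched roughly as below. The field tree and numbers are made up for illustration; this is not the project's actual data model.

    ```python
    # Hypothetical sketch of hierarchical reputation propagation; the field
    # tree and amounts are illustrative only.
    FIELD_PARENTS = {
        "machine-learning": "computer-science",
        "computer-science": "science",
        "ecology": "biology",
        "biology": "science",
    }

    def award_reputation(rep: dict[str, int], field: str, amount: int) -> None:
        """Reputation earned in a field also accrues to every ancestor field."""
        while field is not None:
            rep[field] = rep.get(field, 0) + amount
            field = FIELD_PARENTS.get(field)

    rep: dict[str, int] = {}
    award_reputation(rep, "machine-learning", 50)
    print(rep)  # {'machine-learning': 50, 'computer-science': 50, 'science': 50}
    # A paper tagged with "science" can therefore draw reviewers from any
    # sub-field, while tagging only "machine-learning" casts a narrower net.
    ```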

  • Ask HN: What interesting problems are you working on? ( 2022 Edition)
    29 projects | news.ycombinator.com | 16 Sep 2022
    I'm working open source and would welcome contributions! (https://github.com/danielbingham/peerreview)

    (Although, the first contribution would probably need to be getting the local development setup working again in a new context... I've been going fast and taking on some tech debt that will need to be paid down soon.)

matano

Posts with mentions or reviews of matano. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-21.
  • Cisco Acquires Splunk
    5 projects | news.ycombinator.com | 21 Sep 2023
    sorry, that's https://matano.dev
  • Using rust for DE activities?
    2 projects | /r/dataengineering | 26 Jun 2023
  • Kali Linux 2023.1 introduces 'Purple' distro for defensive security
    3 projects | /r/netsec | 14 Mar 2023
    Matano is very promising, and it supports SQL for queries. I suspect they are going to eat Panther's lunch soon.
  • Looking to centralize storage of logs from cisco, linux, windows, aws....
    1 project | /r/cybersecurity | 28 Feb 2023
    If you aren't planning to query these logs, but just need a place to put them, then look at something like S3. If you have the skills to write SQL, or Python, then look at matano.dev as a data lake solution because you could still query these logs if you wanted.
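
    Since the comment mentions querying the lake with SQL or Python, here is a rough sketch of running a query against logs stored in S3 with Amazon Athena via boto3. The database name, table name, and results bucket are placeholders, not Matano defaults.

    ```python
    # Rough sketch: query an S3-backed log lake with Amazon Athena via boto3.
    # "security_lake", "cloudtrail_logs", and the results bucket are placeholders.
    import time
    import boto3

    athena = boto3.client("athena")

    started = athena.start_query_execution(
        QueryString="SELECT * FROM cloudtrail_logs LIMIT 10",
        QueryExecutionContext={"Database": "security_lake"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    qid = started["QueryExecutionId"]

    # Poll until the query finishes, then print the result rows.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state not in ("QUEUED", "RUNNING"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])
    ```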
  • A Software as a Service (SaaS) log collection framework
    2 projects | news.ycombinator.com | 18 Feb 2023
    This is nice! In Matano, we take a similar approach but with Rust + serverless for pulling SaaS logs (https://github.com/matanolabs/matano/tree/main/lib/rust/log_...) and storing them in a data lake.
  • I just added 10 new AWS log sources to our open source project for security logs
    1 project | /r/aws | 1 Feb 2023
    Hi guys, I'm the maintainer of the Matano open source project. Matano is an open source SIEM alternative that lets you ingest and analyze petabytes of security logs in a security data lake in your AWS account.
  • Launch HN: Matano (YC W23) – Open-Source Security Lake Platform (SIEM) for AWS
    2 projects | news.ycombinator.com | 24 Jan 2023
    Hi HN! We’re Shaeq and Samrose, co-founders of Matano (https://matano.dev). Matano is a high-scale, low-cost alternative to traditional SIEM (e.g. Splunk, Elastic) built around a vendor-agnostic security data lake that deploys to your AWS account.

    Don’t worry — we’ll explain all this jargon in a second.

    SIEM stands for “Security Information and Event Management” and refers to log management tools used by security teams to detect threats from an organization's security logs (network, host, cloud, SaaS audit logs, etc.) and send alerts about them. Security engineers write detection rules inside the SIEM as queries to detect suspicious activity and create alerts. For example, a security engineer could write a detection rule that checks the fields in each CloudTrail log and creates an alert whenever an S3 bucket is modified with public access, to prevent data exfiltration.
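
    As a generic sketch of that CloudTrail example (alerting when an S3 bucket is opened to public access), a Python detection might look like the function below. The detect(record) shape and the field names are assumptions for illustration and are not guaranteed to match Matano's actual detection interface.

    ```python
    # Hypothetical detection: flag CloudTrail events that appear to grant
    # public access to an S3 bucket. Field names are illustrative assumptions.
    import json

    PUBLIC_ACCESS_EVENTS = {"PutBucketAcl", "PutBucketPolicy", "DeleteBucketPublicAccessBlock"}
    ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

    def detect(record: dict) -> bool:
        """Return True (raise an alert) for bucket-permission events that
        reference the public 'AllUsers' grantee."""
        if record.get("event", {}).get("action") not in PUBLIC_ACCESS_EVENTS:
            return False
        # Crude check: does the event payload mention the public grantee at all?
        return ALL_USERS_URI in json.dumps(record)
    ```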

    Traditional SIEM tools (e.g. Splunk, Elastic) used to analyze security data are difficult to manage for security teams on the cloud. Most don’t scale because they are built on top of a NoSQL database or search engine like Elasticsearch. And they are expensive — the enterprise SIEM vendors have costly ingest-based licenses. Since security data from SaaS and cloud environments can exceed hundreds of terabytes, teams are left with unsatisfactory options: either not collect some data, leave some data unprocessed, pay exorbitant fees to an enterprise vendor, or build their own large-scale solution for data storage (aka “data lake”).

    Companies like Apple, HSBC, and Brex take the latter approach: they build their own security data lakes to analyze their security data without breaking the bank. “Data lake” is jargon for heterogeneous data that is too large to be kept in a standard database and is analyzed directly from object storage like S3. A “security data lake” is a repository of security logs parsed and normalized into a common structure and stored in object storage for cost-effective analysis. Building your own data lake is a fine option if you’re big enough to justify the cost — but most companies can’t afford it.

    Then there’s the vendor lock-in issue. SIEM vendors store data in proprietary formats that make it difficult to use outside of their ecosystem. Even with "next-gen" products that leverage data lake technology, it's nearly impossible to swap out your data analytics stack or migrate your security data to another tool because of a tight coupling of systems designed to keep you locked in.

    Security programs also suffer because of poor data quality. Most SIEMs today are built as search engines or databases that query unstructured/semi-structured logs. This requires you to heavily index data upfront which is inefficient, expensive and makes it hard to analyze months of data. Writing detection rules requires analysts to use vendor-specific DSLs that lack the flexibility to model complex attacker behaviors. Without structured and normalized data, it is difficult to correlate across data sources and build effective rules that don’t create many false positive alerts.

    While the cybersecurity industry has been stuck dealing with these legacy architectures, the data analytics industry has seen a ton of innovation through open-source initiatives such as Apache Iceberg, Parquet, and Arrow, delivering massive cost savings and performance breakthroughs.

    We encountered this problem when building out petabyte-scale data platforms at Amazon and Duo Security. We realized that most security teams don't have the resources to build a security data lake in-house or take advantage of modern analytics tools, so they’re stuck with legacy SIEM tools that predate the cloud.

    We quit our jobs at AWS and started Matano to close the gap between these two worlds by building an OSS platform that helps security teams leverage the modern data stack (e.g. Spark, Athena, Snowflake) and efficiently analyze security data from all the disparate sources across an organization.

    Matano lets you ingest petabytes of security and log data from various sources, store and query them in an open data lake, and create Python detections as code for realtime alerting.

    Matano works by normalizing unstructured security logs into a structured realtime data lake in your AWS account. All data is stored in optimized Parquet files in S3 object storage for cost-effective retention and analysis at petabyte scale. To prevent vendor lock-in, Matano uses Apache Iceberg, a new open table format that lets you bring your own analytics stack (Athena, Snowflake, Spark, etc.) and query your data from different tools without having to copy any data. By normalizing fields according to the Elastic Common Schema (ECS), we help you easily search for indicators across your data lake, pivot on common fields, and write detection rules that are agnostic to vendor formats.
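
    As a toy illustration of the ECS normalization step described above, a raw CloudTrail record might be mapped onto common ECS field names roughly like this. The mapping below is invented for illustration and is not Matano's actual transform (Matano does this with VRL at ingest time).

    ```python
    # Toy ECS-style normalization of a CloudTrail record; the mapping is an
    # illustrative assumption, not Matano's actual ingest transform.
    def to_ecs(raw: dict) -> dict:
        return {
            "@timestamp": raw["eventTime"],
            "event": {"action": raw["eventName"], "provider": raw["eventSource"]},
            "source": {"ip": raw.get("sourceIPAddress")},
            "user": {"name": raw.get("userIdentity", {}).get("userName")},
            "cloud": {"account": {"id": raw.get("recipientAccountId")}},
        }

    raw_cloudtrail = {
        "eventTime": "2023-01-24T12:00:00Z",
        "eventName": "PutBucketAcl",
        "eventSource": "s3.amazonaws.com",
        "sourceIPAddress": "203.0.113.7",
        "userIdentity": {"userName": "alice"},
        "recipientAccountId": "123456789012",
    }
    print(to_ecs(raw_cloudtrail))
    ```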

    We support native integrations to pull security logs from popular SaaS, Cloud, Host, and Network sources and custom JSON/CSV/Text log sources. Matano includes a built-in log transformation pipeline that lets you easily parse and transform logs at ingest time using Vector Remap Language (VRL) without needing additional tools (e.g. Logstash, Cribl).

    Matano uses a detection-as-code approach which lets you use Python to implement realtime alerting on your log data, and lets you use standard dev practices by managing rules in Git (test, code review, audit). Advanced detections that correlate across events and alerts can be written using SQL and executed on a scheduled basis.
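
    The scheduled SQL correlations mentioned above might, in spirit, look like the query below (many failed console logins by one user within an hour). The table and column names are placeholders rather than Matano's actual schema.

    ```python
    # Hypothetical scheduled correlation rule expressed as SQL; table and
    # column names are placeholders, not Matano's actual schema.
    FAILED_LOGIN_SPIKE = """
    SELECT user_name, count(*) AS failures
    FROM cloudtrail
    WHERE event_action = 'ConsoleLogin'
      AND event_outcome = 'failure'
      AND ts > current_timestamp - interval '1' hour
    GROUP BY user_name
    HAVING count(*) >= 10
    """
    ```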

    We built Matano to be fully serverless using technologies like Lambda, S3, and SQS for elastic horizontal scaling. We use Rust and Apache Arrow for high performance. Matano works well with your existing data stack, allowing you to plug in tools like Tableau, Grafana, Metabase, or Quicksight for visualization and use query engines like Snowflake, Athena, or Trino for analysis.

    Matano is free and open source software licensed under the Apache-2.0 license. Our use of open table and common schema standards gives you full ownership of your security data in a vendor neutral format. We plan on monetizing by offering a cloud product that includes enterprise and collaborative features to be able to use Matano as a complete replacement to SIEM.

    If you're interested to learn more, check out our docs (https://matano.dev/docs), GitHub repository (https://github.com/matanolabs/matano), or visit our website (https://matano.dev).

    We’d love to hear about your experiences with SIEM, security data tooling, and anything you’d like to share!

  • Any recommendations for cloud SIEM? Our company is moving to cloud SIEM. Hope you can share the pros and cons. Any references are highly appreciated. Thank you in advance
    1 project | /r/SIEM | 8 Jan 2023
    If you're interested in an open source SIEM option for AWS, check out a project I've been working on called Matano: https://github.com/matanolabs/matano
  • matano: Open source cloud-native security lake platform (SIEM alternative) for threat hunting, detection & response, and cybersecurity analytics at petabyte scale on AWS 🦀
    1 project | /r/blueteamsec | 30 Dec 2022
  • Extending Python with Rust via PyO3
    1 project | /r/rust | 27 Dec 2022

What are some alternatives?

When comparing peerreview and matano you can also consider the following projects:

reals - A lightweight python3 library for arithmetic with real numbers.

clickhouse-operator - Altinity Kubernetes Operator for ClickHouse creates, configures and manages ClickHouse clusters running on Kubernetes

typst - A new markup-based typesetting system that is powerful and easy to learn.

coldsnap - A command line interface for Amazon EBS snapshots

danielBingham

ebook-reader-dict - Finally decent dictionaries based on Wiktionary for your beloved eBook reader.

KeenWrite - Free, open-source, cross-platform desktop Markdown text editor with live preview, string interpolation, and math.

Benthos - Fancy stream processing made operationally mundane

tone - tone is a cross platform audio tagger and metadata editor to dump and modify metadata for a wide variety of formats, including mp3, m4b, flac and more. It has no dependencies and can be downloaded as single binary for Windows, macOS, Linux and other common platforms.

Jocko - Kafka implemented in Golang with built-in coordination (No ZK dep, single binary install, Cloud Native)

beets - music library manager and MusicBrainz tagger

aws-security-survival-kit - Bare minimum AWS Security Alerting and Configuration