goofys VS sgr

Compare goofys vs sgr and see what their differences are.

                    goofys                  sgr
Mentions            16                      22
Stars               5,037                   326
Growth              -                       0.6%
Activity            0.0                     5.4
Last commit         2 months ago            7 months ago
Language            Go                      Python
License             Apache License 2.0      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

goofys

Posts with mentions or reviews of goofys. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-19.
  • Is Posix Outdated?
    3 projects | news.ycombinator.com | 19 Oct 2023
    The author needs to ask themselves: in this cloud technology stack, is there POSIX involved somewhere lower down, where I can't access it? The answer is, of course, "yes". The sort of cloud storage systems described all run on top of POSIX APIs. They provide convenience (cost efficiency is more debatable) compared to the POSIX alternative, but that's because they exist at an entirely different conceptual layer (hence the presence of POSIX anyway, just buried).

    Your point, that POSIX is actually there but hidden (visible to the low-level Amazon employees building the S3 service, invisible to S3 end customers), is true, but it isn't the point of the article. The author is saying there are motivations for a POSIX-like API that is also visible to the end user.

    So your explanation of the stack looks like 2 layers: POSIX API <-- AWS S3 built on top of it

    The author's essay is actually talking about 3 layers: POSIX <-- AWS S3 <-- POSIX

    That's why the blog post includes the following links to POSIX-on-top-of-S3 projects:

    https://github.com/s3fs-fuse/s3fs-fuse

    https://github.com/kahing/goofys

    https://www.cuno.io/

  • AWS Announces Open Source Mountpoint for Amazon S3
    4 projects | news.ycombinator.com | 26 Mar 2023
    How is this different than these other solutions?

    https://github.com/kahing/goofys

    https://github.com/s3fs-fuse/s3fs-fuse

  • Introducing Mountpoint for Amazon S3 - A file client that translates local file system API calls to S3 object API calls like GET and LIST.
    4 projects | /r/aws | 14 Mar 2023
    But now I ask: why not s3fs? Is it the GPL licensing? Or even goofys, which also has Apache 2.0 licensing and seems to hit similar goals (not fully POSIX compliant)? Why build your own?
  • Merge my S3 with Mac Finder Folder
    3 projects | /r/aws | 12 Nov 2022
  • Migrating instance to AWS GovCloud
    1 project | /r/aws | 1 Nov 2022
    If your 20TB is in S3, use a staging box with goofys (https://github.com/kahing/goofys) to mount the commercial S3 bucket(s) into a folder, then use s3 sync to copy to your bucket(s) in GovCloud.
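As a sketch of that staging approach, assuming goofys and the AWS CLI are installed on the staging box and that a separate named profile holds the GovCloud credentials; the bucket names, mount point, and profile name below are placeholders:

```python
import subprocess
from pathlib import Path

# Placeholder names; adjust for your buckets and credential profiles.
COMMERCIAL_BUCKET = "my-commercial-bucket"
GOVCLOUD_BUCKET = "my-govcloud-bucket"
MOUNT_POINT = Path("/mnt/commercial-s3")

MOUNT_POINT.mkdir(parents=True, exist_ok=True)

# Mount the commercial bucket as a local folder. `goofys <bucket> <mountpoint>`
# is the basic invocation; it picks up the default AWS credential chain.
subprocess.run(["goofys", COMMERCIAL_BUCKET, str(MOUNT_POINT)], check=True)

# Copy the mounted tree into the GovCloud bucket with the AWS CLI,
# using a named profile that holds the GovCloud credentials.
subprocess.run(
    ["aws", "s3", "sync", str(MOUNT_POINT), f"s3://{GOVCLOUD_BUCKET}",
     "--profile", "govcloud"],
    check=True,
)

# Unmount the FUSE filesystem once the sync has finished (Linux).
subprocess.run(["fusermount", "-u", str(MOUNT_POINT)], check=True)
```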
  • How should I go about creating a program that holds various MP4 files?
    3 projects | /r/golang | 27 Aug 2022
  • Raft Consensus Animated
    2 projects | news.ycombinator.com | 16 Aug 2022
  • How do you manage large training datasets?
    1 project | /r/computervision | 2 Jun 2022
    So we would just need to change the dataloader function a bit to make this work. Did you try just mounting S3 using https://github.com/kahing/goofys? In that case we would not even need to change the dataloader code. Not sure about the performance, though.
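A minimal sketch of that idea, assuming the bucket is already mounted with goofys (as in the staging example above) and a PyTorch-style dataloader is in use; the mount point, directory layout, and image size here are hypothetical:

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms

MOUNT = Path("/mnt/dataset")  # goofys mount point (placeholder)

class MountedS3Images(Dataset):
    """Reads images straight off the goofys mount via ordinary file I/O."""

    def __init__(self, root: Path):
        self.paths = sorted(root.glob("**/*.jpg"))
        # Resize so the default collate function can batch the tensors.
        self.transform = transforms.Compose(
            [transforms.Resize((224, 224)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        image = self.transform(Image.open(path).convert("RGB"))
        label = path.parent.name  # e.g. class name encoded in the folder name
        return image, label

loader = DataLoader(MountedS3Images(MOUNT), batch_size=32, num_workers=4)
for images, labels in loader:
    ...  # training step; each file read is a GET against S3 under the hood
```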
  • Mount S3 Objects to Kubernetes Pods
    2 projects | dev.to | 31 Jan 2022
    We're using goofys as the mounting utility. It's a "high-performance, POSIX-ish Amazon S3 file system written in Go" based on FUSE (Filesystem in Userspace) technology.
  • What you gonna add to your selfhost stack this year?
    18 projects | /r/selfhosted | 2 Jan 2022
    I will probably experiment with https://github.com/kahing/goofys and https://litestream.io/ to make services easier to move between devices :) Also, I will continue working on https://synpse.net/ to make operations easier.

sgr

Posts with mentions or reviews of sgr. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-01.
  • Show HN: Loofi – Our AI-Powered SQL Query Builder
    1 project | news.ycombinator.com | 21 May 2023
  • Release engineering is exhausting so here's cargo-dist
    12 projects | news.ycombinator.com | 1 Feb 2023
    I wrote up the details of this in a PR [0] where I last dealt with it.

    [0] https://github.com/splitgraph/sgr/pull/656

  • Ask HN: Serverless SQLite or Closest DX to Cloudflare D1?
    2 projects | news.ycombinator.com | 2 Jan 2023
    This is the vision of what we're building at Splitgraph. [0] You might be most interested in our recent project Seafowl [1] which is an open-source analytical database optimized for running "at the edge," with cache-friendly semantics making it ideal for querying from Web applications. It's built in Rust using DataFusion and incorporates many of the lessons we've learned building the Data Delivery Network [2] for Splitgraph.

    [0] https://www.splitgraph.com

    [1] https://seafowl.io

    [2] https://www.splitgraph.com/connect

  • Postgres Auditing in 150 lines of SQL
    10 projects | news.ycombinator.com | 9 Mar 2022
    You might like what we're doing with Splitgraph. Our command line tool (sgr) installs an audit log into Postgres to track changes [0]. Then `sgr commit` can write these changes to delta-compressed objects [1], where each object is a columnar fragment of data, addressable by the LTHash of rows added/deleted by the fragment, and attached to metadata describing its index [2].

    I haven't explored sirix before, but at first glance it looks like we have some similar ideas — thanks for sharing, I'm excited to learn more, especially about its application of ZFS.

    [0] https://www.splitgraph.com/docs/working-with-data/tracking-c...

    [1] https://www.splitgraph.com/docs/concepts/objects

    [2] https://github.com/splitgraph/splitgraph/blob/master/splitgr...
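For context, a rough sketch of the workflow described above using the sgr command line tool, driven from Python here for consistency with the other examples. The repository and table names are made up, and the exact flags (`-s`, `-m`) should be checked against the sgr documentation:

```python
import subprocess

REPO = "example/weather"  # hypothetical repository name

def sgr(*args: str) -> None:
    """Run one sgr CLI command against the local Splitgraph engine."""
    subprocess.run(["sgr", *args], check=True)

# Create the repository; sgr sets up change tracking (the audit log
# mentioned above) for tables in the checked-out schema.
sgr("init", REPO)

# Make some changes through the engine's SQL interface.
sgr("sql", "-s", REPO, "CREATE TABLE readings (city text, temp_c numeric)")
sgr("sql", "-s", REPO, "INSERT INTO readings VALUES ('Berlin', 21.5)")

# Show pending (uncommitted) changes captured by the audit log.
sgr("diff", REPO)

# Snapshot the changes into an immutable, delta-compressed image.
sgr("commit", REPO, "-m", "Initial readings")
```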

  • The world of PostgreSQL wire compatibility
    3 projects | news.ycombinator.com | 10 Feb 2022
    Shameless plug, but your list is missing Splitgraph [0] :)

    We’ve been based on Postgres from the beginning, and although the backend is a bit more complex at this point, we’ve kept the wire protocol intact. We’re also heavily invested in FDWs, not only for federated queries (e.g. querying data at Snowflake – btw, you might enjoy our blog post on achieving a 100x speedup with aggregation pushdown), but also for queries on warehoused data stored as Splitgraph images. By keeping Postgres compatibility as our guiding constraint, we’ve been able to build a lot of functionality on top of just a few simple abstractions. The result is something akin to a magic Postgres database – you can connect dozens of live sources to it using FDW plugins, or you can ingest from hundreds of data sources using Airbyte connectors, ultimately storing the data as immutable Splitgraph images in object storage.

    As for the wire protocol, our implementation is heavily reliant on (a forked version of) PgBouncer. Basically, a query arrives, we parse it for references to tables (which look like Docker image tags), and the proxy layer performs whatever orchestration is necessary to satisfy the query. That could mean instantiating a foreign server to a saved connection, loading some data from object storage, or even lazily loading only the requisite data (we call this “layered querying” since it’s implemented similarly to AUFS). In the future, it could also mean delegating the query to a more specialized engine like Presto.

    Point is, by keeping the frontend intact, we’re able to retain compatibility with all Postgres clients, but we’re free to implement the backend in more scalable or domain specific ways. For example, we’re able to horizontally scale our query capacity by simply adding more “cache nodes” that perform the layered querying.

    We are definitely all-in on the Postgres wire protocol, and all the ecosystem compatibility that comes along with it. You can read our blog for more in depth discussions of this, but I don’t want to spam too many links here. :)

    [0] https://www.splitgraph.com

    [1] https://www.splitgraph.com/blog/postgresql-fdw-aggregation-p...

  • Scalable PostgreSQL Connection Pooler
    11 projects | news.ycombinator.com | 12 Nov 2021
    We are building a solution for this problem at Splitgraph [0] – it sounds like we could probably help with your use case. You can get it to work yourself with our open source code [1], but our (private beta, upcoming public) SaaS service will put all your schemas on a more scalable “data delivery network,” which, incidentally, happens to be implemented with PgBouncer + rewriting + ephemeral instances. In a local engine (just a Postgres DB managed by the Splitgraph client to add extra stuff), there is no PgBouncer, but we use Foreign Data Wrappers to accomplish the same.

    On Splitgraph, every dataset – and every version of every dataset – has an address. Think of it like tagged Docker images. The address either points to an immutable “data image” (in which case we can optionally download objects required to resolve a query on-the-fly, although loading up-front is possible too) or to a live data source (in which case we proxy directly to it via FDW translation). This simple idea of _addressable data products_ goes a long way – for example, it means that computing a diff is now as simple as joining across two tables (one with the previous version, one with the new).

    Please excuse the Frankenstein marketing site – we’re in the midst of a redesign / rework of the info architecture while we build out our SaaS product.

    Feel free to reach out if you’ve got questions. And if you have a business case, we have spots available in our private pilot. My email is in my profile – mention HN :)

    [0] https://www.splitgraph.com/connect

    [1] examples: https://github.com/splitgraph/splitgraph/tree/master/example...

  • Ask HN: How to get competitors to use our open source interop-protocol?
    4 projects | news.ycombinator.com | 4 Oct 2021
    Federated data sharing is the core use case of the magic Postgres database we’re building at Splitgraph [0]. We’d love to help you solve these problems! The ideas you’re describing are exactly what we want to achieve – data sharing should be as easy as changing a connection string in a SQL client. It sounds like your use case would be a good fit for what we’re building. If you’d like to learn more, please send me a note – email in profile.

    [0] https://www.splitgraph.com

  • Cloudera taken private for $5.3b, acquires Datacoral and Cazena
    2 projects | news.ycombinator.com | 1 Jun 2021
    The data industry continues to hype this idea of “multi-cloud,” but then the “modern data stack” is centralized around a single warehouse and nobody sees any irony in that.

    The big bet we’re making at Splitgraph [0] is that the next wave of data engineering will take a more decentralized, “data mesh” type approach to enterprise architecture. “Data gravity” really does exist - data is expensive to move, in terms of both cost and operational complexity. So instead of bringing the data to the query, why not bring the query to the data? All we need for that is a set of read-only credentials.

    Cloudera mentions they bought DataCoral to help with data integration and connectors. They’ve correctly identified the problem - data sprawl and fragmentation will inevitably grow - but I’m not sure they have the right solution.

    Data integration is important, but it’s a moving target, which is why it calls for a collaborative open source solution. This is why so many new startups, like Airbyte most recently, are coalescing around the Singer taps that Stitch left behind after its acquisition by Talend.

    We also support using Singer taps to ingest data into versioned Splitgraph images [1], so we’re excited to see more collaboration on maintenance of taps. For us it’s a useful feature, but it should be just that — a feature. Is there really a need to replicate all of your data before you can even query it? Or would you rather experiment by directly querying its source?

    [0] https://www.splitgraph.com

    [1] unreleased and undocumented atm, but it does work. We’re hiring, especially on the frontend if you want to help build the web UI. See profile.

  • Google Dataset Search
    1 project | news.ycombinator.com | 6 May 2021
    On the public DDN (data.splitgraph.com:5432), we enforce a (currently arbitrary) 10k row limit on responses. You can construct multiple queries using LIMIT and OFFSET, or you can run a local Splitgraph engine without a limit. We also have a private beta program if you want a managed or self-hosted deployment. And we are planning to ship some features for "export to csv" type use cases (potentially other output formats too).

    For live/external data, we proxy the query to the data source, so there is no theoretical data size limit except for any defined by the upstream.

    For snapshotted data, we store the data as fragments in object storage. Any size limit depends on the machine where Splitgraph's Postgres engine is running, and how you choose to materialize the data when downloading it from object storage. You can "check out" an entire image to materialize it locally, at which point it will be like any other Postgres schema. Or you can use "layered querying" which will return a result set while only materializing the fragments necessary to answer the query.

    Regarding ClickHouse, you could watch this presentation [0] my co-founder Artjoms gave at a recent ClickHouse meet-up on the topic of your question. We also have specific documentation for using the ClickHouse ODBC client with the DDN [1], as well as an example reference implementation. [2]

    [0] https://www.youtube.com/watch?v=44CDs7hJTho

    [1] https://www.splitgraph.com/connect

    [2] https://github.com/splitgraph/splitgraph/tree/master/example...
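As a sketch of paginating around that 10k row cap from a plain Postgres client (here psycopg2), using LIMIT/OFFSET as suggested above; the credentials, database name, and the repository/table reference are placeholders to be replaced with the values from your Splitgraph connection settings:

```python
import psycopg2

# Placeholder credentials: on the DDN these come from a Splitgraph API key/secret.
conn = psycopg2.connect(
    host="data.splitgraph.com",
    port=5432,
    dbname="ddn",          # database name as given in your connection string
    user="YOUR_API_KEY",
    password="YOUR_API_SECRET",
)

PAGE = 10_000  # stay at or below the DDN's per-response row cap
TABLE = '"some-namespace/some-repo:latest".some_table'  # placeholder reference

rows, offset = [], 0
with conn, conn.cursor() as cur:
    while True:
        # ORDER BY a stable key so LIMIT/OFFSET pages neither overlap nor skip rows.
        cur.execute(
            f"SELECT * FROM {TABLE} ORDER BY 1 LIMIT %s OFFSET %s",
            (PAGE, offset),
        )
        page = cur.fetchall()
        rows.extend(page)
        if len(page) < PAGE:
            break
        offset += PAGE

print(f"fetched {len(rows)} rows in pages of {PAGE}")
```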

  • Ask HN: Who is hiring? (April 2021)
    21 projects | news.ycombinator.com | 1 Apr 2021
    Splitgraph (https://www.splitgraph.com) | Remote | Full-time

    Splitgraph is reshaping how organizations interact with data. We provide a unified interface to discover and query data. In practice, this means we're building a data catalog (a web app) and query layer (implemented with the Postgres wire protocol).

    We're a seed-stage, venture-funded startup hiring our initial team. The two co-founders are looking to grow the team by adding multiple engineers across the stack. This is an opportunity to make a big impact on an agile team while working closely with the founders.

    Splitgraph is a remote-first organization. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

    Open positions:

    * Senior Software Engineer - Frontend. Responsible for the web stack, mainly involving Typescript, React, Next.js, Postgraphile, etc.

    * Senior Software Engineer - Backend. Responsible for a variety of core services, using Python, Poetry, Postgres, C, Lua, and a ton of other technologies.

    Learn more & apply: https://www.notion.so/splitgraph/Splitgraph-is-Hiring-25b421...

What are some alternatives?

When comparing goofys and sgr you can also consider the following projects:

s3fs-fuse - FUSE-based file system backed by Amazon S3

haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.

rclone - "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Azure Blob, Azure Files, Yandex Files

parabol - Free online agile retrospective meeting tool

gcsfuse - A user-space file system for interacting with Google Cloud Storage

Baserow - Open source no-code database and Airtable alternative. Create your own online database without technical experience. Performant with high volumes of data, can be self hosted and supports plugins

juicefs - JuiceFS is a distributed POSIX file system built on top of Redis and S3.

dremio-oss - Dremio - the missing link in modern data

catfs - Cache AnyThing filesystem written in Rust

django-pgviews - Fork of django-postgres that focuses on maintaining and improving support for Postgres SQL Views.

s3fs - S3 Filesystem

pgbouncer-fast-switchover - Adds query routing and rewriting extensions to pgbouncer