timeliner vs sql.js

| | timeliner | sql.js |
|---|---|---|
| Mentions | 5 | 43 |
| Stars | 3,550 | 12,234 |
| Growth | - | 0.8% |
| Activity | 4.0 | 6.5 |
| Latest commit | 4 months ago | 13 days ago |
| Language | Go | JavaScript |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
timeliner
-
I Ditched Google Photos
Heya! I'm the author of PhotoStructure, and my Google Photos account (before I started working on PhotoStructure) is about that size, too.
I wrote up some tips here: https://photostructure.com/faq/takeout/
This is what I did:
1. First, try to fetch all your Google Photos via Takeout in one archive. If that fails (like it did for me), try different-sized .tgz archives. I had to use the 10 GB option (using 50 GB caused an internal-to-Google error).
If that still fails, the last resort is to manually create by-year albums, shove all photos from that year into the album, and do a Takeout of just that album. Repeat as necessary for every year.
2. Install an app on your phone to *directly* upload the original photos and videos from your phone to your NAS/home server. I have several recommended apps here: https://photostructure.com/faq/how-do-i-safely-store-files/#...
At this point, you can still use Google Photos (for viewing and as a last-ditch backup), but your originals are safe (without all the Google Photos downsampling and metadata shenanigans), and you're free to use whatever self-hosted software you want (like PhotoStructure, but there are a ton of alternatives as well).
FWIW, I also tried this software: https://github.com/mholt/timeliner -- it does what it can, but the files you get via the API have a bunch of metadata stripped from them. I even had captured-at times get mangled on older photos.
-
Start Self Hosting
This is why I'm building Timelinize [1]. It's a follow-up to my open source Timeliner project [2], which can download all your digital life onto your own computer and project it onto a single timeline, across all data sources (text messages, social media sites, photos, location history, and more).
It's a little different from "self hosting," but it has a similar effect of bringing all your data home and putting it in your control.
The backend and underlying processing engine are functional and working very well; now I'm just getting the UI put together, so I hope to have something to share later this year.
[1]: https://twitter.com/timelinize (website coming eventually)
[2]: https://github.com/mholt/timeliner
-
Consider SQLite
Not a "big project/service" but a Go project that uses Sqlite is one of my own, Timeliner[1] and its successor, Timelinize[2] (still in development). Yeah the cgo dependency kinda sucks but you don't feel it in code, just compilation. And it easily manages Timeline databases of a million and more entries just fine.
[1]: https://github.com/mholt/timeliner
[2]: https://twitter.com/timelinize
-
Can you synchronise Google Photos to/from phones and computers bidirectionally?
This looks promising but might be a bit complicated for you: https://github.com/mholt/timeliner
-
What is the equivalent of "Apple removed the 3.5mm jack" for your favorite products?
I made Timeliner to download my Google Photos: https://github.com/mholt/timeliner -- requires some tech prowess for now, though.
sql.js
-
Show HN: Appendable – Index JSONL data and query via CDN
Hi HN! A friend and I were inspired by projects like https://github.com/sql-js/sql.js and the idea of querying files served over a CDN with HTTP range requests. We started thinking: what would a database designed specifically for this use case look like? So we started building one, and we landed on a functional prototype that we're pretty proud of!
With our prototype, Appendable, we're able to serve and query large (GB+) datasets by hosting them on a static file host like Amazon S3 or Cloudflare R2, without running a separate server or worrying about things like tail latency, replication, and connection pooling -- all of that is handled for us by the file host.
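The core mechanism here is just the HTTP Range header against a static file. A minimal sketch of that idea (the bucket URL, file name, and byte range are made up for illustration; this is not Appendable's actual API):

```js
// Inside an async function: fetch only the first 4 KiB of a large file
// from a static host that supports range requests (S3, R2, most CDNs).
const res = await fetch("https://example-bucket.r2.dev/data.jsonl", {
  headers: { Range: "bytes=0-4095" },
});
// A host that honors the Range header replies 206 Partial Content.
console.log(res.status); // 206
const chunk = await res.text(); // just those bytes, not the whole file
```

The idea, presumably, is that an index tells the client which byte ranges to request, so a query touches only a small fraction of a multi-gigabyte file.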
Additionally, one tenet we've been following is that Appendable won't touch your underlying data: your JSONL file is preserved, and we point at that data instead of consuming it into an Appendable-specific file format. This keeps your data yours and makes it easy to introspect: just open it up with your favorite editor, aka vim.
We're curious what you think! We're excited to build this out further, get the performance even better, and add features like pubsub. Everything is open source at https://github.com/kevmo314/appendable.
Kevin and Matthew
-
How to show CRUD projects on GitHub?
-
I made a website where you can use SQLite in your browser
My project is powered by sql.js; I recommend checking that out if you're interested - https://github.com/sql-js/sql.js/
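For anyone unfamiliar: sql.js is SQLite compiled to WebAssembly, so the whole database engine runs in the page. A minimal in-browser sketch (the locateFile CDN path follows the sql.js README; the table and data are made up):

```js
// Assumes sql-wasm.js has been loaded, which defines initSqlJs.
// Run inside an async function.
const SQL = await initSqlJs({
  locateFile: (file) => `https://sql.js.org/dist/${file}`,
});
const db = new SQL.Database(); // in-memory SQLite database
db.run("CREATE TABLE users (id INTEGER, name TEXT)");
db.run("INSERT INTO users VALUES (1, 'ada'), (2, 'grace')");
// exec returns an array of { columns, values } result sets.
const res = db.exec("SELECT name FROM users ORDER BY id");
console.log(res[0].values); // [['ada'], ['grace']]
```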
-
How to build an interactive way to learn SQL using Next.js and a database?
Maybe you can try an SQL database compiled to a WebAssembly module? Like this one, for example: https://github.com/sql-js/sql.js
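To sketch how that could look in a learning UI: a "Run" button just feeds the student's SQL to sql.js and renders whatever comes back (runQuery is a hypothetical helper name; db.exec and its error behavior are as documented by sql.js):

```js
// Hypothetical handler for a SQL-playground "Run" button.
// `db` is a sql.js Database instance created as in the sql.js docs.
function runQuery(db, userSql) {
  try {
    // exec accepts multiple statements and returns one result set per SELECT.
    const results = db.exec(userSql);
    return results.map(({ columns, values }) => ({ columns, rows: values }));
  } catch (err) {
    // sql.js throws ordinary JS exceptions for SQL errors, which makes it
    // easy to show learners their syntax mistakes.
    return { error: err.message };
  }
}
```

Since everything runs client-side, each student gets a throwaway database and there's no shared server to protect from destructive queries.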
-
Recommendations for data structure and storage
If you want persistence, then I would go with a database like Dexie, as it uses IndexedDB and has transactions. If you just want something in-memory, you could look at sql.js or something simple like lowdb.
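For a feel of the Dexie option, a minimal sketch (the database name, schema, and fields are invented for illustration; the Dexie calls themselves are from its documented API):

```js
import Dexie from "dexie";

// '++id' declares an auto-incrementing primary key; 'title' is indexed.
const db = new Dexie("notesDB"); // hypothetical database name
db.version(1).stores({ notes: "++id, title" });

// Inside an async function: writes go through IndexedDB transactions,
// so the data survives page reloads, unlike an in-memory store.
await db.notes.add({ title: "hello", body: "persists across sessions" });
const note = await db.notes.where("title").equals("hello").first();
```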
-
I have a large JSON object (~2GB), what's the best way to make a site that lets you search through it and display the results without crashing?
Not necessarily. You can host an HTML/JS/SQLite site on GitHub Pages for free: convert the JSON to a SQLite database offline, then query it from JS with sql.js.
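On the browser side, that pipeline might look like this (the file and table names are made up; loading sql.js follows its README). One caveat: sql.js keeps the whole database image in memory, so a ~2 GB file would likely need to be trimmed or split first.

```js
// Inside an async function: open a prebuilt SQLite file with sql.js.
const SQL = await initSqlJs({
  locateFile: (file) => `https://sql.js.org/dist/${file}`,
});
const buf = await fetch("/data.sqlite").then((r) => r.arrayBuffer());
const db = new SQL.Database(new Uint8Array(buf)); // open existing db image
// 'items' is a hypothetical table created offline from the JSON dump.
const hits = db.exec("SELECT * FROM items WHERE name LIKE '%foo%' LIMIT 20");
```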
-
New release of sql.js: https://sql.js.org/
-
Web project - help, because I don't know what I need :S
-
Learn Postgres at the Playground
-
Show HN: CSVFiddle – Query CSV files with DuckDB in the browser
Does it work with really large files? Like >100 MB or so. I was considering making something similar but with sql.js [1], but the problem with it is that it loads everything into memory, so I wasn't entirely sure how it would deal with larger workloads.
[1]: https://sql.js.org/#/
What are some alternatives?
CasaOS - A simple, easy-to-use, elegant open-source Personal Cloud system.
localForage - 💾 Offline storage, improved. Wraps IndexedDB, WebSQL, or localStorage using a simple but powerful API.
EverythingToolbar - Everything integration for the Windows taskbar. [Moved to: https://github.com/srwi/EverythingToolbar]
LokiJS - javascript embeddable / in-memory database
MarkdownSite - Create a website from a git repository in one click
PouchDB - 🐨 PouchDB is a pocket-sized database.
yunohost - YunoHost is an operating system aiming to simplify as much as possible the administration of a server. This repository corresponds to the core code, written mostly in Python and Bash.
WatermelonDB - 🍉 Reactive & asynchronous database for powerful React and React Native apps ⚡️
PowerToys - Windows system utilities to maximize productivity
DB.js - db.js is a wrapper for IndexedDB to make it easier to work against
PhotoPrism - AI-Powered Photos App for the Decentralized Web 🌈💎✨
litestream - Streaming replication for SQLite.