dremio-oss
| | kaggle-environments | dremio-oss |
|---|---|---|
| Mentions | 55 | 8 |
| Stars | 273 | 1,301 |
| Stars growth | 1.5% | 1.2% |
| Activity | 6.6 | 4.0 |
| Latest commit | about 2 months ago | 8 days ago |
| Language | Jupyter Notebook | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kaggle-environments
- Data Science Roadmap with Free Study Material
- Help needed! My first hackathon
If you are interested in Data Science, you may want to look at Kaggle competitions. https://www.kaggle.com/competitions
- What's a statistical/research methodology that's not usually taught in grad programs that you think more IOs should be aware of?
- Freaking out about how I’m inexperienced to land an internship and eventually a job
Secondly, if you feel like you do not have enough skills or enough practice answering problem statements, there are a lot of good websites where you can find interesting projects. I would recommend participating in some Kaggle competitions, or downloading some free Google datasets and playing with them.
- Capitalism provides half-assed solutions to extinction-level problems caused by capitalism
For reference: Kaggle is a Google product. You can see the list of current competitions here.
- Where can neural networks take me? - Semi-existential crisis
- What Can I Do With My Time as a Substitute for Strategy Computer Games?
You could try Kaggle competitions, or participate in forecasting markets (as you stated). You don't need any specific skill set to be a forecaster: the rules of the bet are stipulated, and from there it's just based on your ability to predict the outcome. You could also try your hand at investing in the stock market, or make money betting on sports games. If you're very good at this stuff, I'm sure you can make a lot of money doing it. The thing to keep in mind is that video games are generally much, much easier than real life.
- What is the best advanced professional certification for Data Science/ML/DL/MLOps?
As to the specifics of your projects, that's up to you. Try browsing Kaggle; check out some of the work we have on The Pudding; check out some journalism examples to see what you can try to build on or improve.
- Suggestions for projects on kaggle for cv?
- Hi! I'm doing research on AI innovation. Does anybody know any specific platform where I can learn/understand and get case studies or ongoing projects that companies are implementing? Thanks for your help!
You might want to look at Kaggle competitions.
dremio-oss
- What is the separation of storage and compute in data platforms and why does it matter?
Dremio - Dremio is a data lakehouse based on the open-source Apache Iceberg table format. It offers different compute instances to process data that lives in your S3 bucket. You pay for S3 storage independently.
- What is the Dremio query engine?
Dremio core is actually fully open source: https://github.com/dremio/dremio-oss
- Q – Run SQL Directly on CSV or TSV Files
I have been using Dremio to query large volumes of CSV files: https://docs.dremio.com/software/data-sources/files-and-dire...
That said, having them in some columnar format is much better for fast responses.
GitHub: https://github.com/dremio/dremio-oss
- Hands-On Introduction to Apache Iceberg - Data Lakehouse Engineering
As a Developer Advocate for Dremio, I spend a lot of time researching technology and best practices around engineering data lakehouses, and I share what I learn through content for Subsurface - The Data Lakehouse Community. One of the major topics I've been diving deep into is data lakehouse table formats, which allow you to take the files on your data lake and group them into tables that data processing engines like Dremio can operate on.
- Introduction to The World of Data (OLTP, OLAP, Data Warehouses, Data Lakes and more)
Hearing about all these components sounds great, but nobody wants to set up and configure each of them separately. What everyone wants is a platform that brings it all together in an easy-to-use package, and that platform is Dremio. With Dremio you can work with the data directly from your data lake. No copies, easy access, high performance.
- Data Lakehouse and Delta Lake
And as u/pych_phd said, it's not just Databricks, Snowflake and Azure who make these claims; AWS, GCP, Dremio, and I'm sure many others do too.
- Data Science Competition
Dremio
- Build your own “data lake” for reporting purposes
For my home projects I generate Parquet files (columnar and very well suited for DW-like queries) with pyarrow, use https://github.com/dremio/dremio-oss (https://www.dremio.com/on-prem/) to query them on the lake (MinIO, local disk, or S3), and use Apache Superset for quick charts or dashboards.
What are some alternatives?
CKAN - CKAN is an open-source DMS (data management system) for powering data hubs and data portals. CKAN makes it easy to publish, share and use data. It powers catalog.data.gov, open.canada.ca/data, data.humdata.org among many other sites.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
presto - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io) [Moved to: https://github.com/trinodb/trino]
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
docarray - Represent, send, store and search multimodal data
Greenplum - Greenplum Database - Massively Parallel PostgreSQL for Analytics. An open-source massively parallel data platform for analytics, machine learning and AI.
datasci-ctf - A capture-the-flag exercise based on data analysis challenges
Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Rakam - 📈 Collect customer event data from your apps. (Note that this project only includes the API collector, not the visualization platform)