bbox-visualizer
nuscenes-devkit
| | bbox-visualizer | nuscenes-devkit |
|---|---|---|
| Mentions | 2 | 4 |
| Stars | 374 | 2,111 |
| Growth | - | 3.9% |
| Activity | 4.8 | 5.1 |
| Latest commit | 2 months ago | 4 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bbox-visualizer
-
Community mingling live event, autonomous driving lecture, job opening, meet the member and more (Announcements 04.03.2021)
Meet the member - Shoumik Sharar Chowdhury. Shoumik and I have had several talks over the past months; he built the bbox-visualizer project on GitHub, a stand-alone package that lets researchers draw bounding boxes and label them easily. (The blog post)
-
Meet the member - Shoumik Sharar Chowdhury
Another project I've worked on is bbox-visualizer. It lets researchers draw bounding boxes and label them easily with a stand-alone package. The code is very accessible, so I would encourage any open-source enthusiast to contribute to the project. It is also a good place to start for beginners who are just getting into computer vision and open source.
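The core of what such a package does can be sketched in a few lines of NumPy. This is not bbox-visualizer's actual implementation (which builds on OpenCV drawing primitives), just an illustration of drawing a box outline on an image, assuming the common `(x_min, y_min, x_max, y_max)` pixel-coordinate convention for boxes:

```python
import numpy as np

def draw_box(img, bbox, color=(0, 255, 0), thickness=2):
    """Draw a rectangular outline on an HxWx3 uint8 image.

    bbox is assumed to be (x_min, y_min, x_max, y_max) in pixels.
    A label would typically be rendered just above the top edge.
    """
    x1, y1, x2, y2 = bbox
    img[y1:y1 + thickness, x1:x2] = color  # top edge
    img[y2 - thickness:y2, x1:x2] = color  # bottom edge
    img[y1:y2, x1:x1 + thickness] = color  # left edge
    img[y1:y2, x2 - thickness:x2] = color  # right edge
    return img

img = np.zeros((100, 100, 3), dtype=np.uint8)
img = draw_box(img, (10, 20, 60, 80))
```

The real package wraps this kind of logic (plus label text, opacity, and multi-box helpers) behind a small API so annotation code stays to one or two calls per box.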
nuscenes-devkit
-
Projecting Pointcloud/depth into RGB image (instead of giving color to a pointcloud)
This code might be helpful: https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/nuscenes.py#L863
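The linked devkit code chains several ego-pose and sensor-calibration transforms before the final projection; once the points are in the camera frame, the core step is a standard pinhole projection. A minimal sketch of that last step, assuming points already expressed in camera coordinates and a 3x3 intrinsic matrix `K` (both names are illustrative, not the devkit's API):

```python
import numpy as np

def project_points(points_cam, K, img_w, img_h):
    """Project 3D points (Nx3, already in the camera frame) onto the
    image plane with a pinhole intrinsic matrix K (3x3).

    Returns pixel coordinates (Mx2), their depths (M,), and a boolean
    mask over the input marking points that land inside the image.
    """
    depths = points_cam[:, 2]
    uv = (K @ points_cam.T).T          # homogeneous image coordinates, N x 3
    uv = uv[:, :2] / depths[:, None]   # perspective divide by depth
    # Keep points in front of the camera and inside the frame.
    mask = ((depths > 0.1)
            & (uv[:, 0] >= 0) & (uv[:, 0] < img_w)
            & (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    return uv[mask], depths[mask], mask

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0],    # straight ahead -> image center
                [1.0, 0.0, 10.0],    # slightly to the right
                [0.0, 0.0, -5.0]])   # behind the camera -> dropped
uv, d, mask = project_points(pts, K, 640, 480)
```

The returned depths are what you would splat into the RGB image (e.g. as colored dots) to get a sparse depth overlay rather than a colored pointcloud.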
-
Teaching cars to see at scale - Computer Vision at Motional - Dr. Holger Caesar (Author of nuScenes and COCO-Stuff datasets) - Link to zoom lecture by the author in comments
nuScenes: A multimodal dataset for autonomous driving (CVPR 2020) arxiv: https://arxiv.org/abs/1903.11027 git: https://github.com/nutonomy/nuscenes-devkit
-
Community mingling live event, autonomous driving lecture, job opening, meet the member and more (Announcements 04.03.2021)
nuScenes: A multimodal dataset for autonomous driving (CVPR 2020) - git
-
[R] Teaching cars to see at scale - Dr. Holger Caesar (Author of nuScenes and COCO-Stuff datasets) - Link to zoom lecture by the author in comments
nuScenes: A multimodal dataset for autonomous driving (CVPR 2020) arxiv: https://arxiv.org/abs/1903.11027 git: https://github.com/nutonomy/nuscenes-devkit
What are some alternatives?
coco-viewer - Minimalistic COCO Dataset Viewer in Tkinter
second.pytorch - PointPillars for KITTI object detection
Unsupervised-Attention-guided-Image-to-Image-Translation - Unsupervised Attention-Guided Image to Image Translation
diffgram - The AI Datastore for Schemas, BLOBs, and Predictions. Use with your apps or integrate built-in Human Supervision, Data Workflow, and UI Catalog to get the most value out of your AI Data.
magsac - The MAGSAC algorithm for robust model fitting without using an inlier-outlier threshold
painting - Implementation of PointPainting
globox - A package to read and convert object detection datasets (COCO, YOLO, PascalVOC, LabelMe, CVAT, OpenImage, ...) and evaluate them with COCO and PascalVOC metrics.
graph-cut-ransac - The Graph-Cut RANSAC algorithm proposed in paper: Daniel Barath and Jiri Matas; Graph-Cut RANSAC, Conference on Computer Vision and Pattern Recognition, 2018. It is available at http://openaccess.thecvf.com/content_cvpr_2018/papers/Barath_Graph-Cut_RANSAC_CVPR_2018_paper.pdf
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement