| | vision_transformer | nerfstudio |
|---|---|---|
| Mentions | 7 | 10 |
| Stars | 9,287 | 8,533 |
| Growth | 2.2% | 2.2% |
| Activity | 5.5 | 9.6 |
| Last commit | about 2 months ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vision_transformer
- Can I use CLIP to tag my picture collection?
And one last thing, should I even be thinking of using CLIP for these tasks when Google has released a better model here: https://github.com/google-research/vision_transformer/blob/main/model_cards/lit.md
- When the client's management is happy but their dev team is a pain
Google's vision transformers are type hinted.
- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
We’re going to look at a CLIP model trained on a broad multilingual dataset: xlm-roberta-base-ViT-B-32, which pairs the ViT-B/32 image encoder with the XLM-RoBERTa multilingual language model. Both components are pre-trained:
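A minimal sketch of the retrieval step such a model enables: once images and a query are embedded into the shared space, ranking is just cosine similarity. The vectors below are random stand-ins for real CLIP embeddings; the function itself is the generic ranking step, not nerfstudio- or OpenCLIP-specific code.

```python
import numpy as np

def cosine_rank(query_emb, image_embs):
    # Normalize everything to unit length so dot products equal cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ q                 # one similarity score per image
    return np.argsort(-sims), sims  # indices sorted best-match-first

# Stand-in embeddings: 4 "images" in a 512-dim space (ViT-B/32's output width),
# and a "query" deliberately placed close to image 2.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(4, 512))
query_emb = image_embs[2] + 0.1 * rng.normal(size=512)

order, sims = cosine_rank(query_emb, image_embs)
print(order[0])  # prints 2: the nearest image wins
```

In a real pipeline the only change is where the vectors come from: the text embedding from the XLM-RoBERTa side, the image embeddings from the ViT-B/32 side, pre-computed over the collection.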
- [R] How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
JAX Code: https://github.com/google-research/vision_transformer
- [D] (Paper Overview) MLP-Mixer: An all-MLP Architecture for Vision
- [P] Animesion: a framework for anime (and related) character recognition. It uses Vision Transformers trained on a subset of Danbooru2018, which we rebranded as DAF:re, and can classify a given image into one of more than 3,000 characters! Source code and checkpoints included.
For this project I used the pretrained models released by Google in JAX, via this particular PyTorch custom implementation. Those were pretrained on ImageNet-21k, with 14M images across 21K classes. Then I fine-tuned on two datasets: one with 15K images and 170 characters, and one with almost 500K images and 3K characters.
- Short term memory solutions for video tasks?
nerfstudio
- Smerf: Streamable Memory Efficient Radiance Fields
This is the right paper for what you’re describing. Instead of one big model, they train several smaller ones, one per region of the scene, which keeps rendering fast for large scenes.
This is similar to Block-NeRF [0], in their project page they show some videos of what you’re asking.
As for an easy way of doing this, nothing out-of-the-box. You can keep an eye on nerfstudio [1], and if you feel brave you could implement this paper and make a PR!
[0] https://waymo.com/intl/es/research/block-nerf/
[1] https://github.com/nerfstudio-project/nerfstudio
- Researchers create open-source platform for Neural Radiance Field development
- First attempt at photogrammetry using a DJI Mini 2 and Metashape: 460 images, captured manually. What did I do wrong? What can I do to improve it? I'd appreciate advice of all kinds as a newbie.
Try rendering NeRFs with your footage; you're going to love the result, and NeRFs are pretty robust to reflections. You can feed your Metashape solve into nerfstudio: https://github.com/nerfstudio-project/nerfstudio
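If the Metashape alignment already exists, the hand-off into nerfstudio might look roughly like this. All paths are placeholders, and the exact flags can vary between nerfstudio versions, so check `ns-process-data metashape --help` before relying on them:

```shell
# Convert a Metashape camera solve (exported via File > Export > Export Cameras
# as an XML file) into nerfstudio's dataset format. Paths are placeholders.
ns-process-data metashape --data ./images --xml ./cameras.xml --output-dir ./my-scene

# Train a NeRF on the processed data; this also launches the web viewer.
ns-train nerfacto --data ./my-scene
```

Reusing the Metashape solve skips nerfstudio's own COLMAP camera-registration step, which is the slow part of the pipeline for a 460-image capture.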
- What is the best way to create a dataset for NeRF?
Beyond these tips, I don't have much. There's lots of research on improving the quality of solves in the software itself. I'm hoping these improvements get added to instant-ngp, since it's fast and free, but it is research software, not a product, so we'll see. Another thing to look at is nerfstudio. It can use instant-ngp as a solver, but there are others. I tried it briefly but couldn't figure out how it worked in the short time I spent with it. I hope to get back to it.
- Nerfstudio – A collaboration friendly studio for NeRFs
- When the client's management is happy but their dev team is a pain
- A collaboration friendly studio for NeRFs
- NeRF ➜ point cloud export — now available via nerfstudio
nerf.studio | github | discord
- Show HN: A collaboration friendly studio for NeRFs
What are some alternatives?
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
multinerf - A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
ImageNet21K - Official PyTorch implementation of "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
TorchSharp - A .NET library that provides access to the library that powers PyTorch.
Fashion12K_german_queries
sdfstudio - A Unified Framework for Surface Reconstruction
smerf-3d
fashion-200k - Fashion 200K dataset used in paper "Automatic Spatially-aware Fashion Concept Discovery."
kaolin-wisp - NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).
typeshed - Collection of library stubs for Python, with static types
CIPS-3D - 3D-aware GANs based on NeRF (arXiv).