nvdiffrec vs transformers

| | nvdiffrec | transformers |
|---|---|---|
| Mentions | 13 | 176 |
| Stars | 2,053 | 125,369 |
| Growth | 1.1% | 1.7% |
| Activity | 3.2 | 10.0 |
| Last commit | 2 days ago | 1 day ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nvdiffrec
-
[D] Found top conference papers using test data for validation.
It depends on which area of CV research you're in. In NeRF view synthesis, it's pretty common to use test sets as validation sets. This has been done in several papers, including oral papers.
-
3D NeRF of a footstool
I think a paper called nerf2mesh came out recently, which I still have to evaluate (I haven't found time yet). There's also https://github.com/NVlabs/nvdiffrec/. And there's cool, easy-to-use research software like nerfstudio (at least compared to a lot of the raw code releases from research papers).
-
Fitting the texture from an image to the corresponding 3D model
For your use case, why is your model devoid of texture? You can try 3D scanning your desired object so that it comes with texture. Either that or use Nvidia's MoMA here to get your object from images.
- WHAT IS THE PROBLEM ???? HELP ME PLZ!!
- Blender animation augmented with AI
-
[R] BUNGEENeRF: progressive neural radiance field for extreme multi-scale scene rendering
Have you seen this project: https://github.com/NVlabs/nvdiffrec (I haven't tried it). Also, videos tend to be compressed; if you can get still images, you'll get higher-quality results with most photogrammetry software. Projects like Meshroom are probably better for this if you have high-quality pictures. There are also a few articles covering high-quality scans that can help.
-
Is NeRF photogrammetry? Please don't call me old, but in my mind this technology does not fit the strict definition.
You can generate an accurate mesh from a NeRF: https://github.com/NVlabs/nvdiffrec, and measure from that.
-
NeRF export options and pgrammetry application question
NeRF specifically generates a radiance field, but there is research code for turning that into a mesh (https://github.com/NVlabs/nvdiffrec), though it's not easy to use yet.
-
[D] nvdiffrec setup
Hi, I'm not sure if this is the right place, but I was looking into what NVIDIA's latest photo-to-model reconstruction looks like, from the repo here (the arXiv paper is linked there). There are a couple of neat examples, and after one dumb mistake, setup was pretty easy. However, the meshes are not converging, except very loosely, when using the examples from the paper.
-
nvdiffrec tutorial?
Hi everyone! I'm not sure this is the right place to ask, but I've been drooling over the cool ML and deep learning techniques showcased in videos. I was wondering if anyone could help me get something like nvdiffrec working with my own sample. https://github.com/NVlabs/nvdiffrec
transformers
-
AI enthusiasm #9 - A multilingual chatbot📣🈸
transformers is a package by Hugging Face that helps you interact with models on the HF Hub (GitHub).
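As a minimal sketch of that workflow, the pipeline API pulls a model from the Hub and runs inference in a few lines (the checkpoint name here is only an illustrative choice, not one from the original post):

```python
from transformers import pipeline

# Downloads the model and tokenizer from the Hugging Face Hub on first use.
# "t5-small" is just an example checkpoint; any compatible Hub model works.
translator = pipeline("translation_en_to_fr", model="t5-small")

print(translator("Hugging Face makes sharing models easy."))
# -> [{'translation_text': '...'}]
```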
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) and dm-haiku (https://github.com/google-deepmind/dm-haiku) were some of the best-developed communities in the JAX AI field (a minimal Flax sketch follows below). Perhaps the "trax" repo? https://github.com/google/trax
Some HF examples: https://github.com/huggingface/transformers/tree/main/exampl...
Sadly, it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details: https://github.com/xai-org/grok-1/blob/main/run.py
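For a feel of the Flax style mentioned above, here is a minimal flax.linen module; it is a generic sketch, not code from any of the linked repos:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int  # width of the hidden layer

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(1)(x)

model = MLP(hidden=64)
# In Flax, parameters live outside the module as a pytree.
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))
y = model.apply(params, jnp.ones((4, 8)))
```

dm-haiku follows a similar functional pattern, using hk.transform to turn stateful-looking module code into pure init/apply functions.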
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
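For reference, prompt lookup decoding is exposed directly through generate() in recent transformers versions; a minimal sketch (the checkpoint choice is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # example checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The quick brown fox jumps over the lazy dog. The quick brown"
inputs = tok(text, return_tensors="pt")

# prompt_lookup_num_tokens enables prompt lookup decoding: candidate
# continuations are proposed from n-grams already present in the context,
# then verified by the model in parallel.
out = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=20)
print(tok.decode(out[0]))
```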
-
AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
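In the spirit of the course mentioned above, a minimal Trainer fine-tuning loop looks roughly like this; the dataset and checkpoint are illustrative stand-ins (a small classifier rather than an LLM, to keep the sketch short), not necessarily the tutorial's choices:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Example checkpoint and dataset; swap in whatever model/task you care about.
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

ds = load_dataset("imdb")

def tokenize(batch):
    return tok(batch["text"], truncation=True, padding="max_length")

ds = ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    # A small subset keeps the sketch quick to run.
    train_dataset=ds["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```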
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + the LR range finder + fast.ai's Ranger21 optimizer was the go-to combination for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 actually worked better. However, the one-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
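For context, the one-cycle schedule the comment refers to warms the learning rate up to a peak and then anneals it back down over training; a minimal PyTorch sketch (the tiny model and synthetic loss are stand-ins):

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for a real transformer
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One-cycle policy: ramp up to max_lr, then anneal back down.
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=1000)

for step in range(1000):
    opt.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy objective
    loss.backward()
    opt.step()
    sched.step()  # stepped once per optimizer step
```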
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral MoE
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
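To illustrate what tinkering with the architecture can look like, here is a sketch that instantiates a tiny, randomly initialized Mistral from the transformers implementation; the sizes are toy values for experimentation, not the released 7B configuration:

```python
from transformers import MistralConfig, MistralForCausalLM

# Toy hyperparameters; edit these to probe the architecture.
config = MistralConfig(
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=2,  # grouped-query attention
    sliding_window=128,     # Mistral's sliding-window attention
)
model = MistralForCausalLM(config)  # random weights, no download needed
print(sum(p.numel() for p in model.parameters()), "parameters")
```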
What are some alternatives?
nvdiffrast - Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
differentiable_volumetric_rendering - This repository contains the code for the CVPR 2020 paper "Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision"
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
llama - Inference code for Llama models
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
curated-list-of-awesome-3D-Morphable-Model-software-and-data - The idea of this list is to collect shared data and algorithms around 3D Morphable Models. You are invited to contribute to this list by adding a pull request. The original list arose from the Dagstuhl seminar on 3D Morphable Models (https://www.dagstuhl.de/19102) in March 2019.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
huggingface_hub - The official Python client for the Huggingface Hub.
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch