| | quarto-cli | pytorch-image-models |
|---|---|---|
| Mentions | 8 | 35 |
| Stars | 3,304 | 29,828 |
| Growth | 3.5% | 1.2% |
| Activity | 10.0 | 9.4 |
| Last commit | 6 days ago | 1 day ago |
| Language | JavaScript | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quarto-cli
- FLaNK AI Weekly 18 March 2024
- Quarto
Hello, I have a rather specific question.
I want to write a detailed tutorial (as HTML page) and a condensed version of it (as Reveal JS slides) from a single document.
I have found this suggestion[1] to specify a separate output file name for the slides in the header, so that `quarto render myfile.qmd` generates both.
Is there a way to include content (long form text, code, or images) that will only be exported in the HTML page but not in the slides (where space is more limited)?
[1] https://github.com/quarto-dev/quarto-cli/discussions/1751
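For the HTML-only content asked about here, Quarto's conditional content divs may do the job (a sketch; `when-format` requires a reasonably recent Quarto release):

```markdown
::: {.content-visible when-format="html"}
Long-form text, code, or images that should appear only in the HTML tutorial.
:::

::: {.content-hidden when-format="revealjs"}
Alternatively, hide a block from the Reveal JS slides specifically.
:::
```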
- Running Quarto Markdown in Docker
```
❯ docker build -t cavo789/quarto .
[+] Building 208.2s (13/13) FINISHED                                 docker:default
 => [internal] load .dockerignore                                              0.0s
 => => transferring context: 2B                                                0.0s
 => [internal] load build definition from Dockerfile                           0.0s
 => => transferring dockerfile: 2.08kB                                         0.0s
 => [internal] load metadata for docker.io/eddelbuettel/r2u:20.04              3.4s
 => CACHED [ 1/10] FROM docker.io/eddelbuettel/r2u:20.04@sha256:133b40653e0ad564d348f94ad72c753b97fb28941c072e69bb6e03c3b8d6c06e  0.0s
 => [ 2/10] RUN set -e -x && apt-get update && apt-get install -y --no-install-recommends pandoc pandoc-citeproc curl gdebi-core librsvg2-bin python3.8  47.6s
 => [ 3/10] RUN set -e -x && install.r shiny jsonlite ggplot2 htmltools remotes renv knitr rmarkdown quarto  27.2s
 => [ 4/10] RUN set -e -x && curl -o quarto-linux-amd64.deb -L https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.529/quarto-1.4.529-linux-amd64.deb && gdebi -  12.1s
 => [ 5/10] RUN set -e -x && groupadd -g 1000 -o "quarto" && useradd -m -u 1000 -g 1000 -o -s /bin/bash "quarto"  0.5s
 => [ 6/10] RUN set -e -x && quarto install tool tinytex --update-path  23.0s
 => [ 7/10] RUN set -e -x && printf "\e[0;105m%s\e[0;0m\n" "Run tlmgr update" && ~/.TinyTeX/bin/x86_64-linux/tlmgr update --self --all && ~/.TinyTeX/bin/x86_64-linux/fm  77.9s
 => [ 8/10] RUN set -e -x && printf "\e[0;105m%s\e[0;0m\n" "Run tlmgr install for a few tinyText packages (needed for PDF conversion)" && ~/.TinyTeX/bin/x86_64-linux/tlmgr  11.7s
 => [ 9/10] RUN set -e -x && mkdir -p /input  0.5s
 => exporting to image                                                         4.0s
 => => exporting layers                                                        4.0s
 => => writing image sha256:fe1d20bd71a66eb574ba1f5b35c988ace57c2c30f93159caa4d5de2f8c490eb0  0.0s
 => => naming to docker.io/cavo789/quarto                                      0.0s

What's Next?
  View summary of image vulnerabilities and recommendations → docker scout quickview
```
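The build log corresponds roughly to a Dockerfile like the following sketch. Two caveats: the `gdebi` flags are cut off in the log, so `-n` (non-interactive) is an assumption here, and the truncated tlmgr update/install steps are omitted entirely.

```dockerfile
FROM eddelbuettel/r2u:20.04

# system packages needed for rendering
RUN set -e -x && apt-get update && apt-get install -y --no-install-recommends \
    pandoc pandoc-citeproc curl gdebi-core librsvg2-bin python3.8

# R packages via r2u's install.r helper
RUN set -e -x && install.r shiny jsonlite ggplot2 htmltools remotes renv knitr rmarkdown quarto

# Quarto CLI from the release .deb; `-n` is an assumed flag (truncated in the log)
RUN set -e -x && curl -o quarto-linux-amd64.deb -L \
      https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.529/quarto-1.4.529-linux-amd64.deb \
    && gdebi -n quarto-linux-amd64.deb

# unprivileged user matching the host UID/GID
RUN set -e -x && groupadd -g 1000 -o "quarto" && useradd -m -u 1000 -g 1000 -o -s /bin/bash "quarto"

# TinyTeX for PDF output (the subsequent tlmgr steps from the log are omitted)
RUN set -e -x && quarto install tool tinytex --update-path

RUN set -e -x && mkdir -p /input
```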
- Quarto document rendered via quarto::quarto_render(): How to implement citations?
I had some trouble following this, but I think what you're saying is that the `[@Bernhofer2021.02.23.432527]` tag isn't getting converted to the actual bib reference, is that right? I just copied this into my system and could make that part work fine, using my own .bib file of course, and I used this csl, which I copied locally. The one change I made to the setup was to put both the .bib and the .csl file in my working directory where the .qmd file is. Also, as I commented on a different post of yours from the other day, I make sure there are no spaces in the path to my working directory (for either the folder names or the filenames), so for me everything is in C:\Users\xxxx\workingdir; this is due to a known RStudio issue with spaces. Who knows if that's what you're running into or not.
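A minimal layout matching that advice puts the .bib and .csl files next to the .qmd and references them from the header (filenames here are placeholders):

```markdown
---
title: "Example"
bibliography: references.bib
csl: my-style.csl
---

Text citing the entry in question [@Bernhofer2021.02.23.432527].
```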
- Quarto: Mermaid rendering in Word: code execution halts after the format is generated, waiting indefinitely for a Chrome process to close
You should ask in the Quarto discussion group on their GitHub. They are extremely responsive if you can give them a MWE.
- quarto-cli: Open-source scientific and technical publishing system built on Pandoc.
- The Jupyter+Git problem is now solved
pytorch-image-models
- FLaNK AI Weekly 18 March 2024
- [D] Hugging Face and Timm
I am a PyTorch user working in CV, and I usually use the stock PyTorch models. However, I see people use timm in research papers to train their models, and I don't understand what timm is. Is it a new framework like PyTorch? Further, when I click the https://pypi.org/project/timm/ homepage it takes me to the Hugging Face GitHub repo https://github.com/huggingface/pytorch-image-models. Is there any connection between timm and Hugging Face? Many of my friends use Hugging Face, but I don't know much about it either; I use plain PyTorch and torchvision.models.
- FLaNK Stack Weekly for 07 August 2023
https://github.com/huggingface/pytorch-image-models https://huggingface.co/docs/timm/index
- [R] Nvidia RTX 4090 ML benchmarks. Under QEMU/KVM. Image + Transformers. FP16/FP32.
pytorch-image-models
- Inference on ResNet, can't work out the problem?
Additionally, you might find the timm library handy for this sort of work.
- Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows
This is still being pursued. Ross Wightman's timm[0][1] package (now on Hugging Face) has done a lot of this. There's also a V2 of ConvNeXt[2]. Ross does write about this a lot on Twitter, fwiw. I should also mention that there are still many transformer-based networks that beat convs. So there probably won't be a resurgence in convs until someone can show that there's a really strong reason for them. They have some advantages, but they also might not be flexible enough for the long-range tasks in segmentation and detection. But maybe they are.
FAIR definitely did great work with ConvNeXt, and I do hope to see more. There always need to be people pushing unpopular paradigms.
[0] https://github.com/huggingface/pytorch-image-models
[1] https://arxiv.org/abs/2110.00476
[2] https://arxiv.org/abs/2301.00808
- Problems with Learning Rate Finder in PyTorch Lightning
I am doing binary classification with a pre-trained EfficientNet (tf_efficientnet_l2). I froze all weights during training and replaced the classifier with a custom trainable one that looks like:
- PyTorch at the Edge: Deploying Over 964 TIMM Models on Android with TorchScript and Flutter
In this post, I'm going to show you how you can pick from over 900 SOTA models on TIMM, train them using best practices with Fastai, and deploy them on Android using Flutter.
- ImageNet Advice
The other thing is: try to find tricks to speed up your experiments (if you haven't already). The most obvious are mixed-precision training, training your model on lower-resolution inputs first and increasing the resolution later in training, stochastic depth, and plenty more. Look for implementations in https://github.com/rwightman/pytorch-image-models .
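The low-resolution-first trick amounts to a schedule mapping epoch to input size. A minimal sketch (the start/end sizes and the helper name are placeholders, not from any particular codebase):

```python
def resolution_schedule(epoch, total_epochs, start=128, end=224, multiple=32):
    """Linearly ramp the training resolution from `start` to `end`,
    rounded to a multiple of `multiple` (many backbones prefer this)."""
    frac = epoch / max(total_epochs - 1, 1)
    size = start + frac * (end - start)
    return int(round(size / multiple) * multiple)

# early epochs train on small, cheap inputs; later epochs on full size
sizes = [resolution_schedule(e, 90) for e in (0, 30, 60, 89)]
print(sizes)  # [128, 160, 192, 224]
```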
- Doubt about transformers
What are some alternatives?
jupyter-book - Create beautiful, publication-quality books and documents from computational content.
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
ipyflow - A reactive Python kernel for Jupyter notebooks.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
Pluto.jl - 🎈 Simple reactive notebooks for Julia
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
jupyterlab-git - A Git extension for JupyterLab
mmcv - OpenMMLab Computer Vision Foundation
github-orgmode-tests - This is a test project where you can explore how GitHub interprets Org-mode files
segmentation_models.pytorch - Segmentation models with pretrained backbones. PyTorch.
jupyter - An interface to communicate with Jupyter kernels.
yolact - A simple, fully convolutional model for real-time instance segmentation.