| | xrays-and-gradcam | adaptnlp |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 47 | 414 |
| Growth | - | 0.0% |
| Activity | 0.0 | 0.0 |
| Last commit | about 3 years ago | over 2 years ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xrays-and-gradcam
- Diagnose COVID-19 from X-Rays with AI
- Classification and Gradient-based Localization of Chest Radiographs
  GitHub Repository: https://github.com/priyavrat-misra/xrays-and-gradcam
- [D] The future of open-source AI
  I did this project a while back; it follows an alternative, quicker approach to COVID-19 diagnosis than RT-PCR. Recently, DRDO's Centre for Artificial Intelligence and Robotics (CAIR) developed a tool following the same approach (source).
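The gradient-based localization the project refers to is Grad-CAM: weight the last convolutional feature maps by their pooled gradients with respect to a class score, then ReLU the weighted sum into a heatmap. A minimal PyTorch sketch is below; the tiny network and random input are illustrative stand-ins, not the classifiers the repository actually trains:

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyNet(nn.Module):
    """Toy CNN standing in for a chest-radiograph classifier."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        f = self.features(x)                      # last conv feature maps
        logits = self.fc(self.pool(f).flatten(1))
        return logits, f

def grad_cam(model, x, target_class):
    model.eval()
    logits, fmaps = model(x)
    fmaps.retain_grad()                           # keep grads on a non-leaf tensor
    logits[0, target_class].backward()
    weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)  # global-avg-pooled gradients
    cam = F.relu((weights * fmaps).sum(dim=1))           # weighted sum, then ReLU
    cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
    return cam.detach()

x = torch.randn(1, 1, 32, 32)   # fake single-channel "radiograph"
cam = grad_cam(TinyNet(), x, target_class=0)
print(cam.shape)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the radiograph to show which regions drove the prediction.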
adaptnlp
- Tools to use for a Semantic-Search Question Answering System
  Check out adaptnlp.
- Case Sensitivity using HuggingFace & Google's T5 model (base)
  Yes, there are capitals in the tokenizer vocabularies of t5-base and t5-small, so both support capitalization. A few days ago I was using t5-small through adaptnlp for extractive summarization, and capitalization worked fine (https://github.com/Novetta/adaptnlp). AdaptNLP is essentially a wrapper around transformers, so if you can't find a solution, you could dig into their source code.
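The case-sensitivity claim is easy to check directly, assuming the transformers library is installed and the t5-small checkpoint can be fetched from the Hugging Face Hub:

```python
from transformers import AutoTokenizer

# T5's SentencePiece vocabulary is cased, so upper- and lower-case
# spellings tokenize to different piece sequences.
tok = AutoTokenizer.from_pretrained("t5-small")
upper = tok.tokenize("COVID")
lower = tok.tokenize("covid")
print(upper)
print(lower)
```

If a tokenizer were uncased (like bert-base-uncased), the two sequences would collapse to the same lower-cased pieces.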
What are some alternatives?
- pytorch-GAT - My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
- Basic-UI-for-GPT-J-6B-with-low-vram - A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model loading takes 12 GB of free RAM.
- COVID-CT - COVID-CT-Dataset: A CT Scan Dataset about COVID-19
- keytotext - Keywords to Sentences
- gan-vae-pretrained-pytorch - Pretrained GANs + VAEs + classifiers for MNIST/CIFAR in pytorch.
- fastai - The fastai deep learning library
- BLOOM-fine-tuning - Finetune BLOOM
- gector - Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
- browser-ml-inference - Edge Inference in Browser with Transformer NLP model
- Transformers-Tutorials - This repository contains demos I made with the Transformers library by HuggingFace.
- ML-Workspace - 🛠 All-in-one web-based IDE specialized for machine learning and data science.
- Deep-Learning-Experiments - Videos, notes and experiments to understand deep learning