sample-spring-boot-on-kubernetes vs LAVIS

| | sample-spring-boot-on-kubernetes | LAVIS |
|---|---|---|
| Mentions | 1 | 18 |
| Stars | 76 | 8,838 |
| Growth | - | 3.5% |
| Activity | 8.1 | 6.3 |
| Last commit | 4 days ago | 28 days ago |
| Language | Java | Jupyter Notebook |
| License | - | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Posts with mentions or reviews of sample-spring-boot-on-kubernetes and LAVIS:
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- [D] Why is most Open Source AI happening outside the USA?
  For multimodal, there's China (many), then Salesforce.
- Need help for a colab notebook running Lavis blip2_instruct_vicuna13b?
  Been trying all day to get a working inference for this example: https://github.com/salesforce/LAVIS/tree/main/projects/instructblip
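  For reference, a minimal inference sketch along the lines of the LAVIS InstructBLIP example. The model name and model_type follow the LAVIS model zoo; the image URL and prompt are placeholders, and the Vicuna-13B checkpoint needs a GPU with a lot of memory (the 7B variant is an option if it does not fit).

```python
import torch
import requests
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load InstructBLIP (Vicuna-13B variant) and its image preprocessor.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct",
    model_type="vicuna13b",
    is_eval=True,
    device=device,
)

# Any RGB image works here; this URL is just a placeholder.
url = "https://example.com/some_image.jpg"
raw_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# Ask a free-form question about the image.
answer = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(answer)
```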
- most sane web3 job listing
  There have also been big breakthroughs in computer vision. Not that long ago it was hard to recognize whether a photo contained a bird; that is now solved by models like CLIP, YOLO, or Segment Anything. Research has since moved on to generating 3D scenes from images or interactively answering questions about images.
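  As an illustration of the zero-shot recognition described above, here is a small sketch using the Hugging Face transformers wrapper around CLIP; the checkpoint name is a common public one and the image path is an assumption.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Zero-shot "does this photo contain a bird?" check with CLIP.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image (hypothetical path)
labels = ["a photo of a bird", "a photo with no bird in it"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2%}")
```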
- I work at a non-tech company and have been asked to make software that is impossible. How do I explain it to my boss?
  The new hotness is multimodal vision-language models like InstructBLIP that can interactively answer questions about images. Check out the examples in the GitHub repo; I would not have thought this was possible a few years ago.
- Two-minute Daily AI Update (Date: 5/15/2023)
  Salesforce's BLIP family has a new member: InstructBLIP, a vision-language instruction-tuning framework built on BLIP-2 models. It achieves state-of-the-art zero-shot generalization on a wide range of vision-language tasks, substantially outperforming BLIP-2 and Flamingo. (Source)
- InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
  GitHub
- Can I use my own art as a training set?
  Most of my workflows are self-made. For captioning I used BLIP-2 in a custom script that automates the process: it walks directories and their sub-directories and creates a .txt file beside each image. This way I can keep my images organized in their proper directories without having to dump them all in a single place.
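  A rough sketch of that kind of captioning script, using the BLIP-2 captioning checkpoint from the LAVIS model zoo. The root directory, file extensions, and the choice to overwrite any existing .txt files are assumptions, not the poster's actual script.

```python
import os
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# BLIP-2 (OPT-2.7B) captioning model from LAVIS; other model_types also work.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt", model_type="caption_coco_opt2.7b", is_eval=True, device=device
)

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def caption_tree(root: str) -> None:
    """Walk root and its sub-directories, writing a .txt caption beside each image."""
    for dirpath, _, filenames in os.walk(root):
        for fname in filenames:
            if os.path.splitext(fname)[1].lower() not in IMAGE_EXTS:
                continue
            img_path = os.path.join(dirpath, fname)
            txt_path = os.path.splitext(img_path)[0] + ".txt"

            raw = Image.open(img_path).convert("RGB")
            image = vis_processors["eval"](raw).unsqueeze(0).to(device)
            caption = model.generate({"image": image})[0]

            with open(txt_path, "w", encoding="utf-8") as f:
                f.write(caption)

caption_tree("my_training_images")  # hypothetical root directory
```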
- FLiP Stack Weekly for 13-Feb-2023
What are some alternatives?
causalai - Salesforce CausalAI Library: A Fast and Scalable framework for Causal Analysis of Time Series and Tabular Data
pytorch-widedeep - A flexible package for multimodal deep learning that combines tabular data with text and images using Wide and Deep models in PyTorch
naturalspeech - A fully working PyTorch implementation of NaturalSpeech (Tan et al., 2022)
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
java-spring-security-example - Spring Security Sample Project
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
linkis - Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.
robo-vln - PyTorch code for the ICRA '21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
shfmt - Dockerized shfmt for formatting shell scripts.
DeepViewAgg - [CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"
Flow-SGP30-MLX90640 - Pimoroni Breakout Garden - SGP30 - MLX90640 - RP400 - NiFi - Kafka - Pulsar - Flink - Spark - Iceberg