replicate-javascript vs MiDaS

| | replicate-javascript | MiDaS |
|---|---|---|
| Mentions | 8 | 27 |
| Stars | 426 | 4,193 |
| Growth | 4.7% | 2.5% |
| Activity | 8.9 | 2.4 |
| Latest commit | 3 days ago | 4 months ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
replicate-javascript
-
Ask HN: Who is hiring? (June 2024)
Replicate (YC W20) | San Francisco, CA + Remote | https://replicate.com/
Replicate makes it easy to run AI in the cloud. You can run a big library of open source models with a few lines of code, or deploy your own models at scale.
We're an experienced team from Spotify, Docker, GitHub, Heroku, Apple, and various other places. We're backed by a16z, Sequoia, Andrej Karpathy, Dylan Field, Guillermo Rauch.
We're hiring:
- An infrastructure engineer
- An expert at deploying and optimizing language models
- An engineer who is good at humans to look after our customers
... and more: https://replicate.com/about#join-us
Email us: [email protected]
-
Building a Retrieval-Augmented Generation Chatbot with SvelteKit and Xata Vector Search
```javascript
import { experimental_buildLlama2Prompt } from 'ai/prompts';

// Now use Replicate Llama 2 70B streaming to perform the autocompletion with context
const response = await replicate.predictions.create({
  // You must enable streaming.
  stream: true,
  // The model must support streaming. See https://replicate.com/docs/streaming
  model: 'meta/llama-2-70b-chat',
  // Format the message list into the format expected by Llama 2
  // @see https://github.com/vercel/ai/blob/99cf16edf0a09405d15d3867f997c96a8da869c6/packages/core/prompts/huggingface.ts#L53C1-L78C2
  input: {
    prompt: experimental_buildLlama2Prompt([
      {
        // create a system content message to be added, as the Llama 2 prompt
        // generator will supply it as the context with the API
        role: 'system',
        content: systemContext
      },
      {
        // create a system instruction; make sure to wrap code blocks with
        // triple backticks so that the Svelte markdown picks them up correctly
        role: 'assistant',
        content: `When creating responses, make sure to wrap any code blocks that you output as code blocks and not text so that they can be rendered beautifully.`
      },
      // also, pass the whole conversation!
      ...messages
    ])
  }
});
```
-
Wasp x Supabase: Smokin’ Hot Full-Stack Combo 🌶️ 🔥
We used Replicate to run the models and the cost so far is 26 cents for 90 cards — which means it’s less than a third of a cent per card!
-
Tap into 17 LLMs with a Single API – Free with Unlimited Tokens
Basically https://replicate.com/
Because it happens when running your own models on localhost too. I have Ollama and all the models it supports, but there are some on Hugging Face that I run through llama.cpp inside apps where I won't have Ollama installed. Replicate also has Stable Diffusion models, not just chat ones, and OpenAI is its own thing. So it could potentially all be unified under a provider like that.
Haven't actually tried Replicate because I'm just running locally for free, but probably would try to find a single cloud provider for all deployments, like a Heroku of LLMs.
-
SB-1047 will stifle open-source AI and decrease safety
It's very easy to get started, right in your Terminal, no fees! No credit card at all.
And there are cloud providers like https://replicate.com/ and https://lightning.ai/ that will let you use your LLM via an API key just like you did with OpenAI if you need that.
You don't need OpenAI - nobody does.
-
How to Estimate Depth from a Single Image
In this section, we’ll show you how to generate MDE depth map predictions with both DPT and Marigold. In both cases, you can either run the model locally with the respective Hugging Face library or run it remotely with Replicate.
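For the remote path, a minimal sketch with the replicate npm package; the model slug and input fields here are placeholders, not the article's actual checkpoint:

```javascript
import Replicate from "replicate";

// Assumes REPLICATE_API_TOKEN is set in the environment.
const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// Hypothetical depth-estimation model; substitute the real owner/model:version slug.
const output = await replicate.run("owner/depth-model:version-hash", {
  input: { image: "https://example.com/photo.jpg" },
});
console.log(output); // typically a URL pointing at the predicted depth map
```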
-
Building a self-creating website with Supabase and AI
Built with Supabase, Astro, Unreal Speech, Stable Diffusion, Replicate, Metropolitan Museum of Art
-
From Chaos to Clarity with AI-driven Categorization
Now that we understand the process, let’s take a look at the actual code. The first step is simply importing our dependencies. Note that we will be using the replicate npm package, which you can install with npm i replicate.
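As a sketch of that setup (the token handling and the model choice here are illustrative assumptions, not the post's exact code):

```javascript
// npm i replicate
import Replicate from "replicate";

// Assumes your API token lives in the REPLICATE_API_TOKEN env var.
const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// Illustrative categorization call, reusing a model that appears earlier on this page.
const output = await replicate.run("meta/llama-2-70b-chat", {
  input: {
    prompt:
      "Categorize this item into one of: electronics, clothing, groceries.\nItem: wireless headphones",
  },
});

// Language models on Replicate typically return an array of string chunks.
console.log(Array.isArray(output) ? output.join("") : output);
```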
MiDaS
-
How to Estimate Depth from a Single Image
The checkpoint below uses MiDaS, which returns the inverse depth map, so we have to invert it back to get a comparable depth map.
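The inversion itself is just a pointwise reciprocal; a minimal sketch (the epsilon guard is an assumption to avoid division by zero, and since MiDaS output is relative, the result is relative depth, not metric):

```javascript
// Convert a MiDaS-style inverse-depth map back to relative depth.
function inverseToDepth(inverseDepth, eps = 1e-6) {
  return Float32Array.from(inverseDepth, (v) => 1 / Math.max(v, eps));
}
```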
-
Distance estimation from monocular vision using deep learning
Hi, I have made use of the KITTI dataset for this, and yes, it depends on objects of known sizes. Here I have defined the following classes: Car, Van, Truck, Pedestrian, Person_sitting, Cyclist, Tram, Misc, or DontCare, and the predictions are pretty accurate for those classes. Even if it's not the same class, it still recognizes the object, since I have made use of the COCO names dataset here, and that is used along with YOLO for object detection. There are several already-implemented projects that make use of deep learning models trained on 2D datasets to predict 3D distance. This was one of my inspirations for this project: https://blogs.nvidia.com/blog/2019/06/19/drive-labs-distance-to-object-detection/ Furthermore, there are well-documented and researched papers like DistYOLO or MiDaS that make use of deep learning for depth estimation.
-
OMPR V0.6.10 update
- Added AI image depth generator: create your own depth map image at the click of a button, using the awesome MiDaS 3.1 (https://github.com/isl-org/MiDaS) as the backend and the "dpt_beit_large_512" model for the highest-quality depth maps. Video and GIF depth map generators are coming next, together with the Depth movie player feature.
-
AI that converts a regular 2d image to stereoscopic
It uses MiDaS. That extension may be the most accessible way to use it at home. IDK.
-
Idea: training on magiceye images
Here's the project homepage https://github.com/isl-org/MiDaS
-
MiDaS v3_1 and DiscoDiffusion
The problem came up after MiDaS updated to version v3_1 on Dec 24th. Although the fix works fine, the new version introduces many changes, which for me produce slightly different results. I would like to be able to produce results like before. I still clone the MiDaS repo, but then set it back to the last commit before the December changes, which is 66882994a432727317267145dc3c2e47ec78c38a.
-
File not found error
```python
import os
import shutil
import sys

# gitclone() and wget() are helper functions defined elsewhere in the notebook.
try:
    from midas.dpt_depth import DPTDepthModel
except ImportError:
    # Clone MiDaS (and Next-ViT, which it depends on) if they aren't present yet.
    if not os.path.exists('MiDaS'):
        gitclone("https://github.com/isl-org/MiDaS.git")
        gitclone("https://github.com/bytedance/Next-ViT.git", f'{PROJECT_DIR}/externals/Next_ViT')
    # Rename MiDaS/utils.py so it doesn't shadow other modules named utils.
    if not os.path.exists('MiDaS/midas_utils.py'):
        shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
    # Download the DPT-Large MiDaS checkpoint if it's missing.
    if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
        wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
    sys.path.append(f'{PROJECT_DIR}/MiDaS')
```
-
A quick demo to show how structurally coherent depth2img is compared to img2img using Automatic1111.
Cool. The repo for MiDaS is here: https://github.com/isl-org/MiDaS You can see that they partially trained the model on 3D movies. Here's a list of the movies that were used to train it. I wonder if they'll be training a MiDaS v4.0, as things have moved on quite a bit since it was released in Apr 2021?
-
Boosting Monocular Depth repo
We present a stand-alone implementation of our Merging Operator. This new repo allows using any pair of monocular depth estimations in our double estimation. This includes using separate networks for base and high-res estimations, using networks not supported by this repo (such as Midas-v3), or using manually edited depth maps for artistic use. This will also be useful for scientists developing CNN-based MDE as a way to quickly apply double estimation to their own network. For more details please take a look here.
-
DepthViewer is now live on Steam :)
I'll add the feature to export only the depth map .png file. If you need the depth map .png right now, you can use the MiDaS Python script.
What are some alternatives?
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
DenseDepth - High Quality Monocular Depth Estimation via Transfer Learning
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
deeplearning4j-examples - Deeplearning4j Examples (DL4J, DL4J Spark, DataVec) [Moved to: https://github.com/deeplearning4j/deeplearning4j-examples]
DiverseDepth - The code and data of DiverseDepth
U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."
Insta-DM - Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)
consistent_depth - We estimate dense, flicker-free, geometrically consistent depth from monocular video, for example hand-held cell phone video.
Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for keras, tensorflow, and onnx/pytorch, a modular and tiny c++ library for running math code and a java based math library on top of the core c++ library. Also includes samediff: a pytorch/tensorflow like library for running deep learning using automatic differentiation.
GotoBrowser - Android Browser for KanColle 2nd Phase (HTML5)
multi-subject-render - Generate multiple complex subjects all at once!