meetups vs lang-segment-anything

| | meetups | lang-segment-anything |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 3 | 1,233 |
| Growth | - | - |
| Activity | 7.5 | 6.5 |
| Latest commit | about 2 months ago | 7 days ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
meetups
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- FLaNK Stack Weekly for 30 Oct 2023
- meetups/15December2022.md at main · tspannhw/meetups
- meetups/14December2022.md at main · tspannhw/meetups
- FLiP Stack Weekly 19-dec-2022
  For Pulsar Client C++ release details and downloads, visit: https://archive.apache.org/dist/pulsar/pulsar-client-cpp-3.1.0/
- tspannhw/meetups
lang-segment-anything
- Show HN: OK-Robot: open, modular home robot framework for pick-and-drop anywhere
User fishbotics already answers a lot of these questions downstream, but just confirming it here as an author of the project/paper:
> - How does it know what objects are? Does it use some sort of realtime object classifier neural net? What limitations are there here?
We use Lang-SAM (https://github.com/luca-medeiros/lang-segment-anything) to do most of this, with CLIP embeddings (https://openai.com/research/clip) doing most of the heavy lifting of connecting image and text. One of the nice properties of CLIP-like models is that you don't have to specify the classes you may want to query in advance; you can come up with them at runtime.
> - Does the robot know when it can't perform a request? I.e. if you ask it to move a large box or very heavy kettlebell?
Nope! As it is right now, the models are very simple and they don't try to do anything fancy. That's also why we opened up our code: so the community can build smarter robots on top of this project that use even more visual cues about the environment.
> - How well does it do if the object is hidden or obscured? Does it go looking for it? What if it must move another object to get access to the requested one?
It fails when the object is hidden or obscured in the initial scan, but once again we think it could be a great starting point for further research :) One nice thing, however, is that we take full 3D information into consideration, so even if an object is visible from only some angles, the robot has a chance to find it.
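The open-vocabulary behavior the author describes, where query classes are not fixed in advance, comes down to embedding image regions and free-form text into a shared space and ranking by similarity. A minimal sketch of that retrieval idea, using made-up vectors in place of real CLIP encoders (the region names and embeddings here are purely illustrative, not from the project):

```python
# Toy sketch of CLIP-style open-vocabulary querying.
# Real systems embed image crops with CLIP's image encoder and the free-form
# text query with its text encoder; the vectors below are invented for illustration.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend image embeddings for three detected regions.
regions = {
    "mug":    np.array([0.9, 0.1, 0.0]),
    "kettle": np.array([0.1, 0.9, 0.2]),
    "plant":  np.array([0.0, 0.2, 0.9]),
}

def query(text_embedding):
    # Rank regions by similarity to the text embedding. No fixed class list
    # is needed, so a new query phrase can be introduced at runtime.
    return max(regions, key=lambda name: cosine(regions[name], text_embedding))

# Pretend the text encoder mapped "coffee cup" near the mug's image embedding.
print(query(np.array([0.85, 0.15, 0.05])))  # -> mug
```

The design point is that adding a new queryable concept costs nothing at detection time: only the text side changes, while the precomputed region embeddings are reused.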
- FLaNK Stack Weekly for 30 Oct 2023
- Language Segment Anything
- Is the Segment Anything Model (SAM) useful for actual object detection?
What are some alternatives?
daath-ai-parser - Daath AI Parser is an open-source application that uses OpenAI to parse visible text of HTML elements.
fastsdcpu - Fast stable diffusion on CPU
jupyter-scheduler - Run Jupyter notebooks as jobs
fact-checker - Fact-checking LLM outputs with self-ask
tiktoken - tiktoken is a fast BPE tokeniser for use with OpenAI's models.
orbital - Orbital automates integration between data sources (APIs, databases, queues and functions): BFFs, API composition and ETL pipelines that adapt as your specs change.
glow - Render markdown on the CLI, with pizzazz! 💅🏻
fury-benchmarks - Serialization Benchmarks for fury with other libraries
FLiP-Pi-Iceberg-Thermal - Apache Iceberg + Apache Pulsar + Thermal Sensor Data from a Raspberry Pi
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
reflex - 🕸️ Web apps in pure Python 🐍
FLaNK-Halifax - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data