| | lang-segment-anything | fury-benchmarks |
|---|---|---|
| Mentions | 4 | 4 |
| Stars | 1,212 | 2 |
| Growth | - | - |
| Activity | 6.5 | 5.9 |
| Latest commit | about 1 month ago | 13 days ago |
| Language | Jupyter Notebook | Java |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lang-segment-anything
Show HN: OK-Robot: open, modular home robot framework for pick-and-drop anywhere
User fishbotics already answers a lot of these questions downstream, but just confirming it here as an author of the project/paper:
> - How does it know what objects are? Does it use some sort of realtime object classifier neural net? What limitations are there here?
We use Lang-SAM (https://github.com/luca-medeiros/lang-segment-anything) to do most of this, with CLIP embeddings (https://openai.com/research/clip) doing most of the heavy lifting of connecting images and text. One of the nice properties of CLIP-like models is that you don't have to specify the classes you may want to query in advance; you can come up with them at runtime.
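The open-vocabulary property described above can be sketched with a toy CLIP-style similarity check: image and text are mapped into a shared embedding space, and the best-matching prompt is picked at query time by cosine similarity. The embeddings and labels below are made up for illustration; in a real pipeline they would come from CLIP's image and text encoders.

```python
import numpy as np

def cosine_similarity(vec, matrix):
    # Cosine similarity between one vector and each row of a matrix.
    vec = vec / np.linalg.norm(vec)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ vec

# Toy stand-ins for CLIP embeddings (real CLIP vectors live in a
# shared ~512-dimensional space produced by the two encoders).
image_embedding = np.array([0.9, 0.1, 0.2])
text_embeddings = np.array([
    [0.8, 0.2, 0.1],   # "a red mug"
    [0.1, 0.9, 0.3],   # "a kettlebell"
    [0.2, 0.1, 0.9],   # "a cardboard box"
])
labels = ["a red mug", "a kettlebell", "a cardboard box"]

# No fixed class list is baked in: any new text prompt can be
# embedded and compared against the image at runtime.
scores = cosine_similarity(image_embedding, text_embeddings)
best = labels[int(np.argmax(scores))]
print(best)  # -> "a red mug"
```

This is the mechanism that lets a query like "pick up the green cup" work even if "green cup" was never part of a training class list.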
> - Does the robot know when it can't perform a request? I.e. if you ask it to move a large box or very heavy kettlebell?
Nope! As it stands, the models are very simple and don't try to do anything fancy. However, that's why we open-sourced our code: so the community can build smarter robots on top of this project that use even more visual cues about the environment.
> - How well does it do if the object is hidden or obscured? Does it go looking for it? What if it must move another object to get access to the requested one?
It fails when the object is hidden or obscured in the initial scan, but once again we think it could be a great starting point for further research :) One nice thing, however, is that we take full 3D information into consideration, so even if an object is visible from only some angles, the robot has a chance to find it.
- FLaNK Stack Weekly for 30 Oct 2023
- Language Segment Anything
- Is the Segment Anything Model (SAM) useful for actual object detection?
fury-benchmarks
- FLaNK Stack Weekly for 20 Nov 2023
- FLaNK Stack Weekly for 30 Oct 2023
Fury: 170x faster than JDK, fast serialization powered by JIT and Zero-copy
1) Fury is 41.6x faster than Jackson for Struct serialization
2) Fury is 65.6x faster than Jackson for Struct deserialization
3) Fury is 9.4x faster than Jackson for MediaContent serialization
4) Fury is 9.6x faster than Jackson for MediaContent deserialization
See https://github.com/chaokunyang/fury-benchmarks for the detailed benchmark code.
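For intuition on where "Nx faster" figures like the ones above come from, here is a minimal, hypothetical micro-benchmark sketch. It is not the Fury benchmark itself (that is Java code in the linked repo); it uses Python's stdlib `pickle` and `json` as stand-ins to show the serialize/deserialize timing split and how a speedup ratio is computed. The `record` payload is invented for illustration.

```python
import json
import pickle
import timeit

# Hypothetical stand-in for the "Struct" payload in the real benchmarks.
record = {"id": 123, "name": "struct", "values": list(range(100))}

N = 10_000  # number of timed iterations per operation

def bench(label, ser, de):
    """Time N serialize calls and N deserialize calls separately."""
    blob = ser(record)
    t_ser = timeit.timeit(lambda: ser(record), number=N)
    t_de = timeit.timeit(lambda: de(blob), number=N)
    print(f"{label}: serialize {t_ser:.3f}s, deserialize {t_de:.3f}s for {N} ops")
    return t_ser, t_de

json_times = bench("json", lambda o: json.dumps(o).encode(), json.loads)
pickle_times = bench("pickle", pickle.dumps, pickle.loads)

# A speedup claim is just the ratio of two timings for the same operation.
speedup = json_times[0] / pickle_times[0]
print(f"pickle vs json serialization ratio on this toy payload: {speedup:.1f}x")
```

Real serialization benchmarks (including the linked Fury ones) typically add warm-up rounds and use a harness such as JMH so that JIT compilation does not skew the numbers.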
What are some alternatives?
meetups - Meetup Materials
jvm-serializers - Benchmark comparing serialization libraries on the JVM
fastsdcpu - Fast stable diffusion on CPU
MemoryPack - Zero encoding extreme performance binary serializer for C# and Unity.
fact-checker - Fact-checking LLM outputs with self-ask
MessagePack for C# (.NET, .NET Core, Unity, Xamarin) - Extremely Fast MessagePack Serializer for C#(.NET, .NET Core, Unity, Xamarin). / msgpack.org[C#]
orbital - Orbital automates integration between data sources (APIs, databases, queues and functions). BFFs, API composition and ETL pipelines that adapt as your specs change.
grpc-dotnet - gRPC for .NET
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
incubator-fury - A blazingly fast multi-language serialization framework powered by JIT and zero-copy.
FLaNK-Halifax - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data