nobuco VS exporters

Compare nobuco vs exporters and see how they differ.

|  | nobuco | exporters |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 192 | 537 |
| Growth | - | 2.4% |
| Activity | 8.7 | 7.1 |
| Latest commit | 16 days ago | 6 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

nobuco

Posts with mentions or reviews of nobuco. We have used some of these posts to build our list of alternatives and similar projects.
  • Introducing Nobuco: PyTorch to Tensorflow converter. Intuitive, flexible, efficient.
    1 project | /r/pytorch | 2 Jul 2023
    Hence, [Nobuco](https://github.com/AlexanderLutsenko/nobuco). It's designed with simplicity and hackability in mind, and it automatically resolves mismatched channel orders while offering near-optimal performance. Try it, and spread the word if you like it. If not, feel free to open an issue or feature request. Any form of contribution is highly welcome!
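The channel-order mismatch the post refers to is the NCHW (batch, channels, height, width) layout PyTorch uses versus the NHWC layout TensorFlow defaults to; a converter has to transpose tensors or weights at the framework boundary. A minimal NumPy sketch of the idea, not nobuco's actual implementation:

```python
import numpy as np

# PyTorch conv layers expect NCHW (batch, channels, height, width);
# TensorFlow/Keras defaults to NHWC (batch, height, width, channels).
nchw = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)  # batch=2, C=3, H=4, W=5

nhwc = np.transpose(nchw, (0, 2, 3, 1))  # NCHW -> NHWC
back = np.transpose(nhwc, (0, 3, 1, 2))  # NHWC -> NCHW

assert nhwc.shape == (2, 4, 5, 3)
assert np.array_equal(back, nchw)  # the round trip is lossless
```

A converter that inserts such transposes naively at every layer boundary pays for them at runtime, which is why doing this near-optimally (eliding redundant transposes) matters for performance.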

exporters

Posts with mentions or reviews of exporters. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-07.
  • I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
    12 projects | news.ycombinator.com | 7 Jan 2024
    Conceptually, to the best of my understanding, nothing too serious; perhaps the inefficiency of processing a larger input than necessary?

    Practically, a few things:

    If you want to have your cake & eat it too, they recommend Enumerated Shapes[1] in their coremltools docs, where CoreML precompiles up to 128 (!) variants of input shapes, but again this is fairly limiting (1-token, 2-token, 3-token... up to 128-token prompts... maybe you enforce a minimum, say 80 tokens to account for a system prompt, so up to 200 tokens, but... still pretty short). But this is only compatible with CPU inference, so that reduces its appeal.
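The mechanics of enumerated shapes can be sketched in plain Python: at inference time you pad the prompt up to the nearest precompiled length. The helper names and the 80-token minimum below are illustrative, taken from the scenario in the comment, not from any real API:

```python
# Hypothetical helper mirroring CoreML's EnumeratedShapes idea: the model
# is precompiled for a finite set of input lengths (at most 128 of them),
# and each prompt is padded up to the smallest length that fits.
ENUMERATED_LENGTHS = list(range(80, 208))  # 128 shapes: 80..207 tokens

def pick_shape(prompt_len: int) -> int:
    """Return the smallest precompiled length >= prompt_len."""
    for length in ENUMERATED_LENGTHS:
        if length >= prompt_len:
            return length
    raise ValueError(
        f"prompt of {prompt_len} tokens exceeds the largest "
        f"precompiled shape ({ENUMERATED_LENGTHS[-1]})"
    )

def pad_to_shape(tokens: list, pad_id: int = 0) -> list:
    """Pad the token list up to its selected enumerated shape."""
    target = pick_shape(len(tokens))
    return tokens + [pad_id] * (target - len(tokens))
```

The limitation discussed above falls straight out of this sketch: any prompt longer than the largest enumerated shape simply cannot be run.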

    It seems like its current state was designed for text embedding models, where you normalize input length by chunking (often 128 or 256 tokens) and operate on the chunks — and indeed, that’s the only text-based CoreML model that Apple ships today, a Bert embedding model tuned for Q&A[2], not an LLM.
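The chunking normalization described above, common for embedding models, can be sketched as follows; the 128-token chunk size is one of the typical values the comment mentions, not a fixed requirement:

```python
def chunk_tokens(tokens, chunk_size=128, pad_id=0):
    """Split a token sequence into fixed-size chunks, right-padding the last.

    Embedding pipelines often normalize variable-length input this way so
    every forward pass sees the same input shape, which suits a model
    compiled for a single static shape.
    """
    chunks = []
    for start in range(0, max(len(tokens), 1), chunk_size):
        chunk = tokens[start:start + chunk_size]
        chunk += [pad_id] * (chunk_size - len(chunk))  # right-pad to size
        chunks.append(chunk)
    return chunks
```

This works for embeddings because each chunk is pooled independently; it does not carry over to autoregressive LLM decoding, where the sequence grows one token at a time.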

    You could use a fixed input length that’s fairly large; I stopped experimenting with it once I grasped the memory requirements, but from what I gather from HuggingFace’s announcement blog post[3], that seems to be what they do with swift-transformers & their CoreML conversions, handling the details for you[4][5]. I haven’t carefully investigated the implementation, but I’m curious to learn more!
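The fixed-input-length approach can be sketched as: pad every prompt to one hardcoded length (128 below, echoing the default quoted in [4]) and hand the model an attention mask so padding is ignored. The names here are illustrative, not swift-transformers' actual API:

```python
MAX_SEQUENCE_LENGTH = 128  # illustrative; [4] mentions 128 as a default

def to_fixed_shape(tokens, pad_id=0):
    """Pad (or reject) a prompt so the model always sees one input shape.

    Returns (input_ids, attention_mask); the mask is 1 for real tokens and
    0 for padding, letting the model ignore the padded positions.
    """
    if len(tokens) > MAX_SEQUENCE_LENGTH:
        raise ValueError(f"prompt longer than {MAX_SEQUENCE_LENGTH} tokens")
    pad = MAX_SEQUENCE_LENGTH - len(tokens)
    input_ids = tokens + [pad_id] * pad
    attention_mask = [1] * len(tokens) + [0] * pad
    return input_ids, attention_mask
```

The memory cost the comment alludes to follows from this: every forward pass is priced for the full fixed length, however short the actual prompt is.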

    You can be sure that no one is more aware of all this than Apple — they published "Deploying Transformers on the Apple Neural Engine" in June 2022[6]. I look forward to seeing what they cook up for developers at WWDC this year!

    ---

    [1] "Use `EnumeratedShapes` for best performance. During compilation the model can be optimized on the device for the finite set of input shapes. You can provide up to 128 different shapes." https://apple.github.io/coremltools/docs-guides/source/flexi...

    [2] BertSQUAD.mlmodel (fp16) https://developer.apple.com/machine-learning/models/#text

    [3] https://huggingface.co/blog/swift-coreml-llm#optimization

    [4] `use_fixed_shapes` "Retrieve the max sequence length from the model configuration, or use a hardcoded value (currently 128). This can be subclassed to support custom lengths." https://github.com/huggingface/exporters/pull/37/files#diff-...

    [5] `use_flexible_shapes` "When True, inputs are allowed to use sequence lengths of `1` up to `maxSequenceLength`. Unfortunately, this currently prevents the model from running on GPU or the Neural Engine. We default to `False`, but this can be overridden in custom configurations." https://github.com/huggingface/exporters/pull/37/files#diff-...

    [6] https://machinelearning.apple.com/research/neural-engine-tra...

  • [P] Deploying Transformers with Apple's Core ML
    1 project | /r/MachineLearning | 1 Sep 2022
    Give it a try and leave a ⭐️ if you find it useful 👉: https://github.com/huggingface/exporters

What are some alternatives?

When comparing nobuco and exporters you can also consider the following projects:

PINTO_model_zoo - A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.

yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite

efficientnet-lite-keras - Keras reimplementation of EfficientNet Lite.

mlx - MLX: An array framework for Apple silicon

d2l-en - Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.

Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.

ftc_ladel - A TFLite+YoloV7 enabled labeling and training pipeline

best-of-ml-python - 🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.

coremltools - Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.

nsfw-classification-tensorflow - NSFW classify model implemented with tensorflow.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.