model-optimization VS Pretrained-Language-Model

Compare model-optimization vs Pretrained-Language-Model and see how they differ.

                    model-optimization      Pretrained-Language-Model
Mentions            1                       1
Stars               1,470                   2,960
Growth              0.8%                    0.5%
Activity            6.8                     6.1
Latest commit       11 days ago             4 months ago
Language            Python                  Python
License             Apache License 2.0      -
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

model-optimization

Posts with mentions or reviews of model-optimization. We have used some of these posts to build our list of alternatives and similar projects.
  • Need Help With Pruning Model Weights in Tensorflow 2
    1 project | /r/tensorflow | 7 Jun 2021
    I have been following the example shown here, and so far I've had mixed results and wanted to ask for some help because the resources I've found online have not been able to answer some of my questions (perhaps because some of these are obvious and I am just being dumb).
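    For context, a minimal sketch of the magnitude-pruning workflow that the tensorflow/model-optimization example walks through, assuming the tf.keras API that tfmot targets; the model architecture, sparsity targets, and step counts below are illustrative placeholders, not values taken from the post.

    ```python
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # Placeholder model standing in for whatever network the post is pruning.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

    # Ramp sparsity from 50% to 80% over the first 1,000 training steps
    # (illustrative numbers; tune to your own dataset and schedule).
    pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.50,
        final_sparsity=0.80,
        begin_step=0,
        end_step=1000,
    )

    # Wrap the model so low-magnitude weights are zeroed out during training.
    pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
        model, pruning_schedule=pruning_schedule
    )

    pruned_model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # UpdatePruningStep must be passed as a callback, otherwise training errors out.
    callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
    # pruned_model.fit(x_train, y_train, epochs=2, callbacks=callbacks)  # x_train/y_train: your data

    # Strip the pruning wrappers before export so the saved model is the plain pruned network.
    final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
    ```

    Note that pruning only zeroes weights; the smaller artifact appears after strip_pruning plus standard compression (e.g. gzip) or conversion to TFLite, which is the step that most often accounts for mixed results like those described above.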

Pretrained-Language-Model

Posts with mentions or reviews of Pretrained-Language-Model. We have used some of these posts to build our list of alternatives and similar projects.

What are some alternatives?

When comparing model-optimization and Pretrained-Language-Model you can also consider the following projects:

deepsparse - Sparsity-aware deep learning inference runtime for CPUs

SqueezeLLM - [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization

qkeras - QKeras: a quantization deep learning library for Tensorflow Keras

DWPose - "Effective Whole-body Pose Estimation with Two-stages Distillation" (ICCV 2023, CV4Metaverse Workshop)

sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

Torch-Pruning - [CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs

3d-model-convert-to-gltf - Convert 3D models (STL/IGES/STEP/OBJ/FBX) to glTF with compression

Efficient-AI-Backbones - Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

aimet - AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

PaddleClas - A treasure chest for visual classification and recognition powered by PaddlePaddle

larq - An Open-Source Library for Training Binarized Neural Networks

Lion - Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"