bert VS jax

Compare bert vs jax and see how they differ.

jax

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more (by google)
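A minimal sketch of what those composable transformations look like in practice (the toy loss function and data below are made up for illustration):

    import jax
    import jax.numpy as jnp

    def loss(w, x):
        return jnp.sum((w * x - 1.0) ** 2)

    # grad differentiates with respect to the first argument by default.
    dloss_dw = jax.grad(loss)

    # vmap vectorizes over a batch of inputs; jit compiles the result with XLA.
    batched = jax.vmap(dloss_dw, in_axes=(None, 0))
    fast_batched = jax.jit(batched)

    w = jnp.array([1.0, 2.0])
    xs = jnp.ones((8, 2))
    print(fast_batched(w, xs).shape)  # (8, 2): one gradient per batch element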
              bert                  jax
Mentions      49                    82
Stars         37,036                28,004
Growth        0.6%                  1.5%
Activity      0.0                   10.0
Last commit   24 days ago           1 day ago
Language      Python                Python
License       Apache License 2.0    Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

bert

Posts with mentions or reviews of bert. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-15.
  • OpenAI – Application for US trademark "GPT" has failed
    1 project | news.ycombinator.com | 15 Feb 2024
    "…task-specific parameters, and is trained on the downstream tasks by simply fine-tuning all pre-trained parameters." [0]

    [0] https://arxiv.org/abs/1810.04805
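    A hedged sketch of the recipe that excerpt describes, using the Hugging Face transformers package rather than the original google-research/bert code: load the pre-trained encoder, add one small task-specific head, and fine-tune all parameters on the downstream task.

        import torch
        from transformers import BertForSequenceClassification, BertTokenizer

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=2  # the one task-specific head
        )

        inputs = tokenizer("a toy training example", return_tensors="pt")
        labels = torch.tensor([1])

        # "simply fine-tuning all pre-trained parameters":
        optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()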

  • Integrate LLM Frameworks
    5 projects | dev.to | 10 Dec 2023
    The release of BERT in 2018 kicked off the language model revolution. The Transformers architecture succeeded RNNs and LSTMs to become the architecture of choice. Unbelievable progress was made in a number of areas: summarization, translation, text classification, entity classification and more. 2023 took things to another level with the rise of large language models (LLMs). Models with billions of parameters showed an amazing ability to generate coherent dialogue.
  • Embeddings: What they are and why they matter
    9 projects | news.ycombinator.com | 24 Oct 2023
    The general idea is that you have a particular task & dataset, and you optimize these vectors to maximize that task. So the properties of these vectors - what information is retained and what is left out during the 'compression' - are effectively determined by that task.

    In general, the core task for the various "LLM tools" involves prediction of a hidden word, trained on very large quantities of real text - thus also mirroring whatever structure (linguistic, syntactic, semantic, factual, social bias, etc) exists there.

    If you want to see how the sausage is made and look at the actual algorithms, then the two key approaches to read up on would probably be Mikolov's word2vec (https://arxiv.org/abs/1301.3781), with its CBOW (Continuous Bag of Words) and Continuous Skip-Gram models, which are based on relatively simple math optimization, and then the BERT (https://arxiv.org/abs/1810.04805) architecture, which does a conceptually similar thing but with a large neural network that can learn more from the same data. For both, you can either read the original papers or look up blog posts or videos that explain them; different people have different preferences on how readable academic papers are.
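    To make that concrete, a small sketch (assuming the Hugging Face transformers package) of pulling an embedding out of a pre-trained BERT; mean pooling over the token vectors is one common choice, not the only one:

        import torch
        from transformers import BertModel, BertTokenizer

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertModel.from_pretrained("bert-base-uncased")

        with torch.no_grad():
            inputs = tokenizer("Embeddings compress text into vectors.",
                               return_tensors="pt")
            hidden = model(**inputs).last_hidden_state  # (1, n_tokens, 768)
            embedding = hidden.mean(dim=1).squeeze(0)   # (768,) sentence vector

        print(embedding.shape)  # torch.Size([768])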

  • Ernie, China's ChatGPT, Cracks Under Pressure
    1 project | news.ycombinator.com | 7 Sep 2023
  • Ask HN: How to Break into AI Engineering
    2 projects | news.ycombinator.com | 22 Jun 2023
    Could you post a link to "the BERT paper"? I've read some, but would be interested in reading anything that anyone considers definitive :) Is it this one? "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding": https://arxiv.org/abs/1810.04805
  • How to leverage the state-of-the-art NLP models in Rust
    3 projects | /r/infinilabs | 7 Jun 2023
    The Rust crate rust_bert implements the BERT language model (https://arxiv.org/abs/1810.04805; Devlin, Chang, Lee, Toutanova, 2018). The base model is implemented in the bert_model::BertModel struct. Several language model heads have also been implemented, including:
  • Notes on training BERT from scratch on an 8GB consumer GPU
    1 project | news.ycombinator.com | 2 Jun 2023
    The achievement of training a BERT model to 90% of the GLUE score on a single GPU in ~100 hours is indeed impressive. As for the original BERT pretraining run, the paper [1] mentions that the pretraining took 4 days on 16 TPU chips for the BERT-Base model and 4 days on 64 TPU chips for the BERT-Large model.

    Regarding the translation of these techniques to the pretraining phase for a GPT model, it is possible that some of the optimizations and techniques used for BERT could be applied to GPT as well. However, the specific architecture and training objectives of GPT might require different approaches or additional optimizations.

    As for the SOPHIA optimizer, it is designed to improve the training of deep learning models by adaptively adjusting the learning rate and momentum. According to the paper [2], SOPHIA has shown promising results in various deep learning tasks. It is possible that the SOPHIA optimizer could help improve the training of BERT and GPT models, but further research and experimentation would be needed to confirm its effectiveness in these specific cases.

    [1] https://arxiv.org/abs/1810.04805

  • List of AI-Models
    14 projects | /r/GPT_do_dah | 16 May 2023
  • Bert: Pre-Training of Deep Bidirectional Transformers for Language Understanding
    1 project | news.ycombinator.com | 18 Apr 2023
  • Google internally developed chatbots like ChatGPT years ago
    1 project | news.ycombinator.com | 8 Mar 2023

jax

Posts with mentions or reviews of jax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-22.
  • The Elements of Differentiable Programming
    5 projects | news.ycombinator.com | 22 Mar 2024
    The dual numbers exist just as surely as the real numbers and have been used for well over 100 years.

    https://en.m.wikipedia.org/wiki/Dual_number

    Pytorch has had them for many years.

    https://pytorch.org/docs/stable/generated/torch.autograd.for...

    JAX implements them and uses them exactly as stated in this thread.

    https://github.com/google/jax/discussions/10157#discussionco...

    As you so eloquently stated, "you shouldn't be proclaiming things you don't actually know on a public forum," and doubly so when your claimed "corrections" are so demonstrably and totally incorrect.
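    For reference, a minimal sketch of the forward-mode (dual-number) machinery under discussion, via jax.jvp:

        import jax

        def f(x):
            return x ** 3 + 2.0 * x

        # Propagate the dual number (2 + 1*eps) through f: jvp returns f(2)
        # and the directional derivative f'(2) in a single forward pass.
        primal_out, tangent_out = jax.jvp(f, (2.0,), (1.0,))
        print(primal_out)   # f(2)  = 12.0
        print(tangent_out)  # f'(2) = 3*2**2 + 2 = 14.0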

  • Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch
    6 projects | news.ycombinator.com | 23 Dec 2023
    On your last point, as long as you jit the topmost level, it doesn't matter whether or not you have inner jitted functions. The end result should be the same.

    Source: https://github.com/google/jax/discussions/5199#discussioncom...
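    A small sketch of that claim; the inner @jax.jit is absorbed into the outer trace, so one XLA computation gets compiled either way:

        import jax
        import jax.numpy as jnp

        @jax.jit
        def inner(x):
            return jnp.sin(x) * 2.0

        @jax.jit
        def outer(x):
            # inner is inlined into this trace; jitting the topmost level
            # determines the compiled unit.
            return inner(x) + inner(x * 2.0)

        print(outer(jnp.arange(4.0)))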

  • Apple releases MLX for Apple Silicon
    4 projects | /r/LocalLLaMA | 8 Dec 2023
    The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire.
  • MLPerf training tests put Nvidia ahead, Intel close, and Google well behind
    1 project | news.ycombinator.com | 14 Nov 2023
    I'm still not totally sure what the issue is. Jax uses program transformations to compile programs to run on a variety of hardware, for example, using XLA for TPUs. It can also run cuda ops for Nvidia gpus without issue: https://jax.readthedocs.io/en/latest/installation.html

    There is also support for custom cpp and cuda ops if that's what is needed: https://jax.readthedocs.io/en/latest/Custom_Operation_for_GP...

    I haven't worked with float4, but can imagine that new numerical types would require some special handling. But I assume that's the case for any ml environment.

    But really you probably mean fixed point 4bit integer types? Looks like that has had at least some work done in Jax: https://github.com/google/jax/issues/8566

  • MatX: Efficient C++17 GPU numerical computing library with Python-like syntax
    5 projects | news.ycombinator.com | 3 Oct 2023
    > Are they even comparing apples to apples to claim that they see these improvements over NumPy?

    > While the code complexity and length are roughly the same, the MatX version shows a 2100x over the Numpy version, and over 4x faster than the CuPy version on the same GPU.

    NumPy doesn't use the GPU by default unless you use something like Jax [1] to compile NumPy code to run on GPUs. I think a more honest comparison would run MatX on the same CPU as NumPy, and focus the GPU comparison against CuPy.

    [1] https://github.com/google/jax
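    To illustrate, a sketch of the same array code written once against plain NumPy (CPU only) and once against jax.numpy, which XLA compiles for a GPU/TPU when one is available:

        import numpy as np
        import jax
        import jax.numpy as jnp

        def softmax_np(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        @jax.jit  # compiled by XLA; runs on GPU/TPU if present, else CPU
        def softmax_jnp(x):
            e = jnp.exp(x - x.max())
            return e / e.sum()

        x = np.linspace(-1.0, 1.0, 5)
        print(softmax_np(x))
        print(softmax_jnp(jnp.asarray(x)))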

  • JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
    12 projects | news.ycombinator.com | 28 Sep 2023
    Actually that never changed. The README has always had an example of differentiating through native Python control flow:

    https://github.com/google/jax/commit/948a8db0adf233f333f3e5f...

    The constraints on control flow expressions come from jax.jit (because Python control flow can't be staged out) and jax.vmap (because we can't take multiple branches of Python control flow, which we might need to do for different batch elements). But autodiff of Python-native control flow works fine!
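    A sketch of that distinction: jax.grad sees concrete values and so handles data-dependent Python branches, while jax.jit cannot stage them out:

        import jax

        def f(x):
            if x > 0:          # native Python control flow, data-dependent
                return x ** 2
            return -x

        print(jax.grad(f)(3.0))   # works fine: 6.0
        # jax.jit(f)(3.0) would fail at trace time, because `x > 0` cannot
        # be resolved on an abstract traced value; jit needs lax.cond here.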

  • Julia and Mojo (Modular) Mandelbrot Benchmark
    10 projects | news.ycombinator.com | 8 Sep 2023
    For a similar "benchmark" (also Mandelbrot) but took place in Jax repo discussion: https://github.com/google/jax/discussions/11078#discussionco...
  • Functional Programming 1
    3 projects | news.ycombinator.com | 16 Aug 2023
    2. https://github.com/fantasyland/fantasy-land (A bit heavy on jargon)

    Note there is a Python version of Ramda available on PyPI, and there are a lot of FP tidbits inside JAX:

    3. https://pypi.org/project/ramda/ (Worth making your own version if you want to learn, though)

    4. For nested data, JAX tree_util is epic: https://jax.readthedocs.io/en/latest/jax.tree_util.html and also their curry implementation is funny: https://github.com/google/jax/blob/4ac2bdc2b1d71ec0010412a32...
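    A quick sketch of that tree_util behavior: tree_map applies a function over the leaves of arbitrarily nested dicts, lists, and tuples while preserving the structure.

        import jax

        params = {"layer1": {"w": [1.0, 2.0], "b": 0.5}, "layer2": (3.0, 4.0)}
        doubled = jax.tree_util.tree_map(lambda x: x * 2, params)
        print(doubled["layer1"]["w"])  # [2.0, 4.0]
        print(doubled["layer2"])       # (6.0, 8.0)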

    Anyway, don't put FP on a pedestal; the main thing is to focus on the core principles of avoiding external mutation and making helper functions. That doesn't always work, because some languages like Rust don't have legit support for currying (afaik as of August 2023), but in those cases you can hack it with builder methods to an extent.

    Finally, if you want to understand the middle of the midwit meme, check out this wiki article and connect the free monoid to the Kleene star (0 or more copies of your pattern) and Kleene plus (1 or more copies of your pattern). Those are also in regex so it can help you remember the regex symbols. https://en.wikipedia.org/wiki/Free_monoid?wprov=sfti1

    The simplest example might be {0}^*, in which case:

    0 copies: "" // the empty string, included because we use *
    1 copy: "0"
    2 copies: "00"
    …and so on.

  • Best Way to Learn JAX
    1 project | /r/learnmachinelearning | 13 May 2023
    Hello! I'm trying to learn JAX over the next couple of weeks. Ideally, I want to be comfortable with using it for projects after about 3 weeks to a month, although I understand that may not be realistic. I currently have experience with PyTorch and TensorFlow. How should I go about learning JAX? Is there a specific YouTube tutorial or online course I should use, or should I just use the tutorial on https://jax.readthedocs.io/? Any information, advice, or experience you can share would be much appreciated!
  • Codon: Python Compiler
    9 projects | news.ycombinator.com | 8 May 2023

What are some alternatives?

When comparing bert and jax you can also consider the following projects:

NLTK - NLTK Source

Numba - NumPy aware dynamic Python compiler using LLVM

bert-sklearn - a sklearn wrapper for Google's BERT model

functorch - functorch is JAX-like composable function transforms for PyTorch.

pysimilar - A python library for computing the similarity between two strings (text) based on cosine similarity

julia - The Julia Programming Language

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

PURE - [NAACL 2021] A Frustratingly Easy Approach for Entity and Relation Extraction https://arxiv.org/abs/2010.12812

Cython - The most widely used Python to C compiler

NL_Parser_using_Spacy - NLP parser using NER and TDD

jax-windows-builder - A community supported Windows build for jax.