bert VS simdjson

Compare bert vs simdjson and see what their differences are.

simdjson

Parsing gigabytes of JSON per second : used by Facebook/Meta Velox, the Node.js runtime, ClickHouse, WatermelonDB, Apache Doris, Milvus, StarRocks (by simdjson)
                   bert                  simdjson
Mentions           49                    65
Stars              36,992                18,362
Growth             1.2%                  1.2%
Activity           0.0                   9.2
Last commit        16 days ago           16 days ago
Language           Python                C++
License            Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

bert

Posts with mentions or reviews of bert. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • OpenAI – Application for US trademark "GPT" has failed
    1 project | news.ycombinator.com | 15 Feb 2024
    task-specific parameters, and is trained on the downstream tasks by simply fine-tuning all pre-trained parameters.

    [0] https://arxiv.org/abs/1810.04805

  • Integrate LLM Frameworks
    5 projects | dev.to | 10 Dec 2023
    The release of BERT in 2018 kicked off the language model revolution. The Transformers architecture succeeded RNNs and LSTMs to become the architecture of choice. Unbelievable progress was made in a number of areas: summarization, translation, text classification, entity classification and more. 2023 took things to another level with the rise of large language models (LLMs). Models with billions of parameters showed an amazing ability to generate coherent dialogue.
  • Embeddings: What they are and why they matter
    9 projects | news.ycombinator.com | 24 Oct 2023
    The general idea is that you have a particular task & dataset, and you optimize these vectors to maximize that task. So the properties of these vectors - what information is retained and what is left out during the 'compression' - are effectively determined by that task.

    In general, the core task for the various "LLM tools" involves prediction of a hidden word, trained on very large quantities of real text - thus also mirroring whatever structure (linguistic, syntactic, semantic, factual, social bias, etc) exists there.

    If you want to see how the sausage is made and look at the actual algorithms, the two key approaches to read up on would probably be Mikolov's word2vec (https://arxiv.org/abs/1301.3781), with the CBOW (Continuous Bag of Words) and continuous skip-gram models, which are based on relatively simple math optimization, and then BERT (https://arxiv.org/abs/1810.04805), which does a conceptually similar thing but with a large neural network that can learn more from the same data. For both of them, you can either read the original papers or look up blog posts or videos that explain them; different people have different preferences on how readable academic papers are.
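
    To make "relatively simple math optimization" concrete, this is the skip-gram training objective from the word2vec papers (a summary, not part of the comment above): maximize the average log-probability of the context words around each center word, where T is the number of training words, c the context window, W the vocabulary size, and v_w, v'_w the "input" and "output" vectors of word w. In practice the papers approximate the full softmax with hierarchical softmax or negative sampling.

        J(\theta) = \frac{1}{T} \sum_{t=1}^{T} \; \sum_{-c \le j \le c,\ j \ne 0} \log p(w_{t+j} \mid w_t)

        p(w_O \mid w_I) = \frac{\exp\left( {v'_{w_O}}^{\top} v_{w_I} \right)}{\sum_{w=1}^{W} \exp\left( {v'_w}^{\top} v_{w_I} \right)}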

  • Ernie, China's ChatGPT, Cracks Under Pressure
    1 project | news.ycombinator.com | 7 Sep 2023
  • Ask HN: How to Break into AI Engineering
    2 projects | news.ycombinator.com | 22 Jun 2023
    Could you post a link to "the BERT paper"? I've read some, but would be interested in reading anything that anyone considers definitive :) Is it this one? "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding": https://arxiv.org/abs/1810.04805
  • How to leverage the state-of-the-art NLP models in Rust
    3 projects | /r/infinilabs | 7 Jun 2023
    The Rust crate rust_bert implements the BERT language model (https://arxiv.org/abs/1810.04805; Devlin, Chang, Lee, Toutanova, 2018). The base model is implemented in the bert_model::BertModel struct. Several language model heads have also been implemented, including:
  • Notes on training BERT from scratch on an 8GB consumer GPU
    1 project | news.ycombinator.com | 2 Jun 2023
    The achievement of training a BERT model to 90% of the GLUE score on a single GPU in ~100 hours is indeed impressive. As for the original BERT pretraining run, the paper [1] mentions that the pretraining took 4 days on 16 TPU chips for the BERT-Base model and 4 days on 64 TPU chips for the BERT-Large model.

    Regarding the translation of these techniques to the pretraining phase for a GPT model, it is possible that some of the optimizations and techniques used for BERT could be applied to GPT as well. However, the specific architecture and training objectives of GPT might require different approaches or additional optimizations.

    As for the SOPHIA optimizer, it is designed to improve the training of deep learning models by adaptively adjusting the learning rate and momentum. According to the paper [2], SOPHIA has shown promising results in various deep learning tasks. It is possible that the SOPHIA optimizer could help improve the training of BERT and GPT models, but further research and experimentation would be needed to confirm its effectiveness in these specific cases.

    [1] https://arxiv.org/abs/1810.04805

  • List of AI-Models
    14 projects | /r/GPT_do_dah | 16 May 2023
  • Bert: Pre-Training of Deep Bidirectional Transformers for Language Understanding
    1 project | news.ycombinator.com | 18 Apr 2023
  • Google internally developed chatbots like ChatGPT years ago
    1 project | news.ycombinator.com | 8 Mar 2023

simdjson

Posts with mentions or reviews of simdjson. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-20.
  • Tips on adding JSON output to your command line utility. (2021)
    2 projects | news.ycombinator.com | 20 Apr 2024
    It's also supported by simdjson [0] (which has a lot of language bindings [1]):

    > Multithreaded processing of gigantic Newline-Delimited JSON (ndjson) and related formats at 3.5 GB/s

    [0] https://simdjson.org/

    [1] https://github.com/simdjson/simdjson?tab=readme-ov-file#bind...
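
    As a sketch of what that ndjson path looks like: simdjson's On-Demand API exposes iterate_many for document streams. The snippet below follows the project's documentation, but exact API details may differ slightly between simdjson versions; compile with exceptions enabled.

        #include <iostream>
        #include "simdjson.h"
        using namespace simdjson;

        int main() {
            // Three whitespace-delimited JSON documents in one padded buffer.
            auto ndjson = R"({"id":1} {"id":2} {"id":3})"_padded;
            ondemand::parser parser;
            // iterate_many walks a stream of concatenated or newline-delimited documents.
            ondemand::document_stream docs = parser.iterate_many(ndjson);
            for (auto doc : docs) {
                std::cout << int64_t(doc["id"]) << "\n"; // throws simdjson_error on failure
            }
        }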

  • 1BRC Merykitty's Magic SWAR: 8 Lines of Code Explained in 3k Words
    4 projects | news.ycombinator.com | 9 Mar 2024
  • Training great LLMs from ground zero in the wilderness as a startup
    3 projects | news.ycombinator.com | 6 Mar 2024
  • simdjson: Parsing Gigabytes of JSON per Second
    1 project | news.ycombinator.com | 23 Jan 2024
  • Use any web browser as GUI, with Zig in the back end and HTML5 in the front end
    17 projects | news.ycombinator.com | 1 Jan 2024
    String parsing is negligible compared to the speed of the DOM which is glacially slow: https://news.ycombinator.com/item?id=38835920

    Come on, people, make an effort to learn how insanely fast computers are, and how insanely inefficient our software is.

    String parsing can be done at gigabytes per second: https://github.com/simdjson/simdjson If you think that is the slowest operation in the browser, please find some resources that explain what is actually happening in the browser.

  • Cray-1 performance vs. modern CPUs
    4 projects | news.ycombinator.com | 25 Dec 2023
    Thanks for all the detailed information! That answers a bunch of my questions and the implementation of strlen is nice.

    The instruction I was thinking of is pshufb. An example ‘weird’ use can be found for detecting white space in simdjson: https://github.com/simdjson/simdjson/blob/24b44309fb52c3e2c5...

    This works as follows:

    1. Observe that each ascii whitespace character ends with a different nibble.

    2. Make a 16-byte vector in which entry i is the whitespace character whose final nibble is i, or otherwise some character whose final nibble differs from i (e.g. the first element is space = 0x20; the next could be e.g. 0xff, but not 0xf1, as that ends in the same nibble as its index).

    3. For each block where you want to find white space, compute pcmpeqb(pshufb(whitespace, input), input). The rules of pshufb mean (a) non-ascii (ie bit 7 set) characters go to 0 so will compare false, (b) other characters are replaced with an element of whitespace according to their last nibble so will compare equal only if they are that whitespace character.

    I’m not sure how easy it would be to do such tricks with vgather.vv. In particular, the length of the input doesn’t matter (could be longer) but the length of white space must be 16 bytes. I’m not sure how the whole vlen stuff interacts with tricks like this where you (a) require certain fixed lengths and (b) may have different lengths for tables and input vectors. (and indeed there might just be better ways, eg you could imagine an operation with a 256-bit register where you permute some vector of bytes by sign-extending the nth bit of the 256-bit register into the result where the input byte is n).
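
    A compilable illustration of steps 1-3 with SSE intrinsics (a sketch, not simdjson's actual table: here the filler byte is 0x80, which no 7-bit ASCII input byte can equal; build with -mssse3):

        #include <immintrin.h>  // SSSE3: _mm_shuffle_epi8 (pshufb)
        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Bit i of the result is set iff block[i] is JSON whitespace
        // (space 0x20, tab 0x09, linefeed 0x0a, carriage return 0x0d).
        static uint16_t whitespace_mask(const char block[16]) {
            const char F = (char)0x80;  // filler: bit 7 set, equals no ASCII byte
            // Entry i holds the whitespace character whose final nibble is i.
            const __m128i table = _mm_setr_epi8(
                ' ', F, F, F, F, F, F, F,          // index 0x0 -> space (0x20)
                F, '\t', '\n', F, F, '\r', F, F);  // 0x9 -> \t, 0xa -> \n, 0xd -> \r
            __m128i in = _mm_loadu_si128((const __m128i*)block);
            // pshufb: bytes with bit 7 set map to 0; others select table[b & 0xf].
            __m128i looked_up = _mm_shuffle_epi8(table, in);
            // Equal only if the byte is exactly the whitespace char at its own nibble.
            __m128i eq = _mm_cmpeq_epi8(looked_up, in);
            return (uint16_t)_mm_movemask_epi8(eq);
        }

        int main() {
            char buf[16];
            std::memcpy(buf, "a b\tc\nd\re fghij", 16);
            std::printf("mask = 0x%04x\n", whitespace_mask(buf)); // bits 1,3,5,7,9 set
        }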

  • Codebases to read
    5 projects | /r/cpp | 5 Dec 2023
    Additionally, if you like low level stuff, check out libfmt (https://github.com/fmtlib/fmt) - not a big project, not difficult to understand. Or something like simdjson (https://github.com/simdjson/simdjson).
  • Simdjson: Parsing Gigabytes of JSON per Second
    1 project | news.ycombinator.com | 30 Nov 2023
  • Building a high performance JSON parser
    19 projects | news.ycombinator.com | 5 Nov 2023
    Everything you said is totally reasonable. I'm a big fan of napkin math and theoretical upper bounds on performance.

    simdjson (https://github.com/simdjson/simdjson) claims to fully parse JSON on the order of 3 GB/sec. Which is faster than OP's Go whitespace parsing! These tests are running on different hardware so it's not apples-to-apples.

    The phrase "cannot go faster than this" is just begging for a "well ackshully", which I hate to do. But the fact that there is an existence proof of Problem A running faster in C++ SIMD than OP's Problem B in scalar Go is quite interesting and worth calling out imho. But I admit it doesn't change the rest of the post.
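
    For reference, basic simdjson usage looks like this (adapted from the quickstart in the project's README; twitter.json is the sample document shipped with the repo):

        #include <iostream>
        #include "simdjson.h"
        using namespace simdjson;

        int main() {
            ondemand::parser parser;
            // Load the file into a buffer with the padding simdjson requires.
            padded_string json = padded_string::load("twitter.json");
            ondemand::document tweets = parser.iterate(json);
            std::cout << uint64_t(tweets["search_metadata"]["count"])
                      << " results." << std::endl;
        }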

  • New package : lspce - a simple LSP Client for Emacs
    4 projects | /r/emacs | 30 Jun 2023
    I have the same question as /u/JDRiverRun: how do you deal with JSON, do you parse it on the Rust side or the Emacs side? I see that you are requiring json.el in your lspce.el, but I haven't looked through the entire file carefully. If you parse on the Rust side, do you use simdjson (there are at least two Rust bindings to it)? If yes, what are your impressions and experiences compared to a more "standard" JSON library?

What are some alternatives?

When comparing bert and simdjson you can also consider the following projects:

NLTK - NLTK Source

RapidJSON - A fast JSON parser/generator for C++ with both SAX/DOM style API

bert-sklearn - a sklearn wrapper for Google's BERT model

jsoniter - jsoniter (json-iterator) is a fast and flexible JSON parser available in Java and Go

pysimilar - A python library for computing the similarity between two strings (text) based on cosine similarity

json - JSON for Modern C++

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

json-schema-validator - JSON schema validator for JSON for Modern C++

NL_Parser_using_Spacy - NLP parser using NER and TDD

JsonCpp - A C++ library for interacting with JSON.

PURE - [NAACL 2021] A Frustratingly Easy Approach for Entity and Relation Extraction https://arxiv.org/abs/2010.12812

json - A C++11 library for parsing and serializing JSON to and from a DOM container in memory.