Efficiently reading utf-8 chars from a large file: How to improve, test and benchmark my implementation?

This page summarizes the projects mentioned and recommended in the original post on /r/rust

  • crates.io

    The Rust package registry

  • Hello everyone, I'm a fairly inexperienced developer and I recently wanted to read a large one-line file without loading the whole file into memory, but still with some buffering. I looked a bit on crates.io but I did not find what I wanted. I found this question on here, mentioning the utf-8 crate, but it seems unmaintained and poorly documented. I tried to dive into the code, but I figured it would be faster to just make a clean version myself (the first sketch after the project list below illustrates this kind of streaming read).

  • rust-utf8

    (Discontinued) Incremental, zero-copy UTF-8 decoding for Rust

  • My issue with this is that I would like to return a reference to the slice of data read. next_strict from the utf-8 crate does this by calling consume, if needed, before fill_buf, assuming that a following call will call consume afterwards (see the explanation in fill_buf's docs if that's useful). But I don't want to assume this, so it seems the best I can do is to copy into a caller-provided buffer, like io::BufRead::read_line does (the second sketch below illustrates that approach).

  • cargo-fuzz

    Command line helpers for fuzzing

  • Check out https://rust-fuzz.github.io/book/cargo-fuzz.html
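
A minimal sketch of the kind of buffered read described in the opening comment (the function name, the 8 KiB buffer size and the error handling are made up for illustration, not taken from the thread or from any crate): the file is pulled through a fixed-size buffer, and str::from_utf8 together with Utf8Error::valid_up_to handles characters that get split across two reads.

```rust
use std::fs::File;
use std::io::{self, Read};
use std::str;

/// Stream a file and hand each run of complete UTF-8 characters to `handle`,
/// never holding more than one fixed-size buffer in memory. Bytes of a
/// character split across two reads (at most 3 of them) are carried over
/// to the next iteration.
fn for_each_utf8_chunk(path: &str, mut handle: impl FnMut(&str)) -> io::Result<()> {
    let mut file = File::open(path)?;
    let mut buf = vec![0u8; 8 * 1024];
    let mut filled = 0; // number of bytes currently held in `buf`

    loop {
        let n = file.read(&mut buf[filled..])?;
        if n == 0 {
            // EOF: leftover bytes would be an unfinished character.
            return if filled == 0 {
                Ok(())
            } else {
                Err(io::Error::new(
                    io::ErrorKind::InvalidData,
                    "file ends in the middle of a UTF-8 character",
                ))
            };
        }
        filled += n;

        // Decode as much of the buffer as forms complete characters.
        let valid = match str::from_utf8(&buf[..filled]) {
            Ok(s) => {
                handle(s);
                filled
            }
            // `error_len() == None` means the only problem is a character
            // cut off at the end of what has been read so far.
            Err(e) if e.error_len().is_none() => {
                handle(str::from_utf8(&buf[..e.valid_up_to()]).unwrap());
                e.valid_up_to()
            }
            Err(e) => return Err(io::Error::new(io::ErrorKind::InvalidData, e)),
        };

        // Move the unfinished character's bytes to the front for the next read.
        buf.copy_within(valid..filled, 0);
        filled -= valid;
    }
}
```

Handing chunks to a closure means the caller never holds a reference that outlives a single read; the next sketch shows the other option raised in the thread, copying into a buffer supplied by the caller.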

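For the read_line-style alternative from the second comment, here is a rough sketch; the Utf8ChunkReader type, the read_chunk method and the utf8_len helper are invented names for illustration, not the utf-8 crate's API. Decoded text is appended to a String supplied by the caller, so fill_buf and consume stay balanced inside a single call and no reference to the reader's internal buffer escapes.

```rust
use std::io::{self, BufRead, Read};
use std::str;

/// Sketch of a `read_line`-style UTF-8 decoder over any `BufRead`.
pub struct Utf8ChunkReader<R> {
    inner: R,
}

impl<R: BufRead> Utf8ChunkReader<R> {
    pub fn new(inner: R) -> Self {
        Utf8ChunkReader { inner }
    }

    /// Appends the next run of complete characters to `out` and returns the
    /// number of bytes consumed; `Ok(0)` means end of input.
    pub fn read_chunk(&mut self, out: &mut String) -> io::Result<usize> {
        // Copy everything we need out of the internal buffer inside this
        // block, so the borrow from `fill_buf` ends before `consume`.
        let (consumed, split) = {
            let buf = self.inner.fill_buf()?;
            if buf.is_empty() {
                return Ok(0);
            }
            match str::from_utf8(buf) {
                Ok(s) => {
                    out.push_str(s);
                    (buf.len(), None)
                }
                // A character is cut off at the end of the internal buffer:
                // keep its first bytes so we can finish it after consuming.
                Err(e) if e.error_len().is_none() => {
                    let valid = e.valid_up_to();
                    out.push_str(str::from_utf8(&buf[..valid]).unwrap());
                    let mut start = [0u8; 4];
                    let have = buf.len() - valid;
                    start[..have].copy_from_slice(&buf[valid..]);
                    (buf.len(), Some((start, have)))
                }
                Err(e) => return Err(io::Error::new(io::ErrorKind::InvalidData, e)),
            }
        };
        self.inner.consume(consumed);

        // Finish the split character by reading its remaining bytes
        // (`BufRead` is also `Read`, so this refills transparently).
        if let Some((mut bytes, have)) = split {
            let need = utf8_len(bytes[0])?;
            self.inner.read_exact(&mut bytes[have..need])?;
            let ch = str::from_utf8(&bytes[..need])
                .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
            out.push_str(ch);
            return Ok(consumed + need - have);
        }
        Ok(consumed)
    }
}

/// Expected length of a UTF-8 sequence, from its leading byte.
fn utf8_len(lead: u8) -> io::Result<usize> {
    match lead {
        0x00..=0x7F => Ok(1),
        0xC2..=0xDF => Ok(2),
        0xE0..=0xEF => Ok(3),
        0xF0..=0xF4 => Ok(4),
        _ => Err(io::Error::new(io::ErrorKind::InvalidData, "invalid UTF-8 leading byte")),
    }
}
```

Returning borrowed slices straight out of the reader's buffer would instead require each call to invalidate the previous result, which is exactly the consume-before-fill_buf trade-off described in the comment above.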

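To go with the cargo-fuzz suggestion, here is a sketch of what a fuzz target might look like, assuming the hypothetical Utf8ChunkReader from the previous sketch were exposed from a crate named my_utf8_reader and the target were added with `cargo fuzz add chunks` as described in the linked book. It cross-checks the streaming decoder against the standard library's one-shot str::from_utf8 on arbitrary byte input.

```rust
// fuzz/fuzz_targets/chunks.rs
#![no_main]

use libfuzzer_sys::fuzz_target;
use my_utf8_reader::Utf8ChunkReader; // hypothetical crate/type from the sketch above
use std::io::{BufReader, Cursor};

fuzz_target!(|data: &[u8]| {
    // A tiny internal buffer forces characters to be split across refills,
    // which is exactly the code path worth fuzzing.
    let reader = BufReader::with_capacity(4, Cursor::new(data));
    let mut chunks = Utf8ChunkReader::new(reader);

    let mut streamed = String::new();
    let mut failed = false;
    loop {
        match chunks.read_chunk(&mut streamed) {
            Ok(0) => break,
            Ok(_) => {}
            Err(_) => {
                failed = true;
                break;
            }
        }
    }

    // The streaming decoder must agree with the one-shot decoder in std.
    match std::str::from_utf8(data) {
        Ok(expected) => {
            assert!(!failed, "rejected valid UTF-8");
            assert_eq!(streamed, expected);
        }
        Err(_) => assert!(failed, "accepted invalid UTF-8"),
    }
});
```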