Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming Large Language Models"
JPEGDEC seems to be about 500 lines: https://github.com/bitbank2/JPEGDEC
Candle already exists[1], and it runs pretty well. It can use both CUDA and Metal backends (or just plain old CPU).
[1] https://github.com/huggingface/candle
I'd like to think he took the name from my llm.f90 project https://github.com/rbitr/llm.f90
It was originally based on Karpathy's llama2.c, but I renamed it when I added support for other architectures.
Probably a coincidence :)
Yes, general-purpose LLMs can be used for time series forecasting:
https://github.com/KimMeen/Time-LLM
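For intuition, here is a minimal PyTorch sketch of the general approach (not the Time-LLM code itself): slice the series into patches, project each patch into the backbone's embedding space, run a frozen sequence model over the patch embeddings, and map the last hidden state to the forecast horizon. The small `nn.TransformerEncoder` below stands in for the frozen pretrained LLM, and names like `PatchForecaster`, `patch_len`, and `horizon` are purely illustrative.

```python
# Sketch of LLM-style time series forecasting via patching.
# NOTE: an illustration of the general idea, not the Time-LLM code; the
# TransformerEncoder stands in for a frozen pretrained LLM backbone.
import torch
import torch.nn as nn

class PatchForecaster(nn.Module):
    def __init__(self, patch_len=16, d_model=128, horizon=24):
        super().__init__()
        self.patch_len = patch_len
        # Project raw patches of the series into the backbone's embedding space.
        self.embed = nn.Linear(patch_len, d_model)
        # Stand-in for a frozen LLM; in Time-LLM this would be e.g. LLaMA or GPT-2.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # keep the backbone frozen
        # Map the last patch's hidden state to the forecast horizon.
        self.head = nn.Linear(d_model, horizon)

    def forward(self, series):                             # series: (batch, length)
        b, t = series.shape
        patches = series[:, : t - t % self.patch_len]
        patches = patches.reshape(b, -1, self.patch_len)   # (batch, n_patches, patch_len)
        h = self.backbone(self.embed(patches))
        return self.head(h[:, -1])                         # (batch, horizon)

x = torch.randn(4, 96)             # four series of length 96
print(PatchForecaster()(x).shape)  # torch.Size([4, 24])
```

The actual method additionally "reprograms" the patches with text prototypes and prompt prefixes; the frozen-backbone-plus-small-trainable-projections structure is the part this sketch keeps.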
Check out Hidet [1]. It's not as well funded, but it delivers Python-based ML acceleration with GPU support (unlike Mojo).
[1] https://github.com/hidet-org/hidet
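To show what using it looks like: Hidet plugs into torch.compile as a backend, so trying it on an existing PyTorch model is roughly the sketch below (based on the project's documented `backend='hidet'` usage; assumes `pip install hidet` and a CUDA GPU, and details may vary by version).

```python
# Rough sketch of accelerating a PyTorch model with Hidet via torch.compile.
# Assumes `pip install hidet` and a CUDA-capable GPU.
import torch
import hidet  # importing hidet registers the 'hidet' torch.compile backend

model = torch.nn.Sequential(
    torch.nn.Linear(512, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10)
).cuda().eval()
x = torch.randn(8, 512, device="cuda")

compiled = torch.compile(model, backend="hidet")  # Hidet takes over graph compilation
with torch.no_grad():
    print(compiled(x).shape)  # torch.Size([8, 10])
```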
I'm the creator behind https://github.com/nlpodyssey/rwkv.f90. How about joining forces?
Most modern A/V codecs won't fit within that limit by several orders of magnitude.
Even a standard-compliant JPEG decoder would be hard to squeeze in without some serious code golfing. Discarding some barely used features gets you close to that limit, though [1].
The smallest popular TCP/IP stack [2] is ~20 kLoC.
[1] https://github.com/richgel999/picojpeg
[2] https://savannah.nongnu.org/projects/lwip/