table-transformer
Table Transformer (TATR) is a deep learning model for extracting tables from unstructured documents (PDFs and images). This is also the official repository for the PubTables-1M dataset and GriTS evaluation metric.
Hey, I upgraded my GPU from a 2015 GTX 980 Ti to an RTX 4090 and ran some training tests to see what gains I got. I am training microsoft/table-transformer for structure recognition on a dataset of around 1M images/annotations, using the training configuration from the authors; notably, the batch_size is only 2. I tested a few configurations with different batch sizes and got these results:
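For anyone wanting to run a similar comparison, here is a minimal sketch of a batch-size throughput benchmark. It does not load the real table-transformer checkpoint (too heavy for a snippet), so a tiny stand-in CNN is used instead; `make_model`, `steps_per_second`, and the batch sizes are all hypothetical choices for illustration, not the authors' setup.

```python
import time
import torch
import torch.nn as nn

def make_model():
    # Hypothetical stand-in for the real DETR-style model,
    # which would be loaded from the microsoft/table-transformer weights.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    )

def steps_per_second(model, batch_size, n_steps=5, size=64):
    """Time forward+backward+optimizer steps at a given batch size."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(batch_size, 3, size, size)
    y = torch.randint(0, 10, (batch_size,))
    start = time.perf_counter()
    for _ in range(n_steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    elapsed = time.perf_counter() - start
    return n_steps / elapsed

for bs in (2, 4, 8):
    rate = steps_per_second(make_model(), bs)
    # images/s (rate * batch size) is the number to compare across configs,
    # since larger batches do fewer steps for the same amount of data.
    print(f"batch_size={bs}: {rate:.1f} steps/s, {rate * bs:.1f} images/s")
```

On a real GPU you would move the model and tensors to `cuda` and add a warm-up pass plus `torch.cuda.synchronize()` before reading the timer, otherwise the numbers mostly measure kernel-launch queuing rather than compute.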