For a command line web browser that doesn't compromise on modern web features, I use browsh[0] especially in low-bandwidth situations. In conjunction with tmux and mosh it's just fantastic because you get the full colors and features of the web with minimal bandwidth.
[0] https://www.brow.sh/
https://github.com/google/oss-fuzz-vulns/tree/main/vulns/cur...
https://github.com/curl/curl/commit/68ffe6c17d6e44b459d60805...
https://www.cvedetails.com/product/25084/Haxx-Curl.html?vend...
Instead of only "thinking a lot about text-based browsers", I have been actively using them on a daily basis for the past 26 years.
Make of this what you will, as I am a dumb end user, not a genius "developer". I am glad that Links does not use libcurl and that it has its own "bespoke" HTML rendering. In all this time, I have yet to see any other program produce better rendering of HTML tables as text. I have had few if any problems with Links versions. I am quite good at "breaking" software, and for me Links has been quite robust. The source code is readable to me, and I have been able to change or "fix" things I do not like, then quickly recompile. Recently I fixed a version of the program so that a certain semantic link would not be shown on Wikipedia pages. No "browser extension" required.
Links' rendering has managed to keep up with the evolution of HTML and web design sufficiently for me. Despite the enormous variation in HTML across the www, there are very few cases where the rendering is unsatisfactory.^1 I cannot say the same for other attempts at text-only clients. W3C's libwww-based line-mode browser still compiles and works,^2 although I would not be satisfied with its rendering. Nor would I be satisfied with edbrowse, or something simpler such as mynx.^3
I use Links primarily for reading and printing HTML. I use a variety of TCP clients for making HTTP requests, including djb's tcpclient, which I am quite sure beats libcurl any day of the week in terms of quality, i.e., the programming skill level of the author and the care with which it was written. This non-libcurl networking code is relatively small and does not need oss-fuzz. I do not intentionally use libcurl. It is too large and complex for my tastes. For TLS, I mainly use stunnel and haproxy.
1. One rare example I can recall is https://archive.is
2. https://github.com/w3c/libwww
3. https://github.com/SirWumpus/ioccc-mynx
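For readers unfamiliar with the tcpclient-plus-stunnel workflow described above, here is a minimal sketch. The hostnames and the local port are placeholders, not anything from the original comment. djb's tcpclient (from ucspi-tcp) connects to a host and runs a program with the socket available on file descriptor 6 (read) and 7 (write); stunnel in client mode can then provide the TLS leg by forwarding a local plaintext port to a remote TLS port.

```shell
# Plain HTTP: tcpclient hands the connected socket to sh on fds 6 and 7.
tcpclient example.com 80 sh -c '
  printf "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n" >&7
  cat <&6
'

# For HTTPS, run stunnel in client mode with a config like:
#
#   client = yes
#   [https]
#   accept  = 127.0.0.1:8080
#   connect = example.com:443
#
# then speak plaintext HTTP to the local end of the tunnel:
tcpclient 127.0.0.1 8080 sh -c '
  printf "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n" >&7
  cat <&6
'
```

This is a sketch of the general technique only; it requires network access, and the actual clients and TLS setup the commenter uses may differ.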