pytudes vs cl-ppcre

| | pytudes | cl-ppcre |
|---|---|---|
| Mentions | 100 | 13 |
| Stars | 22,397 | 291 |
| Growth | - | 0.0% |
| Activity | 8.3 | 3.7 |
| Latest commit | 16 days ago | 11 days ago |
| Language | Jupyter Notebook | Common Lisp |
| License | MIT License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytudes
-
Ask HN: High quality Python scripts or small libraries to learn from
Peter Norvig's work is great to learn from https://github.com/norvig/pytudes
- Norvig's 2023 Advent of Code
- Ask HN: How to build mastery in Python?
- SQL for Data Scientists in 100 Queries
- Bicycling Statistics
-
Ask HN: How to deal with the short vs. long function argument
I've been a programmer for 25 years. A realization that has crept up on me in the last 5 is that not everyone thinks that functions should be short: there are two cultures, with substantial numbers of excellent programmers belonging to both. My question is: how do we maintain harmonious, happy, and productive teams when people can disagree strongly about this issue?
The short-functions camp holds that functions should be short, tend toward the declarative, and use abstraction/implementation-hiding to increase readability (i.e. separable subsections of the function body should often be broken out into well-named helper functions). As an example, look at Peter Norvig's beautiful https://github.com/norvig/pytudes. For a long time I thought that this was how all "good programmers" thought code should be written. Personally, I spent over a decade writing in a dynamic and untyped language, and the only way that I and my colleagues could make that stuff reliable was to write code adhering to the tenets of the short-function camp.
The long-functions camp is, admittedly, alien to me, but I'll try to play devil's advocate and describe it as I think its advocates would. It holds that lots of helper functions are artificial, and actually make it _harder_ to read and understand the code. They say that they like "having lots of context", i.e. seeing all the implementation in one long procedural flow, even though the local variables fall into non-interacting subsets that don't need to be in the same scope. They hold that helper functions destroy the linear flow of the logic, and that they should typically not be created unless there are multiple call sites.
The short-function camp also claims an advantage regarding testability.
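As a concrete illustration of the two styles (a hypothetical Python example, not taken from pytudes or the thread), consider parsing "name: score" report lines. The long-function style keeps everything in one linear scope; the short-function style breaks the parsing step into a named helper that can be unit-tested on its own:

```python
import re

# Long-function style: one linear pass, all locals in one scope.
def summarize_long(lines):
    total = 0
    count = 0
    for line in lines:
        m = re.match(r"\s*(\w+)\s*:\s*(\d+)", line)
        if m:
            total += int(m.group(2))
            count += 1
    return total / count if count else 0.0

# Short-function style: the parsing step is a named, independently
# testable helper; the caller reads declaratively.
def parse_score(line):
    m = re.match(r"\s*(\w+)\s*:\s*(\d+)", line)
    return int(m.group(2)) if m else None

def summarize_short(lines):
    scores = [s for s in (parse_score(l) for l in lines) if s is not None]
    return sum(scores) / len(scores) if scores else 0.0
```

The testability claim is visible here: `parse_score` can be exercised against malformed input directly, while in `summarize_long` the matching logic is only reachable through the whole loop.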
Obviously languages play a major role in this debate: e.g. as mentioned above, untyped dynamic languages encourage short functions, and languages where static compilation makes strong guarantees regarding semantics at least make the long-function position more defensible. Expression-oriented and FP-influenced languages encourage short functions. But it's not obvious, e.g. Rust could go both ways based on the criteria just mentioned.
Anyway, more qualified people could and have written at much greater length about the topic. The questions I propose for discussion include
- Is it "just a matter of taste", or is this actually a more serious matter where there is often an objective reason for discouraging the practices of one or other camp?
- How can members of the different camps get along harmoniously in the same team and the same codebase?
-
Pytudes
I have the same impression. Reading the code, he uses global variables [1], obscure variable names (k, bw, fw, x) and module names ("pal.py" instead of "palindromes.py"), doesn't respect naming conventions in general (uppercase arguments [2], which even the GitHub syntax highlighter is confused by). This feels like code you write for yourself to play with Python and don't plan to read later.
Some parts of the code feel like what I would expect from a junior dev who started learning the language a couple weeks ago.
[1]: https://github.com/norvig/pytudes/blob/952675ffc70f3632e70a7...
[2]: https://github.com/norvig/pytudes/blob/952675ffc70f3632e70a7...
- Ask HN: Where do I find good code to read?
-
Using Prolog in Windows NT Network Configuration (1996)
Prolog is excellent for bikeshedding; in fact that might be its strongest axis. It starts with everything you get in a normal language - naming things, indentation, functional purity vs side effects, where to break code into different files - and builds on that with having your names try to make sense in declarative, relational, logical and imperative contexts, having your predicates (functions) usable in all modes - and then performant in all modes - having your code be deterministic, and then deterministic in all modes.

Being 50 years old, there are five decades of "idiomatic Prolog" ideas to choose from, and five decades of footguns pointing at your two feet; it has tabling, label(l)ing, SLD and SLG resolution to choose from. Built-in constraint solvers are excellent at tempting you into thinking your problem will be well solved by the constraint solvers (it won't be, you idiot, why did you think that was a constraint problem?), two different kinds of arithmetic - one which works but is bad and one which mostly works on integers but clashes with the Prolog solver - and enough metaprogramming that you can build castles in the sky which are very hard to debug instead of real castles.

But wait, there's more! Definite clause grammars let you add the fun of left-recursive parsing problems to all your tasks, while attributed variables allow the Prolog engine to break your code behind the scenes in new and interesting ways; plenty of special syntax not to be sneezed at (-->; [_|[]] {}\[]>>() \X^+() =.. #<==> atchoo (bless you)), a delightful deep-rooted schism between text as linked lists of character codes or text as linked lists of character atoms, and always the ISO-Standard Sword of Damocles hanging over your head as you look at the vast array of slightly-incompatible implementations with no widely accepted CPython-like dominant default.
Somewhere hiding in there is a language with enough flexibility and metaprogramming to let your meat brain stretch as far as you want, enough cyborg attachments to augment you beyond plain human, enough spells and rituals to conjure tentacled sea monsters with excellent logic ability from the cold Atlantic deeps to intimidate your problem into submission.
Which you, dear programmer, can learn to wield up to the advanced level of a toddler in a machine shop in a mere couple of handfuls of long years! Expertise may take a few lifetimes longer - in the meantime have you noticed your code isn't pure, doesn't work in all modes, isn't performant in several modes, isn't using the preferred idiom style, is non-deterministic, can't be used to generate as well as test, falls into a left-recursive endless search after the first result, isn't compatible with other Prolog Systems, and your predicates are poorly named and you use the builtin database which is temptingly convenient but absolutely verboten? Plenty for you to be getting on with, back to the drawing boar...bikeshed with you.
And, cut! No, don't cut; OK, green cuts but not red cuts and I hope you aren't colourblind. Next up, coroutines, freeze, PEngines, and the second 90%.
Visit https://www.metalevel.at/prolog and marvel at a master deftly dissecting problems, in the same way you marvel at Peter Norvig's Pytudes https://github.com/norvig/pytudes , and sob as the wonders turn to clay in your ordinary hands. Luckily it has a squeaky little brute force searcher, dutifully headbutting every wall as it explores all the corners of your problem on its eventual way to an answer, which you can always rely on. And with that it's almost like any other high level mostly-interpreted dynamic programming / scripting language.
cl-ppcre
-
Compile time regular expression in C++
I've never used cl-ppcre myself, but its docs[1] claim that it provides compile-time regexes:
> CL-PPCRE uses compiler macros to pre-compile scanners at load time if possible. This happens if the compiler can determine that the regular expression (no matter if it's a string or an S-expression) is constant at compile time and is intended to save the time for creating scanners at execution time (probably creating the same scanner over and over in a loop).
[1]: https://edicl.github.io/cl-ppcre/
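cl-ppcre achieves this with compiler macros at load time; a rough Python analogue of the same optimisation (my sketch, not from the docs) is hoisting `re.compile` to module level so the scanner is built once at import time instead of on every call inside a loop:

```python
import re

# Compiled once, at module load time -- analogous to cl-ppcre
# pre-compiling a scanner for a constant regex, rather than
# "creating the same scanner over and over in a loop".
DIGITS = re.compile(r"\d+")

def first_number(text):
    """Return the first integer found in text, or None."""
    m = DIGITS.search(text)
    return int(m.group()) if m else None
```

CPython also caches patterns passed to the module-level `re` functions, but the explicit compiled object makes the "constant at compile/load time" intent visible, which is the point the cl-ppcre docs are making.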
- Ask HN: What are some of the most elegant codebases in your favorite language?
-
sbcl and Let Over Lambda
A few weeks back Xach recommended cl-ppcre which i found educational.
-
-🎄- 2022 Day 1 Solutions -🎄-
For simple string processing, there are some functions in the language that you can find listed here (for string-specific functions) and here (for more generic sequence-handling functions). For anything involving regular expressions, cl-ppcre is the way to go, in particular its split and register-groups-bind functions.
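In Python terms (a loose analogue, since `split` and `register-groups-bind` are cl-ppcre functions), the kind of parsing this comment has in mind could be sketched as follows, assuming the AoC 2022 Day 1 input format of blank-line-separated blocks of numbers:

```python
import re

def max_calories(text):
    """Split the input on blank lines (cf. cl-ppcre:split), extract the
    numbers in each block, and return the largest block sum."""
    blocks = re.split(r"\n\s*\n", text.strip())
    return max(sum(int(n) for n in re.findall(r"\d+", block))
               for block in blocks)
```

The Lisp version would bind match groups directly via register-groups-bind where this sketch uses `re.findall`.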
-
The unreasonable effectiveness of f-strings and re.VERBOSE
I must have a serious bug in my writing about this, because this was never about regex engines -- it's about literals and domain-specific sublanguages in general. Composing DSL programs by string concatenation is such a famous source of security bugs you see it in top-10 lists. I linked to the very similar example of a PEG parsing DSL.
But any regex engine that can work with a parse tree shows the same principle, e.g. https://edicl.github.io/cl-ppcre/#create-scanner2
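The f-string plus re.VERBOSE composition being described can be sketched like this (the fragment names and pattern are my own illustration, not the article's example): named sub-patterns are composed into a larger verbose regex, so the DSL program is built from structured pieces rather than ad-hoc string concatenation.

```python
import re

# Named fragments of the pattern, composed with an f-string.
NUMBER = r"(?P<number>\d+(?:\.\d+)?)"
UNIT = r"(?P<unit>km|mi|m)"

# re.VERBOSE lets the composed pattern carry layout and comments.
DISTANCE = re.compile(rf"""
    {NUMBER}   # numeric part
    \s*
    {UNIT}     # unit suffix
""", re.VERBOSE)

def parse_distance(text):
    m = DISTANCE.fullmatch(text.strip())
    return (float(m.group("number")), m.group("unit")) if m else None
```

This is string interpolation too, of course; the safety argument is that each fragment is itself a vetted pattern with a name, which is closer in spirit to composing parse trees (as cl-ppcre's create-scanner accepts) than to splicing raw user input.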
-
Adding Space to subst function
Take a look at - https://github.com/edicl/cl-ppcre
-
Common Lisp ASDF maintainer considers resignation
And here's what I believe represents the reality of the situation... Stas was indeed tired of ASDF's changes. The nature of what changes to make is a matter of judgement, of course, but in this case (I'm thinking of SBCL's bug report request to update ASDF: https://bugs.launchpad.net/sbcl/+bug/1826074), it would be a different matter altogether if the discussion were centered on how best to make the new ASDF work with SBCL, but the thread reads to me like a man who had to put up with too much breakage for the umpteenth time. Now, if (for the sake of argument :D) the change were of the necessary kind -- think hardware changes or security issues -- I can still see myself feeling wronged; it's human to do so. Because I don't trust ASDF anymore, or I feel as if they (or other people at each step of the process) have not shared enough of the burden. But from the discussions I have read (https://github.com/edicl/cl-ppcre/pull/30), what the ASDF maintainers want to change does not seem unreasonable, and they are willing to share the burden. But let us say it's truly a 50/50 deadlock. Well then Linus is right: show us the code, who dares wins. And Stas certainly has enough on his plate. But that's why we must cooperate. You don't have to be a diplomat to know the difference between when two people want to work together and when one party wants out. And this setting makes more sense when you read (https://bugs.launchpad.net/sbcl/+bug/1823442), where Stas honestly states he wants nothing more to do with ASDF. I don't think it's unreasonable to surmise there's a bit more going on here than plainly technical issues.
-
Stas has alienated long-time ASDF maintainer Robert Goldman
Could you just direct me to some existing discussions, in order to save time? I already read this one.
-
#"<your literal interpretation here>" (regular expression literals)
I plan to use the regular expressions with a cl-ppcre wrapper, also emulating various clojure regular expression operations. Similar to re21, which doesn't quite support the operations in the way I'd like (or match the clojure operations), and whose regular expression literal syntax is "#//".
What are some alternatives?
paip-lisp - Lisp code for the textbook "Paradigms of Artificial Intelligence Programming"
sbcl - Mirror of Steel Bank Common Lisp (SBCL)'s official repository
asgi-correlation-id - Request ID propagation for ASGI apps
one-more-re-nightmare - A fast regular expression compiler in Common Lisp
clerk - ⚡️ Moldable Live Programming for Clojure
aoc2022
nbmake - Pytest plugin for testing notebooks
advents-of-code - Solutions for the yearly advent of code challenges
PySimpleGUI - Python GUIs for Humans! PySimpleGUI is the top-rated Python application development environment. Launched in 2018 and actively developed, maintained, and supported in 2024. Transforms tkinter, Qt, WxPython, and Remi into a simple, intuitive, and fun experience for both hobbyists and expert users.
advent-of-code-2022 - back to rust, except i'll use libs where it makes sense
project-based-learning - Curated list of project-based tutorials
advent-of-code - All my advent of code projects