RawParser vs nbdev

| | RawParser | nbdev |
|---|---|---|
| Mentions | 3 | 45 |
| Stars | 8 | 4,744 |
| Growth | - | 0.6% |
| Activity | 0.0 | 6.5 |
| Latest Commit | almost 2 years ago | 9 days ago |
| Language | C | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RawParser
- Literate programming is much more than just commenting code
I have started working on a program that can parse Markdown files with fragments of C code and weave those fragments into a C program that can be compiled. For an example input, see https://github.com/FransFaase/RawParser#documentation
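The "weave" step described here - collecting the fenced C fragments from a Markdown file, in order, into one compilable source - can be sketched in a few lines of Python (a simplified illustration only, not RawParser's actual implementation; the fence convention and function name are assumptions):

```python
import re

FENCE = "`" * 3  # a fenced-code-block delimiter, built up to keep this example readable

def tangle(markdown_text):
    # Collect the bodies of all fenced C code blocks, in document order
    pattern = re.compile(FENCE + r"c\n(.*?)" + FENCE, re.DOTALL)
    return "".join(pattern.findall(markdown_text))

# A tiny Markdown document with two C fragments interleaved with prose
doc = (
    "Some prose.\n"
    + FENCE + "c\n"
    + "#include <stdio.h>\n"
    + FENCE + "\n"
    + "More prose.\n"
    + FENCE + "c\n"
    + "int main(void) { return 0; }\n"
    + FENCE + "\n"
)
print(tangle(doc))  # the two fragments, concatenated into one compilable unit
```

A real tool would also handle named fragments and out-of-order assembly, which is what distinguishes literate programming from simple extraction.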
- Show HN: JWEB (a modern implementation of the CWEB Literate Programming system)
- Show HN: Carburetta – C/C++ Fused Scanner and Parser Generator
The distinction between a scanner and a parser is somewhat arbitrary; one could use one and the same formalism for both. The scanner usually deals with things that are considered 'atomic' elements of the language, while the grammar is used for 'compound' elements consisting of one or more other elements. If they are seen as one and the same, then it naturally follows that the scanner is called from the parser, rather than acting as a first pass, as is traditionally done. The traditional approach seems logical, but in practice, when scanning is context-sensitive, it requires all kinds of hacks. Also, for the treatment of keywords (which may be case-insensitive), it is better to have a grammar rule that parses an 'identifier' and then to check whether the result matches a keyword. For pure performance this would not be the best solution, but I understand that Carburetta is not designed for that. I have been developing a parser in C that makes no distinction between scanning and parsing, which I called RawParser: https://github.com/FransFaase/RawParser . It also offers more powerful grammar constructs and gives examples of how to implement memory management in a uniform way.
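The keyword treatment suggested in this comment - scan a generic identifier first, then check (possibly case-insensitively) whether it matches a keyword - can be sketched as follows (an illustrative Python sketch, not RawParser's code; the keyword set and function name are made up):

```python
KEYWORDS = {"if", "else", "while", "return"}  # hypothetical keyword table

def classify(token, case_insensitive=True):
    # Parse as a plain identifier first, then consult the keyword table,
    # rather than baking each keyword into the scanner's state machine
    word = token.lower() if case_insensitive else token
    if word in KEYWORDS:
        return ("keyword", word)
    return ("identifier", token)

print(classify("WHILE"))    # ('keyword', 'while')
print(classify("counter"))  # ('identifier', 'counter')
```

The design choice here is that case-insensitivity becomes a single flag on the lookup instead of a property scattered through the scanner's rules.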
nbdev
- The Jupyter+Git problem is now solved
- What is literate programming used for?
One example I've seen is ML/DL folks using jupyter notebooks to develop DL libraries in jupyter notebooks, see https://github.com/fastai/nbdev
- GitHub Accelerator: our first cohort and what's next
- https://github.com/fastai/nbdev: Increase developer productivity by 10x with a new exploratory programming workflow.
- Startups are in first batch of GitHub OS Accelerator
9. Nbdev: Boost developer productivity with an exploratory programming workflow - https://nbdev.fast.ai/
- Start learning python for a Statistician with SAS experience and little R experience
See if you like the nbdev way of working with data through Python and Jupyter. nbdev is an optional component that creates Python packages from Jupyter notebooks. Even the simple tutorials are opinionated and will guide you to unit test your code and write CI/CD pipelines.
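The packaging step mentioned here - nbdev collecting specially marked notebook cells into a Python module - can be illustrated with a much-simplified sketch (real notebooks are JSON and nbdev's actual exporter is far richer; here cells are plain strings, and only the `#| export` directive is modeled):

```python
def export_cells(cells):
    # Keep only cells whose first line is the '#| export' directive,
    # dropping the directive line itself (simplified nbdev-style export)
    exported = []
    for cell in cells:
        lines = cell.splitlines()
        if lines and lines[0].strip() == "#| export":
            exported.append("\n".join(lines[1:]))
    return "\n\n".join(exported)

cells = [
    "#| export\ndef add(a, b):\n    return a + b",
    "add(2, 3)  # exploratory cell, stays in the notebook only",
]
print(export_cells(cells))  # only the directive-marked cell body
```

The point of the workflow is exactly this split: exploratory cells stay in the notebook, while marked cells become the installable package.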
- FastKafka - free open source python lib for building Kafka-based services
- isn't this just too much for a take home assignment?
You probably don't have time for this for the purposes of your task, but I will also throw in a recommendation of nbdev, especially if you're a Python person. I haven't had a project to use it on yet, but I've gone through the docs and the walkthrough, and it seems like a great framework for starting potential projects with all the infrastructure needed for if/when they eventually get big and need all the packaging and such.
- Any experience dealing with a non-technical manager?
nbdev: jupyter notebooks -> python package
- Resources to bridge the gap between jupyter notebooks and regular python development
Take a look at https://github.com/fastai/nbdev - I haven't used it, but supposedly the whole fast.ai library was written that way. It sounds like a natural direction in your scenario - allowing you to keep working in a familiar environment while still producing production-ready code (well, at least on paper 😅)
- Rant: Jupyter notebooks are trash.
What are some alternatives?
clerk - ⚡️ Moldable Live Programming for Clojure
papermill - 📚 Parameterize, execute, and analyze notebooks
sicmutils - Computer Algebra, Physics and Differential Geometry in Clojure.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
mexdown - A lightweight integrating markup language
dbt - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications. [Moved to: https://github.com/dbt-labs/dbt-core]
jupytext - Jupyter Notebooks as Markdown Documents, Julia, Python or R scripts
rr - Record and Replay Framework
Jupyter-PowerShell - Jupyter Kernel for PowerShell
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
black - The uncompromising Python code formatter
Nim - Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).