| | quarto-cli | trulens |
|---|---|---|
| Mentions | 8 | 14 |
| Stars | 3,304 | 1,612 |
| Growth | 3.5% | 6.9% |
| Activity | 10.0 | 9.8 |
| Latest commit | 6 days ago | 7 days ago |
| Language | JavaScript | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quarto-cli
- FLaNK AI Weekly 18 March 2024
- Quarto
Hello, I have a rather specific question.
I want to write a detailed tutorial (as HTML page) and a condensed version of it (as Reveal JS slides) from a single document.
I have found this suggestion[1] to specify the separate output file name for the slides in the header, and `quarto render myfile.qmd` will generate both.
Is there a way to include content (long form text, code, or images) that will only be exported in the HTML page but not in the slides (where space is more limited)?
[1] https://github.com/quarto-dev/quarto-cli/discussions/1751
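Quarto's documented conditional-content divs (`when-format` / `unless-format`) address exactly this single-source case. A minimal sketch - the title, file names, and `output-file` value are assumptions for illustration:

```markdown
---
title: "My tutorial"
format:
  html: default
  revealjs:
    output-file: myfile-slides.html   # separate output name for the slides
---

::: {.content-visible unless-format="revealjs"}
Long-form text, extra code, and large images that should appear
only in the HTML page, not in the slides.
:::

::: {.content-visible when-format="revealjs"}
Condensed bullet points shown only in the Reveal JS slides.
:::
```

A single `quarto render myfile.qmd` then produces both outputs, each including only its own conditional blocks.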
- Running Quarto Markdown in Docker
```
❯ docker build -t cavo789/quarto .
[+] Building 208.2s (13/13) FINISHED                                docker:default
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 2.08kB                                        0.0s
 => [internal] load metadata for docker.io/eddelbuettel/r2u:20.04             3.4s
 => CACHED [ 1/10] FROM docker.io/eddelbuettel/r2u:20.04@sha256:133b40653e0ad564d348f94ad72c753b97fb28941c072e69bb6e03c3b8d6c06e  0.0s
 => [ 2/10] RUN set -e -x && apt-get update && apt-get install -y --no-install-recommends pandoc pandoc-citeproc curl gdebi-core librsvg2-bin python3.8  47.6s
 => [ 3/10] RUN set -e -x && install.r shiny jsonlite ggplot2 htmltools remotes renv knitr rmarkdown quarto  27.2s
 => [ 4/10] RUN set -e -x && curl -o quarto-linux-amd64.deb -L https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.529/quarto-1.4.529-linux-amd64.deb && gdebi -  12.1s
 => [ 5/10] RUN set -e -x && groupadd -g 1000 -o "quarto" && useradd -m -u 1000 -g 1000 -o -s /bin/bash "quarto"  0.5s
 => [ 6/10] RUN set -e -x && quarto install tool tinytex --update-path  23.0s
 => [ 7/10] RUN set -e -x && printf "\e[0;105m%s\e[0;0m\n" "Run tlmgr update" && ~/.TinyTeX/bin/x86_64-linux/tlmgr update --self --all && ~/.TinyTeX/bin/x86_64-linux/fm  77.9s
 => [ 8/10] RUN set -e -x && printf "\e[0;105m%s\e[0;0m\n" "Run tlmgr install for a few tinyText packages (needed for PDF conversion)" && ~/.TinyTeX/bin/x86_64-linux/tlmgr  11.7s
 => [ 9/10] RUN set -e -x && mkdir -p /input  0.5s
 => exporting to image  4.0s
 => => exporting layers  4.0s
 => => writing image sha256:fe1d20bd71a66eb574ba1f5b35c988ace57c2c30f93159caa4d5de2f8c490eb0  0.0s
 => => naming to docker.io/cavo789/quarto  0.0s

What's Next?
  View summary of image vulnerabilities and recommendations → docker scout quickview
```
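The build log above corresponds roughly to a Dockerfile like the following sketch, reconstructed from the visible `RUN` steps. Several commands (the `gdebi` install and the `tlmgr` steps) are truncated in the log, so they are only indicated by comments rather than guessed at:

```dockerfile
# Sketch reconstructed from the build log above; treat as illustrative.
FROM eddelbuettel/r2u:20.04

RUN set -e -x && apt-get update && apt-get install -y --no-install-recommends \
    pandoc pandoc-citeproc curl gdebi-core librsvg2-bin python3.8

RUN set -e -x && install.r shiny jsonlite ggplot2 htmltools remotes renv knitr rmarkdown quarto

# Download the Quarto .deb release (the gdebi install command is truncated in the log)
RUN set -e -x && curl -o quarto-linux-amd64.deb -L \
    https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.529/quarto-1.4.529-linux-amd64.deb

RUN set -e -x && groupadd -g 1000 -o "quarto" && useradd -m -u 1000 -g 1000 -o -s /bin/bash "quarto"

RUN set -e -x && quarto install tool tinytex --update-path

# ...tlmgr update / tlmgr install steps for PDF conversion (truncated in the log)...

RUN set -e -x && mkdir -p /input
```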
- Quarto document rendered via quarto::quarto_render(): How to implement citations?
I had some trouble following this, but I think what you're saying is that the `[@Bernhofer2021.02.23.432527]` tag isn't getting converted into the actual bibliography reference - is that right? I just copied this into my system and could make that part work fine, using my own .bib file of course, and I used this csl, which I copied locally. The one change I made to the setup was to put both the .bib and the .csl file in my working directory alongside the .qmd file. Also, as I commented on a different post of yours the other day, I make sure there are no spaces in the path to my working directory (in either the folder names or the filenames). So for me everything is in C:\Users\xxxx\workingdir - this is due to a known RStudio issue with spaces. Who knows if that's what you're running into or not.
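The setup described in the answer above boils down to a .qmd header like this sketch - the file names `refs.bib` and `style.csl` are assumptions, standing in for whatever files sit next to the .qmd in a space-free working directory:

```markdown
---
title: "Citation demo"
bibliography: refs.bib   # assumed filename; same folder as the .qmd
csl: style.csl           # optional citation style file, also local
---

As shown by [@Bernhofer2021.02.23.432527], the citation key resolves
against the .bib file at render time.
```

Rendering from R then works the same as from the command line, e.g. `quarto::quarto_render("myfile.qmd")`.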
- Quarto: Mermaid rendering in Word: code execution halts after the format is generated, waiting indefinitely for a Chrome process to close
You should ask in the Quarto discussion group on their GitHub. They are extremely responsive if you can provide an MWE (minimal working example).
- quarto-cli: Open-source scientific and technical publishing system built on Pandoc.
- The Jupyter+Git problem is now solved
trulens
- Why Vector Compression Matters
Retrieval using a single vector is called dense passage retrieval (DPR), because an entire passage (dozens to hundreds of tokens) is encoded as a single vector. ColBERT instead encodes a vector per token, where each vector is influenced by its surrounding context. This leads to meaningfully better results; for example, here's ColBERT running on Astra DB compared with DPR using openai-v3-small vectors, evaluated with TruLens on the Braintrust Coda Help Desk data set. ColBERT easily beats DPR on correctness, context relevance, and groundedness.
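The scoring difference between the two approaches can be sketched in a few lines. This is a toy illustration with random unit vectors, not real embeddings or the ColBERT implementation: DPR reduces a query/passage pair to one dot product, while ColBERT's late interaction ("MaxSim") takes, for each query token vector, its best match over all passage token vectors and sums those maxima.

```python
import numpy as np

def dpr_score(query_vec, passage_vec):
    """DPR-style: one vector per passage, similarity is a single dot product."""
    return float(np.dot(query_vec, passage_vec))

def maxsim_score(query_token_vecs, passage_token_vecs):
    """ColBERT-style late interaction: for each query token vector, take its
    maximum similarity over all passage token vectors, then sum the maxima."""
    sims = query_token_vecs @ passage_token_vecs.T  # (n_query, n_passage) similarities
    return float(sims.max(axis=1).sum())

# Toy data: 4 query tokens and 6 passage tokens in dimension 8.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
p = rng.normal(size=(6, 8))
q /= np.linalg.norm(q, axis=1, keepdims=True)
p /= np.linalg.norm(p, axis=1, keepdims=True)

dpr = dpr_score(q.mean(axis=0), p.mean(axis=0))  # crude pooled-vector stand-in
colbert = maxsim_score(q, p)                      # per-token late interaction
```

Because MaxSim keeps a vector per token, each query token can match the most relevant part of the passage independently, which is what drives the quality gap described above.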
- FLaNK AI Weekly 18 March 2024
- First 15 Open Source Advent projects
12. TruLens by TruEra | GitHub | tutorial
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- How are generative AI companies monitoring their systems in production?
3) Hallucination is probably the biggest problem we solve for. To do evals for hallucination, we typically see our users use a combination of groundedness (does the context support the LLM response?) and context relevance (is the retrieved context relevant to the query?). There's also a bunch more for the evaluations you mentioned (moderation models, sentiment, usefulness, etc.), and it's pretty easy to add custom evals.
Also - my hot take is that gpt-3.5 is good enough for evals (and sometimes better than gpt-4) if you give the LLM enough instructions on how to do the eval.
website: https://www.trulens.org/
- FLaNK Stack Weekly 28 August 2023
- [P] TruLens-Eval is an open source project for eval & tracking LLM experiments.
The team at TruEra recently released an open source project for evaluation and tracking of LLM applications called TruLens-Eval. We've specifically targeted retrieval-augmented QA as a core use case, and so far we've seen it used for comparing different models and parameters, prompts, vector-db configurations, and query planning strategies. I'd love to get your feedback on it.
- [D] Hardest thing about building with LLMs?
- Stop Evaluating LLMs on Vibes
- OSS library for attribution and interpretation methods for deep nets
What are some alternatives?
jupyter-book - Create beautiful, publication-quality books and documents from computational content.
langfuse - 🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
ipyflow - A reactive Python kernel for Jupyter notebooks.
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Pluto.jl - 🎈 Simple reactive notebooks for Julia
probability - Probabilistic reasoning and statistical analysis in TensorFlow
jupyterlab-git - A Git extension for JupyterLab
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
github-orgmode-tests - This is a test project where you can explore how github interprets Org-mode files
embedchain - Personalizing LLM Responses
jupyter - An interface to communicate with Jupyter kernels.
machine_learning_basics - Plain python implementations of basic machine learning algorithms