dplyr vs rvest

| | dplyr | rvest |
|---|---|---|
| Mentions | 40 | 13 |
| Stars | 4,658 | 1,471 |
| Growth | 0.5% | 0.5% |
| Activity | 7.1 | 7.2 |
| Latest commit | 6 days ago | 2 months ago |
| Language | R | R |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dplyr
-
Show HN: Open-source, browser-local data exploration using DuckDB-WASM and PRQL
That's great feedback, thanks!
This tool definitely comes from a place of personal need - beyond just handling large files, I've also never really gelled well with the Excel/Google Sheet model of changing data in place as if you were editing text. I'm a Data Scientist and always preferred the chained data transforms you see in things like dplyr (https://dplyr.tidyverse.org/) or Polars (https://pola.rs/) and I feel this tool maps very closely to the chained model.
Also, thank you for the feature requests! Those would all be very useful - we'll put them on the roadmap.
-
Is it possible for an R package to set an R option that only affects that package?
There's an example of how to use zzz.R with a .onLoad() function to set options in the dplyr code base: https://github.com/tidyverse/dplyr/blob/bbcfe99e29fe737d456b0d7adc33d3c445a32d9d/R/zzz.r
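A minimal sketch of that pattern: R options are global, so a package can't truly scope an option to itself, but the convention (which dplyr follows) is to prefix option names with the package name and set defaults in .onLoad() only when the user hasn't set them already. The package name and option here are hypothetical:

```r
# zzz.R -- sketch of the dplyr-style .onLoad() pattern; "mypkg" is a placeholder
.onLoad <- function(libname, pkgname) {
  op <- options()
  # Defaults, namespaced with the package prefix so they don't collide
  op.mypkg <- list(
    mypkg.verbose = FALSE
  )
  # Only set options the user hasn't already configured
  toset <- !(names(op.mypkg) %in% names(op))
  if (any(toset)) options(op.mypkg[toset])
  invisible()
}
```

Inside the package you would then read the value with `getOption("mypkg.verbose")`.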
-
Calculation within a data table by calling on specific values in two columns
Look at the tidyverse, especially the case_when or mutate functions.
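As a sketch of what that looks like (the column names and thresholds are made up for illustration): `mutate()` creates a new column from values in existing columns, and `case_when()` handles the conditional branching.

```r
library(dplyr)

# Hypothetical data: two columns we want to combine into new columns
df <- tibble(price = c(10, 25, 40), qty = c(2, 1, 3))

df <- df %>%
  mutate(
    total = price * qty,                     # calculation across two columns
    bucket = case_when(                      # conditional labels on the result
      total < 25  ~ "low",
      total < 100 ~ "mid",
      TRUE        ~ "high"
    )
  )
```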
-
PSA: You don't need fancy stuff to do good work.
Before diving into advanced machine learning algorithms or statistical models, we need to start with the basics: collecting and organizing data. Fortunately, both Python and R offer a wealth of libraries that make it easy to collect data from a variety of sources, including web scraping, APIs, and reading from files. Key libraries in Python include requests, BeautifulSoup, and pandas, while R has httr, rvest, and dplyr.
-
Creating data frame
It looks like your syntax is wrong. I think you’re trying to calculate a new variable in your data frame, or alter an existing column in a data frame. Have a look at the mutate() function in this reference for the proper syntax to use. https://dplyr.tidyverse.org/ Does that help?
-
I'm designing a shirt for a friend, it has 4 embroidered images of things they like/do. One thing is coding, they use R... I'm wondering two things. 1) What's a good image or piece of code or something that I should use? and 2) should I even add it to the design the shirt?
A lot of popular libraries have their own logos. Maybe one of them would be good. Check out dplyr for example: https://dplyr.tidyverse.org/
-
Anyone use Python for statistics, particularly DOE or QA/QC? What are your thoughts?
I hope you give it a try when you get a chance: https://dplyr.tidyverse.org/
-
Rstudio tidyverse help!
You can read up on the dplyr-verbs here, which I strongly suggest for your exam! In the code examples, you can simply click on any function you don't understand and it will take you directly to the documentation. Good Luck!
-
Beginner question
-
osdc-2023-assignment1
rvest
-
Collecting Data from News Articles using Web Scraping - Help
You’re looking for the rvest package
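For a sense of what that looks like, here is a minimal rvest sketch; the URL and CSS selector are placeholders you would replace with the real news site's structure:

```r
library(rvest)

# Placeholder URL -- point this at the article listing you want to scrape
page <- read_html("https://example.com/news")

# CSS selector is an assumption; inspect the page to find the real one
headlines <- page %>%
  html_elements("h2.headline a") %>%
  html_text2()

links <- page %>%
  html_elements("h2.headline a") %>%
  html_attr("href")
```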
-
PSA: You don't need fancy stuff to do good work.
Before diving into advanced machine learning algorithms or statistical models, we need to start with the basics: collecting and organizing data. Fortunately, both Python and R offer a wealth of libraries that make it easy to collect data from a variety of sources, including web scraping, APIs, and reading from files. Key libraries in Python include requests, BeautifulSoup, and pandas, while R has httr, rvest, and dplyr.
-
Average price of an ounce of medium/high-quality marijuana in each U.S. state, April 2023 [OC]
Tools: R + Rvest to scrape and clean the data. D3 to create the map. Svelte to put it all together.
-
Am I doing a DDoS?
-
AHR Summoning Statistics: 40 Summons and First Summon
So I know R has packages and native functions to help bypass this manual process. For example, scraping the wiki / Gamepress unit list with rvest may prove easier; furthermore, you can specify web-based sources when reading data. I'm not very familiar with doing either myself, but maybe you can scrape data from the wikis or from repositories like the feh assets 1. If you're able to set up a simple R script to read in new data, transform/clean it, and save it, you could avoid the manual updates every 2 weeks.
-
Webscraping Google Search results and extracting the urls
There are very similar tools in R that I cover in that tutorial. For example, rvest or xml2 should be able to do the job as both of them support XPath selectors (you can take the ones from the article - they should work in R too).
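To illustrate the XPath route in R specifically: rvest's html_elements() accepts an xpath argument directly, so selectors written for other tools usually carry over. The URL and expression below are placeholders:

```r
library(rvest)

# Placeholder URL for a search results page
page <- read_html("https://example.com/search")

# XPath expression is an assumption -- substitute the one from the article
urls <- page %>%
  html_elements(xpath = "//a[@href]") %>%
  html_attr("href")
```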
-
Made an app where you can search for money diaries by location or income
To get the data from the website, I need to use the package (a set of R code someone created and shared that's designed for a certain task) rvest, then I did a bunch of data munging in R to pull out the location/salary/age/etc. I saved that in a dataset and then used another package flexdashboard to make a webpage which I can essentially "one-click" publish using a free tool called RPubs.
-
Used Cars Data Scraping - R & Github Actions & AWS
The idea came from wanting to combine Data Engineering with cloud and automation. Since it would be an automated pipeline, I needed a dynamic data source, and I wanted a site where retrieving data wouldn't be a problem so I could practice with both rvest and dplyr. After my experiments with Carvago went smoothly, I added the necessary data-cleaning steps. Another goal of the project was to keep the data in different forms in different environments: the raw (daily CSV) and processed data are written to the GitHub repo, and I also write the processed data to PostgreSQL on AWS RDS. In addition, I sync the raw and processed data to S3 so I can use them with Athena. I also separated some stages into distinct GitHub Actions as good practice. For example, in the first stage, scraping, cleaning, and printing a basic analysis to a simple log file run together, while synchronization with AWS S3 is a separate action. If everything succeeds, a final action builds a report with RMarkdown and publishes it on github.io. The result is an end-to-end data pipeline that turns the source data into basic reporting with simple processing.
-
Saving the Text from a News Article in R?
I would try some more nuanced web scraping with a package like rvest
-
How to convert large xml file to csv/sheet format
1) Use rvest to extract the contents of the XML file (i.e. loop over top-level nodes and pull any variable you're interested in into a column).
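A sketch of that loop, using xml2 (the library rvest is built on, which reads raw XML directly); the file name, node names, and fields are placeholders for whatever your XML actually contains:

```r
library(xml2)  # rvest wraps xml2; for plain XML you can use xml2 directly

# Placeholder file name
doc <- read_xml("data.xml")

# "//record" is an assumption -- use your file's top-level node name
records <- xml_find_all(doc, "//record")

# Pull each variable of interest into a column; "./id" and "./name"
# are hypothetical child nodes
df <- data.frame(
  id   = xml_text(xml_find_first(records, "./id")),
  name = xml_text(xml_find_first(records, "./name"))
)

write.csv(df, "data.csv", row.names = FALSE)
```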
What are some alternatives?
worldfootballR - A wrapper for extracting world football (soccer) data from FBref, Transfermarkt, Understat and fotmob
r-web-scraping-cheat-sheet - Guide, reference and cheatsheet on web scraping using rvest, httr and Rselenium.
Rustler - Safe Rust bridge for creating Erlang NIF functions
r4ds - R for data science: a book
ggplot2 - An implementation of the Grammar of Graphics in R
pokemon-games-ratings - Dataset and visualizations of Pokemon Game Ratings, from scraping metacritic.com.
nx - Multi-dimensional arrays (tensors) and numerical definitions for Elixir
blackmagic - 🎩 Automagically Convert XML to JSON and JSON to XML
explorer - Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir
money_diaries - An interactive web app for searching and filtering money diaries
rmarkdown - Dynamic Documents for R
flexdashboard - Easy interactive dashboards for R