| | S3-Upload-JMeter-Groovy | nextflow |
|---|---|---|
| Mentions | 3 | 9 |
| Stars | 2 | 2,551 |
| Growth | - | 1.4% |
| Activity | 3.9 | 9.7 |
| Latest commit | about 2 years ago | 7 days ago |
| Language | Groovy | Groovy |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
S3-Upload-JMeter-Groovy
-
Correlation - The Hard Way in JMeter
```groovy
response = prev.getResponseDataAsString() // Extract the previous response

def extractTitle = /<title>(.+?)<\/title>/
def matcher = response =~ extractTitle

if (matcher.size() >= 1) {
    println matcher.findAll()[0][1]
    vars.put("extractTitle", matcher.findAll()[0][1])
}
```
Here is the URL https://jpetstore-qainsights.cloud.okteto.net/jpetstore/actions/Catalog.action
The first step is to read the HTTP response as a string using `prev.getResponseDataAsString()`. `prev` is an API call which extracts the previous SampleResult. Using the method `getResponseDataAsString()`, we can extract the whole response as a string and store it in a variable.

The next two lines define our regular expression pattern and the matching conditions. Groovy comes with powerful regular expression pattern matching.
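For readers outside JMeter, the same title extraction can be sketched in plain Java. The response string below is a made-up stand-in for what `prev.getResponseDataAsString()` would return:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TitleExtract {
    public static void main(String[] args) {
        // Stand-in for prev.getResponseDataAsString(); the HTML is invented for illustration
        String response = "<html><head><title>JPetStore Demo</title></head></html>";

        // Same lazy pattern the article builds in Groovy
        Pattern extractTitle = Pattern.compile("<title>(.+?)</title>");
        Matcher matcher = extractTitle.matcher(response);

        if (matcher.find()) {
            // Group 1 holds the text between the title tags
            System.out.println(matcher.group(1)); // prints: JPetStore Demo
        }
    }
}
```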
```groovy
def extractTitle = /<title>(.+?)<\/title>/
def matcher = response =~ extractTitle
```

The next block checks whether there is at least one match; if so, it prints the extracted string from the array list and stores the value in the variable `extractTitle` using the `vars.put` method.

```groovy
if (matcher.size() >= 1) {
    println matcher.findAll()[0][1]
    vars.put("extractTitle", matcher.findAll()[0][1])
}
```

Here is the output:

![JMeter Output](https://qainsights.com/wp-content/uploads/2022/08/image-8.png)

The above method is not effective for a couple of reasons. First, the array indexing needed to capture the desired string can become cumbersome for a complex response. Second, the pattern used here is apt for a text response, not an HTML response; for complex HTML, regular expressions might not yield good performance.

GitHub Repo: https://github.com/QAInsights/Learn-JMeter-Series/tree/master/Correlation

Using JSoup

To handle an HTML response effectively, it is better to use an HTML parser such as JSoup.

*JSoup is a Java library for working with real-world HTML. It provides a very convenient API for fetching URLs and extracting and manipulating data, using the best of HTML5 DOM methods and CSS selectors.*

Let us use `Grab` (https://qainsights.com/upload-files-to-aws-s3-in-jmeter-using-groovy/), so that JMeter will download the dependencies on its own; otherwise you need to download the JSoup jar and keep it in the `lib` or `ext` folder.

```groovy
@Grab(group='org.jsoup', module='jsoup', version='1.15.2')
import org.jsoup.Jsoup
import org.jsoup.nodes.Document

response = prev.getResponseDataAsString() // Extract response

Document doc = Jsoup.parse(response)
println doc.title()
```

The `doc` object parses the response and prints the title to the command prompt in JMeter.

To print all the links and their text, the below code snippet will be useful.

```groovy
@Grab(group='org.jsoup', module='jsoup', version='1.15.2')
import org.jsoup.Jsoup
import org.jsoup.nodes.Document
import org.jsoup.nodes.Element
import org.jsoup.select.Elements

response = prev.getResponseDataAsString() // Extract response

Document doc = Jsoup.parse(response)
println doc.title()

// To print all the links and their text
Elements links = doc.body().getElementsByTag("a")
for (Element link : links) {
    String linkHref = link.attr("href")
    String linkText = link.text()
    println linkHref + linkText
}
```

To print all the list box elements and a random list box value for the URL http://computer-database.gatling.io/computers/new, use the below code snippet.

```groovy
@Grab(group='org.jsoup', module='jsoup', version='1.15.2')
import org.jsoup.Jsoup
import org.jsoup.nodes.Document
import org.jsoup.nodes.Element
import org.jsoup.select.Elements

response = prev.getResponseDataAsString() // Extract response

companyList = []
Random random = new Random()

Document doc = Jsoup.parse(response)

// To print all the list box elements
Elements lists = doc.body().select("select option")
for (Element list : lists) {
    println "Company is " + list.text()
    companyList.add(list.text())
}

// To print a random list box element
println("The total companies are " + companyList.size())
println(companyList[random.nextInt(companyList.size())])
```

Final Words

As you learned, by leveraging the `prev` API we can extract the response and then parse it with the JSoup library, or the hard way with hand-written regular expressions, without using built-in elements such as the Regular Expression Extractor or the JSON Extractor. This approach might not save time, but it is worth learning and comes in handy in situations like interviews.
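As a side note, the random option-picking step at the end of the JSoup snippet is plain collection logic, independent of JSoup itself. A minimal Java sketch of just that step (the company names are invented for illustration):

```java
import java.util.List;
import java.util.Random;

public class RandomCompany {
    public static void main(String[] args) {
        // Stand-in for the option texts collected via doc.body().select("select option")
        List<String> companyList = List.of("ACME Computers", "Globex", "Initech");
        Random random = new Random();

        System.out.println("The total companies are " + companyList.size());
        // Pick one entry at random, like companyList[random.nextInt(size)] in Groovy
        System.out.println(companyList.get(random.nextInt(companyList.size())));
    }
}
```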
-
Upload files to AWS S3 in k6
In my last post, we discussed how to upload files to AWS S3 in JMeter using Groovy. We also looked at the Grape package manager in JMeter. Recently, k6 announced its next iteration with a lot of new features and fixes. In this blog post, we are going to see how to upload files to AWS S3 in k6.
-
Upload files to AWS S3 in JMeter using Groovy
Here is the repository to download the sample JMeter test plan for your reference.
nextflow
-
Nextflow: Data-Driven Computational Pipelines
> It's been a while since you can rerun/resume Nextflow pipelines
Yes, you can resume, but you need your whole upstream DAG to be present. Snakemake can rerun a job when only that job's dependencies are present, which lets you manage disk usage neatly, or archive an intermediate state of a project and rerun things from there.
> and yes, you can have dry runs in Nextflow
You have stubs, which really isn't the same thing.
> I have no idea what you're referring to with the 'arbitrary limit of 1000 parallel jobs' though
I was referring to this issue: https://github.com/nextflow-io/nextflow/issues/1871. Except the discussion doesn't do the issue full justice. Nextflow spawns each job in a separate thread, and when it tries to spawn 1000+ Condor jobs it dies with a cryptic error message. The -Dnxf.pool.type=sync and -Dnxf.pool.maxThreads=N options work around that, but they break the ability to resume and force the pipeline to rerun.
> As for deleting temporary files, there are features that allow you to do a few things related to that, and other features being implemented.
There are some hacks for this - but nothing I would feel safe integrating into a production tool. They are implementing something - you're right - and that has been the case for several years now, so we'll see.
Snakemake has all that out of the box.
-
Alternatives to nextflow?
For now, I think that the best place to track this / get your voice heard is this GitHub Discussions post (which covers many things - error reporting is one of them). https://github.com/nextflow-io/nextflow/discussions/3107
- HyperQueue: ergonomic HPC task executor written in Rust
-
Nextflow vs Snakemake
We could spend the day pointing to things we wish were different, but that doesn't change the fact that Nextflow is the leader when it comes to workflow orchestration. And feel free to create a new issue in the GitHub repository if you wish to request a feature :)
-
Feel very hard writing nextflow pipeline.
The nextflow devs have been talking about this for a while on GitHub. Looks like they're implementing something along these lines using schema like they do for nf-core. GitHub discussion.
-
Need a statically typed Python replacement
Groovy definitely scales up just fine, I think, but I never used it myself outside of little snippets embedded in my DSL. I know it's considered by some to be "dead", so it's interesting to see what other JVM-ecosystem users think of it.
What are some alternatives?
job-dsl-plugin - A Groovy DSL for Jenkins Jobs - Sweeeeet!
galaxy - Data intensive science for everyone.
Learn-JMeter-Series - ⚡ Learn JMeter Series
argo - Workflow Engine for Kubernetes
job-dsl-gradle-example - An example Job DSL project that uses Gradle for building and testing.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
k6 - A modern load testing tool, using Go and JavaScript - https://k6.io
singularity - Singularity has been renamed to Apptainer as part of us moving the project to the Linux Foundation. This repo has been persisted as a snapshot right before the changes.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
devops-resources - DevOps resources - Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP
Hubitat-iPhone-Presence-Sensor - A virtual presence sensor for Hubitat that checks if an iPhone/Android is on the WiFi network.
mag - Assembly and binning of metagenomes