Correlation - The Hard Way in JMeter

This page summarizes the projects mentioned and recommended in the original post on dev.to

  • Learn-JMeter-Series

    ⚡ Learn JMeter Series

  • The first approach extracts the page title from the response with a Groovy regular expression:

    response = prev.getResponseDataAsString() // Extract the previous response
    def extractTitle = /<title>(.+?)<\/title>/
    def matcher = response =~ extractTitle
    if (matcher.size() >= 1) {
        println matcher.findAll()[0][1]
        vars.put("extractTitle", matcher.findAll()[0][1])
    }

    Here is the URL https://jpetstore-qainsights.cloud.okteto.net/jpetstore/actions/Catalog.action

    The first step is to read the HTTP response as a string using prev.getResponseDataAsString()

    prev is a pre-defined script variable that holds the previous SampleResult. Using its getResponseDataAsString() method, we can read the whole response as a string and store it in a variable.
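
    For reference, prev is not limited to the response body; it exposes the full SampleResult API. Below is a minimal sketch of a few other standard getters (log is the logger JMeter binds into JSR223 elements):

    // A few other SampleResult getters available through prev
    String body    = prev.getResponseDataAsString()  // full response body as a String
    String code    = prev.getResponseCode()          // e.g. "200"
    String headers = prev.getResponseHeaders()       // raw response headers
    long   elapsed = prev.getTime()                  // sampler elapsed time in milliseconds

    log.info("Status " + code + ", " + elapsed + " ms, body length " + body.length())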

    Returning to the title-extraction script, the next two lines define the regular expression pattern and create a matcher against the response. Groovy has powerful built-in support for regular expression matching.

    def extractTitle = /<title>(.+?)<\/title>/
    def matcher = response =~ extractTitle
    
    The next block checks whether there is at least one match; if so, it prints the extracted string from the match list and stores the value in the JMeter variable extractTitle using the vars.put method.
    
    if (matcher.size() >= 1) {
        println matcher.findAll()[0][1]
        vars.put("extractTitle", matcher.findAll()[0][1])
    }
    
    Here is the output:
    
    Figure: JMeter Output (https://qainsights.com/wp-content/uploads/2022/08/image-8.png)
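
    Once vars.put has stored the value, any later element in the same thread can read it back. Here is a minimal sketch of that reuse (assuming a follow-up JSR223 element; the variable name extractTitle matches the snippet above):

    // In a later JSR223 element in the same thread:
    String title = vars.get("extractTitle")   // value stored by the script above
    log.info("Extracted title: " + title)

    // In a subsequent HTTP Request or assertion, the same value is available
    // through the usual JMeter variable syntax: ${extractTitle}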
    
    This regular-expression approach is not effective for a couple of reasons. First, digging the desired string out of the nested match arrays becomes cumbersome for complex responses. Second, the pattern used here suits plain-text responses rather than HTML; for a complex HTML response, a regular expression might not yield good performance.
    
    GitHub Repo: https://github.com/QAInsights/Learn-JMeter-Series/tree/master/Correlation
    
    Using JSoup
    
    To handle the HTML response effectively, it is better to use an HTML parser such as JSoup.
    
    JSoup is a Java library for working with real-world HTML. It provides a very convenient API for fetching URLs and extracting and manipulating data, using the best of HTML5 DOM methods and CSS selectors.
    
    Let us use Grab (see https://qainsights.com/upload-files-to-aws-s3-in-jmeter-using-groovy/) so that JMeter downloads the dependency on its own; otherwise you need to download the JSoup jar and place it in the lib or ext folder.
    
    @Grab(group='org.jsoup', module='jsoup', version='1.15.2')
    import org.jsoup.Jsoup
    import org.jsoup.nodes.Document
    import org.jsoup.nodes.Element
    import org.jsoup.select.Elements

    response = prev.getResponseDataAsString() // Extract response

    Document doc = Jsoup.parse(response)
    println doc.title()
    
    The doc object holds the parsed response; doc.title() returns the page title, which println writes to the JMeter console.
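
    To mirror what the regular-expression version did, the parsed title can also be stored in a JMeter variable. A minimal sketch under that assumption (the variable name parsedTitle is illustrative, not from the original post):

    @Grab(group='org.jsoup', module='jsoup', version='1.15.2')
    import org.jsoup.Jsoup
    import org.jsoup.nodes.Document

    // Parse the previous response and keep the title for later samplers
    Document doc = Jsoup.parse(prev.getResponseDataAsString())
    vars.put("parsedTitle", doc.title())   // later usable as ${parsedTitle}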
    
    To print all the links and their text, the below code snippet will be useful.
    
    @Grab(group='org.jsoup', module='jsoup', version='1.15.2')
    import org.jsoup.Jsoup
    import org.jsoup.nodes.Document
    import org.jsoup.nodes.Element
    import org.jsoup.select.Elements

    response = prev.getResponseDataAsString() // Extract response

    Document doc = Jsoup.parse(response)
    println doc.title()

    // To print all the links and their text

    Elements links = doc.body().getElementsByTag("a")
    for (Element link : links) {
        String linkHref = link.attr("href")
        String linkText = link.text()
        println linkHref + linkText
    }
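
    If the goal is correlation rather than just printing, the extracted hrefs can be stored as indexed JMeter variables (link_1, link_2, ..., plus a count), similar to the naming convention the built-in extractors use, so a ForEach Controller or a ${link_1} reference can pick them up later. A hedged sketch along those lines (the link prefix is just an example):

    @Grab(group='org.jsoup', module='jsoup', version='1.15.2')
    import org.jsoup.Jsoup
    import org.jsoup.nodes.Document
    import org.jsoup.nodes.Element
    import org.jsoup.select.Elements

    // Store each href as link_1, link_2, ... so later elements can reuse them
    Document doc = Jsoup.parse(prev.getResponseDataAsString())
    Elements links = doc.body().getElementsByTag("a")

    links.eachWithIndex { Element link, int i ->
        vars.put("link_" + (i + 1), link.attr("href"))
    }
    vars.put("link_matchNr", String.valueOf(links.size()))   // count, mirroring extractor naming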
    
    To print all the list box elements and a random list box value for the URL http://computer-database.gatling.io/computers/new, use the below code snippet.
    
    @Grab(group='org.jsoup', module='jsoup', version='1.15.2')
    import org.jsoup.Jsoup
    import org.jsoup.nodes.Document
    import org.jsoup.nodes.Element
    import org.jsoup.select.Elements

    response = prev.getResponseDataAsString() // Extract response
    companyList = []
    Random random = new Random()

    Document doc = Jsoup.parse(response)

    // To print all the list box elements

    Elements lists = doc.body().select("select option")
    for (Element list : lists) {
        println "Company is " + list.text()
        companyList.add(list.text())
    }

    // To print a random list box element

    println("The total number of companies is " + companyList.size())
    println(companyList[random.nextInt(companyList.size())])
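
    One caveat worth noting: random.nextInt(0) throws an IllegalArgumentException, so if the page could come back without any select options it is safer to guard the random pick; the chosen value can also be stored for a later request. A small self-contained sketch under those assumptions (the company variable name is illustrative):

    @Grab(group='org.jsoup', module='jsoup', version='1.15.2')
    import org.jsoup.Jsoup
    import org.jsoup.nodes.Document

    // Collect the select option texts, then store one at random for reuse as ${company}
    def options = Jsoup.parse(prev.getResponseDataAsString())
                       .body().select("select option")*.text()

    if (options) {   // guard: nextInt(0) would throw on an empty list
        vars.put("company", options[new Random().nextInt(options.size())])
    } else {
        log.warn("No select options found in the response")
    }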
    
    Final Words
    
    As you have learned, by leveraging the prev API we can extract the response and correlate it the hard way, parsing it with the JSoup library or with hand-written regular expressions instead of using built-in elements such as the Regular Expression Extractor or the JSON Extractor. This approach might not save time, but it is worth learning and comes in handy in situations such as interviews.

  • S3-Upload-JMeter-Groovy

    Upload files to AWS S3 in JMeter using Groovy

