Groovy Grape Projects
```groovy
response = prev.getResponseDataAsString() // Extract the previous response

def extractTitle = /<title>(.+?)<\/title>/
def matcher = response =~ extractTitle

if (matcher.size() >= 1) {
    println matcher.findAll()[0][1]
    vars.put("extractTitle", matcher.findAll()[0][1])
}
```
Here is the URL: https://jpetstore-qainsights.cloud.okteto.net/jpetstore/actions/Catalog.action
The first step is to read the HTTP response as a string using `prev.getResponseDataAsString()`. `prev` is an API handle that exposes the previous `SampleResult`. Using the method `getResponseDataAsString()`, we can extract the whole response as a string and store it in a variable.

The next two lines define our regular expression pattern and the matching condition. Groovy comes with powerful regular expression pattern matching.
```groovy
def extractTitle = /<title>(.+?)<\/title>/
def matcher = response =~ extractTitle
```
The next block checks whether there is at least one match. If so, it prints the extracted string from the match list and stores the value in the JMeter variable `extractTitle` using the `vars.put` method.

```groovy
if (matcher.size() >= 1) {
    println matcher.findAll()[0][1]
    vars.put("extractTitle", matcher.findAll()[0][1])
}
```
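For readers more familiar with plain Java, Groovy's `=~` find operator is a thin wrapper around `java.util.regex`. A rough equivalent sketch (the sample HTML string is invented for illustration) looks like this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TitleExtractor {
    public static void main(String[] args) {
        // Stand-in for prev.getResponseDataAsString(); this HTML is invented.
        String response = "<html><head><title>JPetStore Demo</title></head></html>";

        // Same pattern the Groovy script builds with its slashy string /.../.
        Pattern extractTitle = Pattern.compile("<title>(.+?)</title>");
        Matcher matcher = extractTitle.matcher(response);

        if (matcher.find()) {
            // group(1) is the first capture group -- what the Groovy
            // script reads as matcher.findAll()[0][1].
            System.out.println(matcher.group(1)); // prints "JPetStore Demo"
            // In JMeter you would call vars.put("extractTitle", ...) here.
        }
    }
}
```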
Here is the output:
The above method is not effective for a couple of reasons. First, working out the right array indices to capture the desired string becomes cumbersome for a complex response. Second, the pattern we used here is suited to a plain-text response, not an HTML one; on complex HTML, regular expressions are brittle and may not perform well.
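To make that brittleness concrete, here is a minimal Java sketch (the HTML snippets are invented for illustration): the same title pattern matches when the tag sits on one line, but silently fails once the tag spans multiple lines, because `.` does not match newlines unless `DOTALL` is set. An HTML parser handles such layout differences for free.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexBrittleness {
    public static void main(String[] args) {
        Pattern title = Pattern.compile("<title>(.+?)</title>");

        // Works: the whole tag sits on one line.
        Matcher ok = title.matcher("<html><title>JPetStore Demo</title></html>");
        System.out.println(ok.find() ? ok.group(1) : "no match");

        // Fails: same document, but the title now spans several lines.
        Matcher broken = title.matcher("<html><title>\nJPetStore Demo\n</title></html>");
        System.out.println(broken.find() ? broken.group(1) : "no match");

        // Fixing it requires the DOTALL flag and a trim of the captured text.
        Pattern dotall = Pattern.compile("<title>(.+?)</title>", Pattern.DOTALL);
        Matcher fixed = dotall.matcher("<html><title>\nJPetStore Demo\n</title></html>");
        System.out.println(fixed.find() ? fixed.group(1).trim() : "no match");
    }
}
```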
Using JSoup
To handle the HTML response effectively, it is better to use HTML parsers such as JSoup.
JSoup is a Java library for working with real-world HTML. It provides a very convenient API for fetching URLs and extracting and manipulating data, using the best of HTML5 DOM methods and CSS selectors.
Let us use `@Grab`, so that JMeter downloads the dependency on its own; otherwise you need to download the JSoup jar and place it in the `lib` or `ext` folder.

```groovy
@Grab(group='org.jsoup', module='jsoup', version='1.15.2')
import org.jsoup.Jsoup
import org.jsoup.nodes.Document

response = prev.getResponseDataAsString() // Extract response

Document doc = Jsoup.parse(response)
println doc.title()
```
The `doc` object holds the parsed response, and `doc.title()` prints the page title to the JMeter command prompt.

To print all the links and their text, the below code snippet will be useful.
```groovy
@Grab(group='org.jsoup', module='jsoup', version='1.15.2')
import org.jsoup.Jsoup
import org.jsoup.nodes.Document
import org.jsoup.nodes.Element
import org.jsoup.select.Elements

response = prev.getResponseDataAsString() // Extract response

Document doc = Jsoup.parse(response)
println doc.title()

// To print all the links and their text
Elements links = doc.body().getElementsByTag("a")
for (Element link : links) {
    String linkHref = link.attr("href")
    String linkText = link.text()
    println linkHref + linkText
}
```
To print all the list box elements and pick a random list box value for the URL http://computer-database.gatling.io/computers/new, use the below code snippet.
```groovy
@Grab(group='org.jsoup', module='jsoup', version='1.15.2')
import org.jsoup.Jsoup
import org.jsoup.nodes.Document
import org.jsoup.nodes.Element
import org.jsoup.select.Elements

response = prev.getResponseDataAsString() // Extract response

companyList = []
Random random = new Random()
Document doc = Jsoup.parse(response)

// To print all the list box elements
Elements lists = doc.body().select("select option")
for (Element list : lists) {
    println "Company is " + list.text()
    companyList.add(list.text())
}

// To print a random list box element
println("The total companies are " + companyList.size())
println(companyList[random.nextInt(companyList.size())])
```
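The random-pick idiom at the end is plain `java.util.Random`, independent of JSoup. Stripped of the parsing, it reduces to the following sketch (the company names are invented placeholders standing in for the scraped option texts):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomPick {
    public static void main(String[] args) {
        // Stand-in for the texts collected from doc.select("select option").
        List<String> companyList = new ArrayList<>(List.of("ACVS", "Amiga", "Apple"));

        Random random = new Random();
        System.out.println("The total companies are " + companyList.size());

        // nextInt(bound) returns 0 <= n < bound, so this is always a valid index.
        System.out.println(companyList.get(random.nextInt(companyList.size())));
    }
}
```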
Final Words
As you learned, by leveraging the `prev` API we can extract the response and then parse it with the JSoup library, or the hard way with hand-written regular expressions, without using built-in elements such as the Regular Expression Extractor or JSON Extractor. This approach might not save time, but it is worth learning and comes in handy in situations like interviews.
Groovy Grape related posts
Index
| # | Project | Stars |
|---|---------|-------|
| 1 | S3-Upload-JMeter-Groovy | 1 |