There are several tutorials that describe how to scrape websites with request and cheerio. In these tutorials the output is either sent to the console or streamed with fs into a file, as in the example below:
var request = require('request');
var cheerio = require('cheerio');
var fs = require('fs');

request(link, function (err, resp, html) {
  if (err) return console.error(err);
  var $ = cheerio.load(html);
  var img = $('#img_wrapper').data('src');
  console.log(img);
}).pipe(fs.createWriteStream('img_link.txt')); // streams the raw response body into the file
But what if I would like to process the output during script execution? How can I access the scraped value, or send it back to the calling function? I could, of course, read img_link.txt afterwards and get the information from there, but that would be too costly and doesn't make sense.