Error in fetching pdf file from a website using google apps script UrlFetchApp


I have a Google Apps Script that downloads a daily PDF rate sheet from a public financial website. This script ran perfectly for months but suddenly stopped working around April 9th.

Instead of returning the application/pdf file, UrlFetchApp is now returning a text/html file.

Here is my minimal reproducible code:

function testDownload() {
  var url = "https://www.sbp.org.pk/ecodata/rates/war/2026/May/05-May-26.pdf";
  var options = {
    muteHttpExceptions: true,
    headers: {
      "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
    }
  };
  var response = UrlFetchApp.fetch(url, options);
  Logger.log("MimeType: " + response.getBlob().getContentType());
  Logger.log("Content: " + response.getContentText().substring(0, 300));
}

The output / error: the MIME type logs as text/html, and getContentText() returns this Cloudflare block page:

<html><body> <h1>Access Denied</h1><p>Signature ID: 1002000251<br>Message ID: 2360628808<br>Client IP: 104.22.1.181</p> <script defer src="https://static.cloudflareinsights.com/beacon.min.js..."

My assessment so far: it appears the host has put the site behind a Cloudflare Web Application Firewall (WAF). Because UrlFetchApp requests originate from Google's data-center IP ranges, Cloudflare flags them as bot traffic and blocks the request before it ever reaches the file.
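In the meantime, my script at least detects the block instead of silently saving an HTML page as a .pdf. This is a minimal sketch of that check; the function name `looksLikeWafBlock` is my own, and it is written as plain JavaScript so the logic can be tested outside Apps Script (in the real script I pass in `response.getBlob().getContentType()` and `response.getContentText()`):

```javascript
// Sketch: decide whether a fetch result is the expected PDF or a WAF
// block page, based only on the content type and a snippet of the body.
function looksLikeWafBlock(contentType, bodySnippet) {
  // A genuine rate sheet arrives as application/pdf.
  if (contentType.indexOf("application/pdf") === 0) return false;
  // The Cloudflare block page arrives as text/html and contains
  // the "Access Denied" marker seen in the logs above.
  return contentType.indexOf("text/html") === 0 &&
         bodySnippet.indexOf("Access Denied") !== -1;
}
```

This does not fix the block, of course; it only lets the script fail loudly rather than corrupt the saved file.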

My Questions:

1. Is there any native way to get past this Cloudflare block using only UrlFetchApp parameters (specific headers or cookies)?
2. If a native workaround is impossible because Google's IP ranges are blacklisted by the WAF, what is the recommended architecture for fetching this PDF? Do I need to route the request through a scraping API or a proxy?