Google Search URL Parameters: Complete Reference Guide
Every Google search generates a URL with specific parameters that control what results you see. Understanding these parameters is essential for SEO research, SERP scraping, building search tools, and debugging how Google interprets queries.
This reference covers every useful Google search URL parameter, organized by function, with practical examples for each.
How Google Search URLs Work
A Google search URL follows this structure:
https://www.google.com/search?q=web+scraping&num=10&hl=en&gl=us
The base URL is https://www.google.com/search, and everything after the ? is a parameter. Parameters are separated by &. The q parameter is required and contains the search query.
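This structure maps directly onto Python's standard library. A minimal sketch of building and parsing such a URL with `urllib.parse` (the example query and parameter values are illustrative):

```python
from urllib.parse import urlencode, parse_qs, urlparse

# build the query string from a dict of parameters (q is the only required one)
params = {"q": "web scraping", "num": 10, "hl": "en", "gl": "us"}
url = "https://www.google.com/search?" + urlencode(params)
print(url)  # spaces in q become "+" automatically

# parse the parameters back out of an existing search URL
parsed = parse_qs(urlparse(url).query)
print(parsed["q"][0])  # "web scraping"
```

`urlencode` handles escaping for you, which matters once queries contain operators like `site:` or quoted phrases.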
Core Search Parameters
q (Query)
The search query. This is the only required parameter.
https://www.google.com/search?q=proxy+server+setup
Special operators work within the query:
| Operator | Example | Description |
|---|---|---|
"exact phrase" | q="web+scraping+tools" | exact phrase match |
site: | q=site:reddit.com+proxies | restrict to specific domain |
- | q=proxy+-free | exclude a word |
OR | q=proxy+OR+vpn | match either term |
filetype: | q=proxy+filetype:pdf | specific file types |
intitle: | q=intitle:proxy+guide | word must be in title |
inurl: | q=inurl:proxy | word must be in URL |
intext: | q=intext:residential+proxy | word must be in body text |
related: | q=related:brightdata.com | find similar sites |
cache: | q=cache:example.com | view Google’s cached version |
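When building queries programmatically, it is easier to compose operators as plain strings and encode once at the end. A sketch of a hypothetical helper (`build_query` is my own name, covering only a few of the operators above):

```python
from urllib.parse import quote_plus


def build_query(terms, site=None, exclude=(), filetype=None):
    """Compose a q value from search operators, then URL-encode it."""
    parts = list(terms)
    if site:
        parts.append(f"site:{site}")
    parts.extend(f"-{word}" for word in exclude)
    if filetype:
        parts.append(f"filetype:{filetype}")
    return quote_plus(" ".join(parts))


print(build_query(["proxy"], site="reddit.com", exclude=["free"]))
# spaces become "+", ":" is percent-encoded as %3A; Google accepts both forms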
num (Number of Results)
Controls how many results appear per page. The default is 10; the maximum is 100.
https://www.google.com/search?q=web+scraping&num=100
Useful values: 10, 20, 50, 100. Using num=100 reduces the number of pagination requests you need to make when scraping SERPs.
start (Pagination Offset)
The starting position for results, used for pagination.
# page 1 (results 1-10)
https://www.google.com/search?q=web+scraping&start=0
# page 2 (results 11-20)
https://www.google.com/search?q=web+scraping&start=10
# page 3 (results 21-30)
https://www.google.com/search?q=web+scraping&start=20
Combine with num for efficient pagination:
# get results 1-100
https://www.google.com/search?q=web+scraping&num=100&start=0
# get results 101-200
https://www.google.com/search?q=web+scraping&num=100&start=100
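The num/start arithmetic above can be wrapped in a small generator. A sketch (the function name and defaults are my own):

```python
from urllib.parse import urlencode


def paginated_urls(query, total=300, per_page=100):
    """Yield search URLs covering `total` results, `per_page` at a time."""
    for start in range(0, total, per_page):
        params = {"q": query, "num": per_page, "start": start}
        yield "https://www.google.com/search?" + urlencode(params)


urls = list(paginated_urls("web scraping", total=300, per_page=100))
for u in urls:
    print(u)  # three URLs with start=0, 100, 200
```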
Language and Location Parameters
hl (Host Language)
Sets the interface language. This affects the language of Google’s UI elements but not necessarily the search results themselves.
https://www.google.com/search?q=proxy&hl=en # English interface
https://www.google.com/search?q=proxy&hl=ja # Japanese interface
https://www.google.com/search?q=proxy&hl=de # German interface
https://www.google.com/search?q=proxy&hl=zh-CN # Simplified Chinese
Common language codes:
| Code | Language |
|---|---|
| en | English |
| es | Spanish |
| fr | French |
| de | German |
| ja | Japanese |
| ko | Korean |
| zh-CN | Simplified Chinese |
| zh-TW | Traditional Chinese |
| pt-BR | Brazilian Portuguese |
| ru | Russian |
| ar | Arabic |
| hi | Hindi |
gl (Geolocation)
Sets the country used to localize results. This is the most important parameter for geo-targeted SERP analysis.
https://www.google.com/search?q=best+proxy&gl=us # US results
https://www.google.com/search?q=best+proxy&gl=gb # UK results
https://www.google.com/search?q=best+proxy&gl=sg # Singapore results
https://www.google.com/search?q=best+proxy&gl=de # Germany results
Uses ISO 3166-1 alpha-2 country codes. This parameter is critical for SEO professionals who need to see how rankings differ by country.
cr (Country Restrict)
Restricts results to pages from a specific country. It differs from gl in that it filters the actual results rather than just localizing them.
# only show results from UK sites
https://www.google.com/search?q=proxy+service&cr=countryUK
# only show results from Japan
https://www.google.com/search?q=proxy+service&cr=countryJP
The format is the word country followed by the two-letter country code in uppercase (note that Google uses countryUK for the United Kingdom, unlike the ISO code GB that gl uses).
lr (Language Restrict)
Restricts results to pages written in a specific language.
# only English results
https://www.google.com/search?q=proxy&lr=lang_en
# only Japanese results
https://www.google.com/search?q=proxy&lr=lang_ja
# English or French results
https://www.google.com/search?q=proxy&lr=lang_en|lang_fr
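When building the lr value programmatically, the | separator gets percent-encoded as %7C by standard URL encoders; the encoded form decodes back to the same value server-side. A quick sketch:

```python
from urllib.parse import urlencode

# "|" joins multiple language restrictions; encoders emit it as %7C
params = {"q": "proxy", "lr": "lang_en|lang_fr"}
url = "https://www.google.com/search?" + urlencode(params)
print(url)  # ...&lr=lang_en%7Clang_fr
```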
Time and Date Parameters
tbs (Time-Based Search)
Filters results by time period. This parameter uses its own compact encoding.
# past hour
https://www.google.com/search?q=proxy+news&tbs=qdr:h
# past 24 hours
https://www.google.com/search?q=proxy+news&tbs=qdr:d
# past week
https://www.google.com/search?q=proxy+news&tbs=qdr:w
# past month
https://www.google.com/search?q=proxy+news&tbs=qdr:m
# past year
https://www.google.com/search?q=proxy+news&tbs=qdr:y
# past N minutes
https://www.google.com/search?q=proxy+news&tbs=qdr:n15 # past 15 minutes
# past N hours
https://www.google.com/search?q=proxy+news&tbs=qdr:h6 # past 6 hours
# past N days
https://www.google.com/search?q=proxy+news&tbs=qdr:d3 # past 3 days
Custom Date Range
For a specific date range, use the cdr:1 encoding with cd_min and cd_max (MM/DD/YYYY dates) inside tbs:
# results from January 2025 to March 2025
https://www.google.com/search?q=proxy+market&tbs=cdr:1,cd_min:01/01/2025,cd_max:03/31/2025
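The cdr encoding is fiddly to write by hand, so it is worth generating from date objects. A sketch of a hypothetical helper (`custom_date_tbs` is my own name):

```python
from datetime import date


def custom_date_tbs(start, end):
    """Format a tbs custom-date-range value (MM/DD/YYYY, per the cdr encoding above)."""
    fmt = "%m/%d/%Y"
    return f"cdr:1,cd_min:{start.strftime(fmt)},cd_max:{end.strftime(fmt)}"


print(custom_date_tbs(date(2025, 1, 1), date(2025, 3, 31)))
# cdr:1,cd_min:01/01/2025,cd_max:03/31/2025
```

Remember that the commas and colons in this value will themselves be percent-encoded when the string is passed through a URL encoder; Google accepts the encoded form.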
Content Type Parameters
tbm (Type of Search)
Switches between different Google search types.
# web search (default)
https://www.google.com/search?q=proxy&tbm=
# image search
https://www.google.com/search?q=proxy+diagram&tbm=isch
# video search
https://www.google.com/search?q=proxy+tutorial&tbm=vid
# news search
https://www.google.com/search?q=proxy+regulation&tbm=nws
# shopping search
https://www.google.com/search?q=proxy+server&tbm=shop
# books search
https://www.google.com/search?q=web+scraping&tbm=bks
Display and Formatting Parameters
safe (SafeSearch)
Controls SafeSearch filtering.
# SafeSearch off
https://www.google.com/search?q=proxy&safe=off
# SafeSearch on
https://www.google.com/search?q=proxy&safe=active
filter (Duplicate Filter)
Controls whether Google collapses similar or duplicate results.
# show all results including duplicates
https://www.google.com/search?q=proxy&filter=0
# filter duplicates (default)
https://www.google.com/search?q=proxy&filter=1
Setting filter=0 is useful for SERP scraping when you want to see every result Google has indexed.
nfpr (No Auto-Correction)
Prevents Google from auto-correcting your query.
# prevent spelling correction
https://www.google.com/search?q=proxi+servr&nfpr=1
Useful when searching for specific misspellings, or for technical terms that Google might otherwise try to correct.
Advanced and Less-Known Parameters
as_sitesearch (Site Search)
Restricts results to a specific site; equivalent to the site: operator, but expressed as a parameter.
https://www.google.com/search?q=proxy+guide&as_sitesearch=reddit.com
as_qdr (Date Range)
An alternative date-range parameter.
https://www.google.com/search?q=proxy&as_qdr=m6 # past 6 months
as_epq (Exact Phrase)
Equivalent to wrapping the query in quotes.
https://www.google.com/search?q=proxy&as_epq=residential+proxy
as_oq (OR Terms)
Adds OR terms to the query.
# search for proxy OR vpn OR tunnel
https://www.google.com/search?q=setup+guide&as_oq=proxy+vpn+tunnel
as_eq (Exclude Terms)
Excludes specific terms.
# search for proxy but exclude "free"
https://www.google.com/search?q=proxy&as_eq=free
pws (Personalized Results)
Controls personalization; set pws=0 to disable personalized search results.
# no personalization
https://www.google.com/search?q=proxy&pws=0
Important for SEO research, where you want objective rankings rather than results influenced by your search history.
Building SERP Scraping URLs with Python
Here’s how to construct Google search URLs programmatically for SERP monitoring and SEO research.
```python
from urllib.parse import urlencode


def build_google_url(
    query,
    num=10,
    start=0,
    lang="en",
    country="us",
    time_range=None,
    search_type=None,
    safe="off",
    no_personalization=True,
    no_filter=False,
    exact_phrase=None,
    exclude_terms=None,
    site=None,
):
    """Build a Google search URL with the specified parameters."""
    params = {
        "q": query,
        "num": num,
        "start": start,
        "hl": lang,
        "gl": country,
        "safe": safe,
    }
    if no_personalization:
        params["pws"] = 0
    if no_filter:
        params["filter"] = 0
    if time_range:
        time_codes = {
            "hour": "qdr:h",
            "day": "qdr:d",
            "week": "qdr:w",
            "month": "qdr:m",
            "year": "qdr:y",
        }
        if time_range in time_codes:
            params["tbs"] = time_codes[time_range]
    if search_type:
        type_codes = {
            "images": "isch",
            "videos": "vid",
            "news": "nws",
            "shopping": "shop",
            "books": "bks",
        }
        if search_type in type_codes:
            params["tbm"] = type_codes[search_type]
    if exact_phrase:
        params["as_epq"] = exact_phrase
    if exclude_terms:
        params["as_eq"] = exclude_terms
    if site:
        params["as_sitesearch"] = site
    return f"https://www.google.com/search?{urlencode(params)}"


# examples
print(build_google_url("web scraping tools", num=100, country="us"))
print(build_google_url("proxy service", time_range="month", country="gb"))
print(build_google_url("best proxy", search_type="news", country="de"))
print(build_google_url("proxy guide", site="reddit.com", num=50))
```
Scraping Google SERPs with Proper URL Parameters
When scraping Google search results, the right URL parameters make a significant difference in data quality.
```python
import random
import time

from bs4 import BeautifulSoup
from curl_cffi import requests


class GoogleSerpScraper:
    def __init__(self, proxy=None):
        self.session = requests.Session(impersonate="chrome124")
        self.proxy = {"http": proxy, "https": proxy} if proxy else None

    def search(self, query, pages=1, country="us", lang="en"):
        """Scrape Google search results."""
        all_results = []
        for page in range(pages):
            # build_google_url is defined in the previous section
            url = build_google_url(
                query=query,
                num=10,
                start=page * 10,
                country=country,
                lang=lang,
                no_personalization=True,
            )
            response = self.session.get(
                url,
                headers={
                    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
                    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
                    "Accept-Language": f"{lang},en;q=0.8",
                },
                proxies=self.proxy,
            )
            if response.status_code == 200:
                all_results.extend(self.parse_results(response.text))
            else:
                print(f"page {page + 1}: status {response.status_code}")
            # delay between pages
            time.sleep(random.uniform(5, 12))
        return all_results

    def parse_results(self, html):
        """Parse organic results from Google SERP HTML."""
        soup = BeautifulSoup(html, "html.parser")
        results = []
        for div in soup.select("div.g"):
            title_el = div.select_one("h3")
            link_el = div.select_one("a")
            snippet_el = div.select_one("div.VwiC3b")
            if title_el and link_el:
                results.append({
                    "title": title_el.get_text(),
                    "url": link_el.get("href", ""),
                    "snippet": snippet_el.get_text() if snippet_el else "",
                })
        return results


# usage with a residential proxy
scraper = GoogleSerpScraper(proxy="http://user:pass@residential.proxy.com:port")
results = scraper.search("best proxy providers 2026", pages=3, country="us")
for i, r in enumerate(results, 1):
    print(f"{i}. {r['title']}")
    print(f"   {r['url']}")
```
When scraping Google, always use residential proxies; Google blocks datacenter IPs aggressively. Compare pricing across providers with the Proxy Cost Calculator on dataresearchtools.com.
Parameter Combinations for Common Use Cases
SEO Competitor Analysis
# see what ranks in US for your target keyword, no personalization
?q=residential+proxy+provider&gl=us&hl=en&num=100&pws=0&filter=0
Content Research
# find recent articles about a topic
?q=web+scraping+best+practices&tbs=qdr:m3&num=50&gl=us
Backlink Prospecting
# find sites linking to competitors
?q="competitor.com"+-site:competitor.com&num=100&filter=0
News Monitoring
# latest news about proxies
?q=proxy+regulation&tbm=nws&tbs=qdr:d&gl=us&hl=en
Finding Guest Post Opportunities
# find blogs accepting guest posts in your niche
?q=proxy+"write+for+us"+OR+"guest+post"&num=100
Technical Resource Discovery
# find PDFs and technical docs
?q=proxy+architecture+filetype:pdf&num=50
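These recurring combinations are easy to capture as named presets. A sketch under the assumption that you want reusable parameter bundles (the preset names and `preset_url` helper are my own):

```python
from urllib.parse import urlencode

# hypothetical preset names mirroring the query strings above
PRESETS = {
    "seo_analysis": {"gl": "us", "hl": "en", "num": 100, "pws": 0, "filter": 0},
    "content_research": {"tbs": "qdr:m3", "num": 50, "gl": "us"},
    "news_monitoring": {"tbm": "nws", "tbs": "qdr:d", "gl": "us", "hl": "en"},
}


def preset_url(query, preset):
    """Build a search URL from a named parameter preset."""
    return "https://www.google.com/search?" + urlencode({"q": query, **PRESETS[preset]})


print(preset_url("proxy regulation", "news_monitoring"))
```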
Parameters That No Longer Work
Google has deprecated several parameters over the years:
- ie and oe (input/output encoding) – Google now uses UTF-8 everywhere
- btnI (I’m Feeling Lucky) – still technically works but less useful for scraping
- as_rights (Creative Commons filter) – removed from web search
- cd (result number) – internal parameter, not user-controllable
Summary
Google search URL parameters give you precise control over search results for SEO research, SERP monitoring, and data collection. The most important parameters to remember are:
- q for the query, num for result count, start for pagination
- gl for country targeting, hl for language
- tbs=qdr: for time filtering
- tbm for search type (images, news, videos)
- pws=0 to disable personalization
- filter=0 to show all results including duplicates
Combine these parameters strategically to get exactly the data you need from Google’s SERPs, whether you’re monitoring rankings, researching competitors, or building search-powered tools.