How I Scraped Nearly 2,000 Bellingham School District Staff Emails
Jamisen Renoud
May 11th, 2025
Note: This is a completely public endpoint. No login or security was bypassed.
I was snooping around the Bellingham Public Schools website and went to the contact page.
When I looked in the Network tab in dev tools, I saw that every time you scrolled for more results on the page, it would send a request to https://bellinghamschools.org/wp-content/themes/schoolsites/directoryHandler.php?numPosts=9&pageNumber=1, incrementing the page number by 1 each time.
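Just to illustrate, the site's own JavaScript is effectively doing something like this (a minimal sketch, not the script I ended up with; the empty-body stop condition is my assumption about when the results run out):

import requests

BASE = "https://bellinghamschools.org/wp-content/themes/schoolsites/directoryHandler.php"

# Hypothetical paging loop mirroring what the page does on scroll:
# fetch page 1, 2, 3, ... until a page comes back empty.
page = 1
while True:
    r = requests.get(BASE, params={"numPosts": 9, "pageNumber": page}, timeout=15)
    r.raise_for_status()
    if not r.text.strip():  # assuming an empty body means no more results
        break
    print(f"got page {page}, {len(r.text)} bytes")
    page += 1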

This got me thinking. If you go to https://bellinghamschools.org/wp-content/themes/schoolsites/directoryHandler.php?numPosts=9&pageNumber=1 directly, you are greeted by a page with no CSS that just has photos and info: each person's photo, name, phone number, and other details. At the end of each entry is a mailto href with their email. So I created a simple Python script. The first issue was that I was pulling every page one by one, up to 193 pages (and I would get rate limited in like a second), so I just used ?numPosts=9999999999&pageNumber=1 and got everything at once.
I ended up with this code:
import requests
from bs4 import BeautifulSoup

URL = "https://bellinghamschools.org/wp-content/themes/schoolsites/directoryHandler.php?numPosts=9999999999&pageNumber=1"
HEADERS = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0"
}

try:
    # One request with a huge numPosts grabs the whole directory at once.
    r = requests.get(URL, headers=HEADERS, timeout=15)
    r.raise_for_status()

    soup = BeautifulSoup(r.text, "html.parser")

    # Collect every mailto: link; using a set dedupes repeated entries.
    emails = {
        a["href"].replace("mailto:", "").strip()
        for a in soup.find_all("a", href=True)
        if a["href"].startswith("mailto:")
    }

    # Write one email per line, sorted so runs are easy to diff.
    with open("emails.txt", "w") as f:
        for email in sorted(emails):
            f.write(email + "\n")

    print(f"[Yippe] Done. {len(emails)} emails saved to emails.txt.")
except Exception as e:
    print(f"[!] Failed: {e}")
It just sends one request and gets all the emails!!
It writes a file with one email per line; when I ran it today I got 1732 emails!
[Yippe] Done. 1732 emails saved to emails.txt.
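If you want a quick sanity check on the output, something like this works (a small sketch assuming emails.txt sits in the current directory and every line is a well-formed address):

# Count the collected emails and see how many distinct domains show up.
with open("emails.txt") as f:
    emails = [line.strip() for line in f if line.strip()]

print(len(emails), "emails")
print(len({e.split("@", 1)[1] for e in emails}), "distinct domains")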
This post is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.