Environmental risk is moving from a disclosure checkbox to a core underwriting variable. The EU Taxonomy, SFDR, and the ECB's climate stress-testing framework all push asset managers, lenders, and insurers toward quantifying physical environmental risk at the asset level — not just the country or sector level.
Air quality is one of the most tractable components: the data exists, it's validated, it covers Europe from 2013 to present (global from 2020), and it can be queried programmatically for any coordinate in a few lines of Python. Here's how to build a defensible location-level air quality risk score using the Jiskta SDK.
## Why air quality matters for ESG scoring
Air pollution creates direct financial risk in several asset classes:
- Real estate: properties in high-pollution zones face regulatory risk (clean air zones, diesel bans), health-related tenant churn, and potential haircuts in future valuations
- Infrastructure: logistics assets and warehouses near motorways carry long-term NO₂ exposure risk under EU Air Quality Directive tightening
- Lending/insurance: workplace exposure liability is growing as WHO guidelines diverge sharply from EU legal limits
- Supply chain: manufacturing facilities in high-PM2.5 zones face upcoming permit risk as IPPC/IED revisions tighten emission standards
## The scoring methodology
We'll build a score from three inputs:
- 5-year mean NO₂ — chronic exposure signal, traffic-dominated
- 5-year mean PM2.5 — broad health burden, includes industrial and heating sources
- Trend (OLS slope) — is air quality improving or deteriorating? A site improving at −2 µg/m³/year is fundamentally different from one that's flat despite policy pressure
Each component is scored 0–100 relative to EU thresholds and WHO guidelines, then combined into an A–D letter grade.
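To make the rescaling concrete, here is the component math worked through for one hypothetical site (the input values are illustrative, not real measurements). Each pollutant is rescaled linearly between a low floor and the EU legal limit, and the trend is rescaled between −3 and +1 µg/m³/year, matching the formulas in the full code below:

```python
def clamp(x: float) -> float:
    """Clip a component score to the 0-100 range."""
    return max(0.0, min(100.0, x))

# Illustrative inputs for one hypothetical site (not real data)
no2_mean, pm25_mean, no2_slope = 20.0, 12.0, -0.8

# NO2: 5 ug/m3 (background floor) -> 100, 40 ug/m3 (EU limit) -> 0
no2_score = clamp(100 * (1 - (no2_mean - 5) / (40 - 5)))    # ~57.1
# PM2.5: 3 ug/m3 -> 100, 25 ug/m3 (EU limit) -> 0
pm25_score = clamp(100 * (1 - (pm25_mean - 3) / (25 - 3)))  # ~59.1
# Trend: -3 ug/m3/yr -> 100, +1 ug/m3/yr -> 0
trend_score = clamp(100 * (-no2_slope + 1) / 4)             # = 45.0

composite = 0.40 * no2_score + 0.40 * pm25_score + 0.20 * trend_score
print(round(composite, 1))  # 55.5 -> grade "B" (>= 55, < 75)
```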
## The code
```python
from jiskta import JisktaClient
import pandas as pd

client = JisktaClient("your_key")

def air_quality_score(lat: float, lon: float) -> dict:
    # 1. Pull 5-year monthly means for NO₂ and PM2.5 (2020–2024)
    df_no2 = client.query(lat=lat, lon=lon,
                          start="2020-01", end="2024-12",
                          variables=["no2"], aggregate="monthly")
    df_pm25 = client.query(lat=lat, lon=lon,
                           start="2020-01", end="2024-12",
                           variables=["pm2p5"], aggregate="monthly")
    no2_mean = df_no2["no2_mean"].mean()
    pm25_mean = df_pm25["pm2p5_mean"].mean()

    # 2. Pull long-term trend (2013–2024 OLS slope, µg/m³/year)
    df_trend = client.query(lat=lat, lon=lon,
                            start="2013-01", end="2024-12",
                            variables=["no2"], aggregate="trend")
    no2_slope = df_trend["slope"].iloc[0]

    # 3. Score each component (100 = best, 0 = worst)
    # EU legal limit NO₂: 40 µg/m³ | WHO guideline: 10 µg/m³
    no2_score = max(0, min(100, 100 * (1 - (no2_mean - 5) / (40 - 5))))
    # EU legal limit PM2.5: 25 µg/m³ | WHO guideline: 5 µg/m³
    pm25_score = max(0, min(100, 100 * (1 - (pm25_mean - 3) / (25 - 3))))
    # Trend: −3 µg/m³/yr → 100 (improving fast), +1 µg/m³/yr → 0 (worsening)
    trend_score = max(0, min(100, 100 * (-no2_slope + 1) / 4))

    composite = no2_score * 0.40 + pm25_score * 0.40 + trend_score * 0.20
    grade = ("A" if composite >= 75
             else "B" if composite >= 55
             else "C" if composite >= 35
             else "D")

    return {
        "grade": grade, "score": round(composite, 1),
        "no2_mean": round(no2_mean, 1), "pm25_mean": round(pm25_mean, 1),
        "no2_slope": round(no2_slope, 2),
    }
```

## Example output
Running this for four different asset locations (data verified March 2026):
## Extending the model
The example above uses 3 SDK calls per asset: NO₂ monthly means (2020–2024, 60 months),
PM2.5 monthly means (60 months), and the NO₂ trend (2013–2024, 144 months).
For a point query, credits are charged as 1 tile × months × 1 variable:
60 + 60 + 144 = 264 credits per asset.
Scoring 100 assets costs roughly 26,400 credits — about €28 at Starter pricing.
For a portfolio of 1,000 assets, you're looking at roughly €280.
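The cost arithmetic is easy to encode. The helper below is a sketch that reproduces the figures above, assuming point queries are billed as 1 tile × months × 1 variable and that Starter pricing is linear at roughly €28 per 26,400 credits (both taken from the numbers in this section):

```python
def score_cost(n_assets: int, eur_per_credit: float = 28 / 26_400) -> tuple[int, float]:
    """Credits and approximate EUR cost for the three-call score above."""
    # NO2 monthly (60 months) + PM2.5 monthly (60) + NO2 trend (144)
    per_asset = 60 + 60 + 144  # = 264 credits per asset
    credits = per_asset * n_assets
    return credits, credits * eur_per_credit

print(score_cost(100))    # (26400, ~28 EUR)
print(score_cost(1_000))  # (264000, ~280 EUR)
```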
A few obvious extensions:
- Add ozone (O₃): relevant for outdoor-air-exposed assets; add `o3` to the variables list
- Exceedance hours: use `aggregate=exceedance&threshold=25` to count hours above 25 µg/m³ (the EU annual-mean PM2.5 limit) — a more granular risk metric than annual means
- Correlate with ERA5: boundary layer height predicts local dispersion capacity; a site with low BLH has inherently worse dispersion and higher chronic exposure risk regardless of source intensity
- Scenario delta: compare 2013–2017 vs 2019–2023 means to assess whether a location's trajectory is accelerating or stalling
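The scenario-delta idea needs nothing beyond pandas once a monthly series is in hand. A minimal sketch, using a synthetic declining series as a stand-in for a real `client.query` result (the numbers are illustrative only):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a monthly NO2 series (illustrative, not real data):
# a site improving by ~1 ug/m3 per year from a 35 ug/m3 starting point
idx = pd.date_range("2013-01", "2023-12", freq="MS")
no2 = pd.Series(35 - np.arange(len(idx)) / 12.0, index=idx)

early = no2.loc["2013":"2017"].mean()  # baseline period mean
late = no2.loc["2019":"2023"].mean()   # recent period mean
delta = late - early                   # negative -> still improving

print(round(early, 1), round(late, 1), round(delta, 1))  # 32.5 26.5 -6.0
```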
## Scaling to a portfolio
```python
import concurrent.futures

portfolio = [
    {"id": "ASSET-001", "lat": 48.85, "lon": 2.35},   # Paris HQ
    {"id": "ASSET-002", "lat": 52.52, "lon": 13.40},  # Berlin logistics hub
    {"id": "ASSET-003", "lat": 50.06, "lon": 19.94},  # Kraków retail
    # ... up to thousands of assets
]

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    futures = {
        pool.submit(air_quality_score, a["lat"], a["lon"]): a["id"]
        for a in portfolio
    }
    results = {
        asset_id: f.result()
        for f, asset_id in futures.items()
    }

df_out = pd.DataFrame(results).T
df_out.to_csv("portfolio_air_quality_scores.csv")
print(df_out[["grade", "score", "no2_mean", "pm25_mean", "no2_slope"]])
```

With 10 parallel workers, scoring 100 assets takes under 30 seconds. The output CSV is ready to drop into your existing ESG reporting workflow.
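Once the scores are in a DataFrame, a portfolio-level rollup is a few lines of pandas. A sketch with hand-made rows standing in for real `air_quality_score` results (the asset IDs and values below are illustrative; the column names match the score dict above):

```python
import pandas as pd

# Illustrative rows standing in for air_quality_score() output (not real data)
df_out = pd.DataFrame({
    "ASSET-001": {"grade": "B", "score": 58.2, "no2_mean": 24.1, "pm25_mean": 11.0, "no2_slope": -0.9},
    "ASSET-002": {"grade": "A", "score": 77.5, "no2_mean": 14.8, "pm25_mean": 8.2, "no2_slope": -1.4},
    "ASSET-003": {"grade": "B", "score": 61.0, "no2_mean": 21.3, "pm25_mean": 10.5, "no2_slope": -1.1},
}).T

# Grade distribution across the portfolio, and the assets needing attention first
print(df_out["grade"].value_counts())
worst = df_out.sort_values("score").head(10)  # lowest-scoring assets
print(worst[["grade", "score"]])
```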
## Ready to score your portfolio?
Sign up for free — 500 credits included, enough to run the full 264-credit score on your first asset plus shorter-window checks on several more, with no card required.
Get started free →