NYC Permit Pulse uses a custom coordinate projection system to map real-world latitude/longitude coordinates onto the pixel-art isometric canvas from isometric.nyc.

Overview

The projection converts GPS coordinates to image pixel positions using:
  1. Lat/lng → meters - Equirectangular (flat-earth) approximation for the local NYC area
  2. Rotation - Camera azimuth angle (-15°)
  3. Elevation projection - Camera elevation angle (-45°)
  4. Pixel scaling - Meters per pixel ratio (0.293 m/px)
  5. Quadrant conversion - 512px quadrants to match tile system

Generation Config

The isometric map was generated with these parameters (from generation_config.json):
export const MAP_CONFIG: MapConfig = {
  seed: { lat: 40.7484, lng: -73.9857 },  // ~Empire State Building
  camera_azimuth_degrees: -15,
  camera_elevation_degrees: -45,
  width_px: 1024,
  height_px: 1024,
  view_height_meters: 300,
  tile_step: 0.5,
};
Assembled image dimensions: 123904 × 100864 px = 242 × 197 quadrants at 512px each
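The config determines the derived scale and quadrant counts directly; a quick arithmetic sanity check (plain arithmetic only, no project code assumed):

```typescript
// Derived quantities from the generation config above.
const viewHeightMeters = 300;
const heightPx = 1024;
const metersPerPixel = viewHeightMeters / heightPx; // 0.29296875 ≈ 0.293 m/px

// Each quadrant is width_px * tile_step = 1024 * 0.5 = 512 px,
// so the assembled image divides evenly into quadrants.
const quadrantPx = 1024 * 0.5;
const quadrantsX = 123904 / quadrantPx; // 242
const quadrantsY = 100864 / quadrantPx; // 197
```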

Calibration

The projection was calibrated using 15 ground-truth points across all 5 NYC boroughs plus New Jersey. Each point was:
  1. Clicked on the map using OpenSeadragon’s window.__osd debug utility
  2. Tagged with precise lat/lng from Google Maps
  3. Used in a least-squares solver to determine the seed pixel position
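Because the projection geometry is fixed, each calibration point constrains only the unknown seed translation, and the least-squares seed reduces to the mean of the per-point estimates. A minimal sketch of that fit (the CalibrationPoint interface and fitSeedPx name are illustrative, not the app's actual solver):

```typescript
interface CalibrationPoint {
  clickedPx: { x: number; y: number };         // pixel position clicked on the map
  projectedOffsetPx: { x: number; y: number }; // latlngToQuadrantCoords() result * 512
}

// For a translation-only model, seed ≈ clicked - offset at every point,
// and the least-squares solution is simply the mean of those estimates.
function fitSeedPx(points: CalibrationPoint[]): { x: number; y: number } {
  let sumX = 0;
  let sumY = 0;
  for (const p of points) {
    sumX += p.clickedPx.x - p.projectedOffsetPx.x;
    sumY += p.clickedPx.y - p.projectedOffsetPx.y;
  }
  return { x: sumX / points.length, y: sumY / points.length };
}
```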

Calibration Results

// Seed pixel position from 15-point least-squares fit
export const SEED_PX = { x: 45059, y: 43479 };
This agrees well with the tile metadata origin: (-87, -84) quadrants → (44544, 43008) pixels.
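The comparison works because the tile origin is expressed in quadrants relative to the seed: the assembled image's top-left quadrant is (-87, -84), so the metadata alone implies a seed pixel of -origin × 512. As a sketch:

```typescript
const QUADRANT_PX = 512;
const TILE_ORIGIN_QUADRANTS = { qx: -87, qy: -84 }; // from tile metadata

// Seed pixel position implied by the tile metadata alone.
const seedFromMetadata = {
  x: -TILE_ORIGIN_QUADRANTS.qx * QUADRANT_PX, // 44544
  y: -TILE_ORIGIN_QUADRANTS.qy * QUADRANT_PX, // 43008
};
```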
Accuracy:
  • RMS residual: 28px (~8 meters) across full NYC metro area
  • Max error: 63px (~18 meters)
  • Projection: Isotropic - meters per pixel X ≈ meters per pixel Y ≈ 0.293 m/px

Projection Formula

The latlngToQuadrantCoords() function implements the exact algorithm from isometric-nyc/generation/shared.py:
export function latlngToQuadrantCoords(
  config: MapConfig,
  lat: number,
  lng: number
): { qx: number; qy: number } {
  const {
    seed,
    camera_azimuth_degrees,
    camera_elevation_degrees,
    width_px,
    height_px,
    view_height_meters,
    tile_step,
  } = config;

  const metersPerPixel = view_height_meters / height_px;

  // 1. Convert lat/lng difference to meters
  const deltaNorthMeters = (lat - seed.lat) * 111111.0;
  const deltaEastMeters =
    (lng - seed.lng) * 111111.0 * Math.cos((seed.lat * Math.PI) / 180);

  // 2. Inverse rotation by azimuth
  const azimuthRad = (camera_azimuth_degrees * Math.PI) / 180;
  const cosA = Math.cos(azimuthRad);
  const sinA = Math.sin(azimuthRad);

  const deltaRotX = deltaEastMeters * cosA - deltaNorthMeters * sinA;
  const deltaRotY = deltaEastMeters * sinA + deltaNorthMeters * cosA;

  // 3. Convert to pixel shifts
  const elevRad = (camera_elevation_degrees * Math.PI) / 180;
  const sinElev = Math.sin(elevRad);

  const shiftRightMeters = deltaRotX;
  const shiftUpMeters = -deltaRotY * sinElev;

  const shiftXPx = shiftRightMeters / metersPerPixel;
  const shiftYPx = shiftUpMeters / metersPerPixel;

  // 4. Convert to quadrant coordinates
  const quadrantStepPx = width_px * tile_step; // 512

  const qx = shiftXPx / quadrantStepPx;
  const qy = -shiftYPx / quadrantStepPx; // y increases downward

  return { qx, qy };
}

Step-by-Step Breakdown

Step 1: Lat/Lng to Meters

Use the equirectangular (small-area) approximation to convert coordinate deltas to meters:
deltaNorth = (lat - seedLat) * 111111.0  // ~111km per degree latitude
deltaEast = (lng - seedLng) * 111111.0 * cos(seedLat)  // adjusted for longitude

Step 2: Rotate by Camera Azimuth

Apply inverse rotation matrix for -15° azimuth:
rotX = deltaEast * cos(azimuth) - deltaNorth * sin(azimuth)
rotY = deltaEast * sin(azimuth) + deltaNorth * cos(azimuth)

Step 3: Project by Camera Elevation

Apply -45° elevation to vertical axis:
shiftRight = rotX
shiftUp = -rotY * sin(elevation)

Step 4: Convert to Pixels

Scale meters to pixels using metersPerPixel = 300m / 1024px ≈ 0.293 m/px:
shiftXPx = shiftRight / metersPerPixel
shiftYPx = shiftUp / metersPerPixel

Step 5: Convert to Quadrants

Divide by quadrant size (512px) to get relative quadrant coordinates:
qx = shiftXPx / 512
qy = -shiftYPx / 512  // flip y-axis (OpenSeadragon uses top-left origin)

Image Pixel Conversion

The public API latlngToImagePx() adds the calibrated seed pixel offset:
export const IMAGE_DIMS = { width: 123904, height: 100864 };

export function latlngToImagePx(lat: number, lng: number): { x: number; y: number } {
  const { qx, qy } = latlngToQuadrantCoords(MAP_CONFIG, lat, lng);
  return {
    x: SEED_PX.x + qx * 512,
    y: SEED_PX.y + qy * 512,
  };
}
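The forward projection inverts step by step, which is useful for features like "click the map to get a lat/lng". A sketch of the inverse (imagePxToLatlng is a hypothetical name, not an existing export; the constants are inlined from the config and calibration above):

```typescript
const SEED_PX = { x: 45059, y: 43479 };
const SEED_LATLNG = { lat: 40.7484, lng: -73.9857 };
const METERS_PER_PIXEL = 300 / 1024;
const AZIMUTH_RAD = (-15 * Math.PI) / 180;
const ELEV_RAD = (-45 * Math.PI) / 180;

// Undo each forward step in reverse order: pixel offsets → meter shifts →
// un-project elevation → un-rotate azimuth → meters back to degrees.
function imagePxToLatlng(x: number, y: number): { lat: number; lng: number } {
  const shiftXPx = x - SEED_PX.x;
  const shiftYPx = SEED_PX.y - y; // image y increases downward

  const shiftRightMeters = shiftXPx * METERS_PER_PIXEL;
  const shiftUpMeters = shiftYPx * METERS_PER_PIXEL;

  const rotX = shiftRightMeters;
  const rotY = -shiftUpMeters / Math.sin(ELEV_RAD); // undo elevation foreshortening

  const cosA = Math.cos(AZIMUTH_RAD);
  const sinA = Math.sin(AZIMUTH_RAD);
  const deltaEastMeters = rotX * cosA + rotY * sinA; // inverse (transposed) rotation
  const deltaNorthMeters = -rotX * sinA + rotY * cosA;

  const lat = SEED_LATLNG.lat + deltaNorthMeters / 111111.0;
  const lng =
    SEED_LATLNG.lng +
    deltaEastMeters / (111111.0 * Math.cos((SEED_LATLNG.lat * Math.PI) / 180));
  return { lat, lng };
}
```

At the seed pixel every shift is zero, so imagePxToLatlng(45059, 43479) returns the seed lat/lng exactly.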

Example

Empire State Building: (40.7484, -73.9857) (the seed point)
const { x, y } = latlngToImagePx(40.7484, -73.9857);
// { x: 45059, y: 43479 }  — exactly the seed pixel
Times Square: (40.7580, -73.9855)
const { x, y } = latlngToImagePx(40.7580, -73.9855);
// { x: 45237, y: 43064 }  — 178px east and 415px up of the seed

OpenSeadragon Viewport Coordinates

Critical: OpenSeadragon uses image width as the unit for both X and Y axes. When converting image pixels to OSD viewport points:
// ✓ CORRECT
const vpX = imgX / IMAGE_DIMS.width;   // divide by 123904
const vpY = imgY / IMAGE_DIMS.width;   // divide by 123904 (width, not height!)

// ✗ WRONG
const vpY = imgY / IMAGE_DIMS.height;  // causes ~22% downward offset
This is because height (100864) ≠ width (123904). Dividing Y by height compresses the vertical axis.
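A small helper makes the convention hard to get wrong (the helper name is illustrative); OpenSeadragon's viewport.imageToViewportCoordinates() performs the same width-normalized conversion internally:

```typescript
const IMAGE_DIMS = { width: 123904, height: 100864 };

// OSD viewport space: the image's WIDTH spans [0, 1] on BOTH axes,
// so X and Y are both normalized by width.
function imagePxToViewport(imgX: number, imgY: number): { x: number; y: number } {
  return {
    x: imgX / IMAGE_DIMS.width,
    y: imgY / IMAGE_DIMS.width, // width, not height
  };
}
```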

Placing Markers

The placeMarkers() function converts lat/lng to viewport coordinates and adds overlays:
filteredPermits.forEach(permit => {
  const lat = parseFloat(permit.latitude ?? '');
  const lng = parseFloat(permit.longitude ?? '');
  
  // 1. Convert to image pixels
  const { x: imageX, y: imageY } = latlngToImagePx(lat, lng);
  
  // 2. Convert to viewport coordinates (BOTH divided by width!)
  const vpX = imageX / IMAGE_DIMS.width;
  const vpY = imageY / IMAGE_DIMS.width;
  
  // 3. Add OpenSeadragon overlay
  viewer.addOverlay({
    element: markerElement,
    location: new OpenSeadragon.Point(vpX, vpY),
    placement: OpenSeadragon.Placement.CENTER,
  });
});

Bounds Checking

Not all NYC coordinates fall within the rendered map area. We filter out markers outside the image bounds:
const { x: imgX, y: imgY } = latlngToImagePx(lat, lng);

if (imgX < 0 || imgX > IMAGE_DIMS.width || imgY < 0 || imgY > IMAGE_DIMS.height) {
  return; // Skip this marker
}
This is especially important for:
  • Western New Jersey - west of the map edge
  • Far Rockaway - south of the map edge
  • Northern Bronx - potentially north of the map edge
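The filter can be folded into a small hypothetical helper that returns null for off-map points (the keepIfOnMap name is illustrative; the bounds logic is what matters):

```typescript
const IMAGE_DIMS = { width: 123904, height: 100864 };

// Returns the point unchanged when it lies on the rendered map, else null.
function keepIfOnMap(p: { x: number; y: number }): { x: number; y: number } | null {
  const onMap =
    p.x >= 0 && p.x <= IMAGE_DIMS.width && p.y >= 0 && p.y <= IMAGE_DIMS.height;
  return onMap ? p : null;
}
```

A caller can then write `const px = keepIfOnMap(latlngToImagePx(lat, lng)); if (!px) return;` and skip the marker in one step.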

Accuracy Validation

We validated the projection accuracy by comparing:
  1. Manual markers - Clicked locations on the map
  2. Google Maps coords - Precise lat/lng for the same landmarks
  3. Projected positions - Using latlngToImagePx()

Sample Validation Points

Location               Manual Pixel    Projected Pixel  Error (px)
Empire State Building  (45059, 43479)  (45059, 43479)   0 (seed)
Times Square           (45240, 43070)  (45237, 43064)   6.7
Brooklyn Bridge        (46120, 44580)  (46095, 44552)   33.5
Yankee Stadium         (44820, 40950)  (44785, 40924)   41.4
Staten Island Ferry    (45980, 46210)  (45945, 46173)   50.3
All errors fall below the 63px maximum, consistent with the 28px RMS residual measured across the full calibration set.

Mathematical Derivation

The projection formula is derived from the inverse of the image generation process:
  1. World space - Real-world GPS coordinates in meters
  2. Camera rotation - Rotate horizontal plane by azimuth angle
  3. Camera projection - Apply elevation angle (isometric = -45°)
  4. Screen space - Convert to 2D pixel canvas
The isometric projection preserves:
  • Parallel lines - Remain parallel (no perspective distortion)
  • Isotropy - Equal scaling in both axes
  • Angles - Preserved within horizontal plane

Performance Considerations

The projection is fast enough to run on every permit (5,000+ times per render):
// Benchmark: ~0.001ms per call on modern hardware
const start = performance.now();
for (let i = 0; i < 5000; i++) {
  latlngToImagePx(40.7580, -73.9855);
}
console.log(`${performance.now() - start}ms total`); // ~5-10ms
For comparison, the OpenSeadragon overlay addition (addOverlay) takes ~0.1ms per marker, making it the bottleneck (which is why we chunk it).
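The chunking mentioned above can be sketched as a scheduler-driven batch loop (a hypothetical helper, not the app's actual implementation); in the browser you would pass requestAnimationFrame as the scheduler so each batch gets its own frame:

```typescript
type Scheduler = (cb: () => void) => void;

// Run tasks (e.g. one addOverlay call each) in batches so that adding
// thousands of markers never blocks a single frame.
function runChunked(
  tasks: Array<() => void>,
  batchSize: number,
  schedule: Scheduler
): void {
  let i = 0;
  const step = () => {
    const end = Math.min(i + batchSize, tasks.length);
    for (; i < end; i++) tasks[i]();
    if (i < tasks.length) schedule(step);
  };
  schedule(step);
}
```

In the app this might be invoked as `runChunked(markers.map(m => () => viewer.addOverlay(m)), 200, cb => requestAnimationFrame(cb))`.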

Next Steps

  • Architecture - Understand how the projection integrates with the app architecture
  • Data Sources - Learn about the lat/lng data from NYC Open Data
