Download historical settlement files, I90 data, and market results with automatic caching
Archives are downloadable files published by ESIOS, including I90 balance files, settlement data, market results, and more. The ArchivesManager handles listing, configuration, and downloads with automatic date-range iteration.
```python
with ESIOSClient() as client:
    # Use local catalog (153 archives)
    archives = client.archives.list(source="local")
    print(archives.head())

    # Or fetch from API (only ~24 archives)
    archives = client.archives.list(source="api")
```
The local catalog covers all 153 archives, including I90 files, settlement data, and market results; the API endpoint returns only a subset (about 24).
```python
# Download file for data date 2025-01-15
files = client.archives.download(1, date="2025-01-15", date_type="datos")

# Download file published on 2025-01-16
files = client.archives.download(1, date="2025-01-16", date_type="publicacion")
```
Before downloading, archives must be configured with date parameters. This resolves the download URL:
```python
handle = client.archives.get(1)

# Configure for a single date
handle.configure(date="2025-01-15", date_type="datos")

# Or configure for a date range
handle.configure(start="2025-01-01", end="2025-01-31", date_type="datos")

# Access resolved metadata
print(handle.name)           # "I90DIA_20250115"
print(handle._download_url)  # Full S3 URL
```
From src/esios/managers/archives.py:46-69:
```python
def configure(
    self,
    *,
    date: str | None = None,
    start: str | None = None,
    end: str | None = None,
    date_type: str = "datos",
    locale: str = "es",
) -> None:
    """Configure archive parameters and resolve the download URL."""
    params: dict[str, str] = {"date_type": date_type, "locale": locale}
    if date:
        params["date"] = date + "T00:00:00"
    elif start and end:
        params["start_date"] = start + "T00:00:00"
        params["end_date"] = end + "T23:59:59"
    else:
        raise ValueError("Provide either 'date', or both 'start' and 'end'.")
    response = self._manager._get(f"archives/{self.id}", params=params)
    self.metadata = response
    download = self.metadata["archive"]["download"]
    self.name = download["name"]
    self._download_url = ESIOS_API_URL + download["url"]
```
The download() convenience method calls configure() automatically. You rarely need to call it manually.
```python
with ESIOSClient() as client:
    # First call: downloads from API
    files = client.archives.download(1, date="2025-01-15")

    # Second call: uses cache (instant)
    files = client.archives.download(1, date="2025-01-15")
    # Cache hit: ~/.cache/esios/archives/1/I90DIA_20250115
```
Archive caching is automatic and has no TTL. Files are cached indefinitely unless manually cleared.
```python
@staticmethod
def _copy_to_output(cache_folder: Path, output_dir: Path) -> None:
    """Copy cached files to a user-specified output directory."""
    dest = output_dir / cache_folder.name
    if dest.exists() and any(dest.iterdir()):
        logger.info("Output already exists: %s", dest)
        return
    dest.mkdir(parents=True, exist_ok=True)
    for src_file in cache_folder.iterdir():
        if src_file.is_file():
            shutil.copy2(src_file, dest / src_file.name)
    logger.info("Copied to %s", dest)
```
Files remain in the cache even when copied to output_dir. Subsequent downloads reuse the cache.
```python
with ESIOSClient() as client:
    files = client.archives.download(
        archive_id=1,
        start="2020-01-01",  # Some dates may not exist
        end="2025-01-31",
    )
    # Failed dates are logged and skipped
    # Returns all successfully downloaded files
```
From src/esios/managers/archives.py:128-134:
```python
try:
    self.configure(start=s, end=e, date_type=date_type)
    content = self._manager._client.download(self._download_url)
except (APIResponseError, Exception) as exc:
    logger.warning("Failed to download %s to %s: %s — skipping.", s, e, exc)
    current = chunk_end + timedelta(days=1)
    continue
```
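The excerpt shows how a failed chunk is skipped by advancing `current` past `chunk_end`. The surrounding chunking loop is not shown; the sketch below illustrates the same advance-by-one-day pattern as a standalone generator, with an assumed 31-day chunk size (the library's actual chunk size is not given in the excerpt):

```python
from datetime import date, timedelta
from typing import Iterator


def iter_date_chunks(
    start: str, end: str, chunk_days: int = 31
) -> Iterator[tuple[str, str]]:
    """Yield (chunk_start, chunk_end) ISO-date pairs covering [start, end].

    Each chunk spans at most chunk_days days, and the next chunk starts
    the day after the previous chunk ended (current = chunk_end + 1 day).
    """
    current = date.fromisoformat(start)
    final = date.fromisoformat(end)
    while current <= final:
        chunk_end = min(current + timedelta(days=chunk_days - 1), final)
        yield current.isoformat(), chunk_end.isoformat()
        current = chunk_end + timedelta(days=1)
```

Because each chunk is downloaded independently, a failure in one chunk (a missing date, a transient error) does not abort the rest of the range.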