Endpoint
Downloads the complete vehicle registration database as a CSV file. This endpoint provides a full export of all records stored in data/registros.csv.
Request Parameters
This endpoint accepts no parameters.
Response
The endpoint returns the CSV file as a downloadable attachment.
Status Code: 200 OK
Content-Type: text/csv; charset=utf-8
Content-Disposition: attachment; filename="registros.csv"
CSV Structure
The downloaded CSV file contains the following columns:
id: Unique identifier for each record (auto-incremented)
fecha_hora: Timestamp when the record was created, format: YYYY-MM-DD HH:MM:SS
matricula: Extracted license plate number (uppercase, alphanumeric plus hyphens, max 10 characters)
propietario: Name of the vehicle owner (may be empty)
tipo_vehiculo: Type of vehicle (e.g., "Sedan", "SUV", "Motorcycle"; may be empty)
observacion: Additional notes or observations about the record (may be empty)
imagen: Filename of the uploaded image stored in the uploads/ directory
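A quick way to confirm that a downloaded export matches this structure is to compare the header row against the expected column list. A minimal sketch (the inline sample string stands in for a real `response.text` from this endpoint):

```python
import csv
from io import StringIO

EXPECTED_COLUMNS = ["id", "fecha_hora", "matricula", "propietario",
                    "tipo_vehiculo", "observacion", "imagen"]

def validate_export(csv_text):
    """Parse the export and raise if the header row is unexpected."""
    reader = csv.DictReader(StringIO(csv_text))
    if reader.fieldnames != EXPECTED_COLUMNS:
        raise ValueError(f"Unexpected columns: {reader.fieldnames}")
    return list(reader)

# Stand-in for the body returned by GET /descargar
sample = ("id,fecha_hora,matricula,propietario,tipo_vehiculo,observacion,imagen\n"
          "1,2026-03-04 15:30:45,ABC1234,Juan Pérez,Sedan,,img.jpg\n")
records = validate_export(sample)
```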
Source Code Implementation
@app.route("/descargar")
def descargar():
    ensure_csv()
    return send_file(DATA_PATH, as_attachment=True, download_name="registros.csv")
The endpoint uses Flask’s send_file() function with:
DATA_PATH: Path to data/registros.csv
as_attachment=True: Forces browser download instead of display
download_name="registros.csv": Specifies the filename for download
Request Examples
cURL
# Download to file
curl -O http://localhost:5000/descargar
# Or specify output filename
curl -o my_records.csv http://localhost:5000/descargar
# View in terminal
curl http://localhost:5000/descargar
Python Requests
import requests

url = "http://localhost:5000/descargar"
response = requests.get(url)

if response.status_code == 200:
    # Save to file
    with open('registros.csv', 'wb') as f:
        f.write(response.content)
    print("CSV downloaded successfully")

# Or process directly
import csv
from io import StringIO

csv_data = response.text
reader = csv.DictReader(StringIO(csv_data))
records = list(reader)
for record in records:
    print(f"ID: {record['id']}, Plate: {record['matricula']}")
JavaScript Fetch
fetch('http://localhost:5000/descargar')
  .then(response => response.blob())
  .then(blob => {
    // Create download link
    const url = window.URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.href = url;
    a.download = 'registros.csv';
    document.body.appendChild(a);
    a.click();
    a.remove();
    window.URL.revokeObjectURL(url);
  })
  .catch(error => console.error('Error:', error));
wget
# Download with wget
wget http://localhost:5000/descargar -O registros.csv

# Or use the default filename
wget http://localhost:5000/descargar
Browser
Simply navigate to http://localhost:5000/descargar and the browser will automatically download the file.
Sample CSV Output
id,fecha_hora,matricula,propietario,tipo_vehiculo,observacion,imagen
1,2026-03-04 15:30:45,ABC1234,Juan Pérez,Sedan,Vehículo de entrega,matricula_20260304_153045.jpg
2,2026-03-04 16:15:22,XYZ5678,María García,SUV,Acceso autorizado,matricula_20260304_161522.jpg
3,2026-03-04 17:20:10,DEF9012,Carlos López,Motorcycle,,matricula_20260304_172010.jpg
4,2026-03-04 18:45:33,GHI3456,Ana Martínez,Truck,Proveedor frecuente,matricula_20260304_184533.jpg
5,2026-03-04 19:12:08,NO_DETECTADA,Pedro Sánchez,Sedan,Placa ilegible,matricula_20260304_191208.jpg
Use Cases
1. Data Backup
Regularly download the CSV to create backups:
#!/bin/bash
# Daily backup script
DATE=$(date +%Y%m%d)
curl -o "backup_${DATE}.csv" http://localhost:5000/descargar
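Daily downloads accumulate quickly, so a backup script usually pairs with a retention step. A minimal sketch of a hypothetical `rotate_backups` helper that keeps only the newest N timestamped backups (the `backup_YYYYMMDD.csv` naming is an assumption matching the script above; the demo runs against a throwaway directory):

```python
import os
import tempfile

def rotate_backups(backup_dir, keep=7):
    """Delete all but the `keep` newest backup_*.csv files (hypothetical helper).

    Zero-padded date stamps sort lexicographically, so the oldest files
    come first in the sorted list.
    """
    backups = sorted(
        f for f in os.listdir(backup_dir)
        if f.startswith("backup_") and f.endswith(".csv")
    )
    for old in backups[:-keep]:
        os.remove(os.path.join(backup_dir, old))

# Demo: a directory with ten dated backups, trimmed down to seven
demo_dir = tempfile.mkdtemp()
for day in range(1, 11):
    open(os.path.join(demo_dir, f"backup_202603{day:02d}.csv"), "w").close()
rotate_backups(demo_dir, keep=7)
remaining = sorted(os.listdir(demo_dir))
```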
2. Data Analysis
Import the CSV into analysis tools:
import pandas as pd
import requests
from io import StringIO

response = requests.get('http://localhost:5000/descargar')
df = pd.read_csv(StringIO(response.text))

# Analyze data
print(f"Total records: {len(df)}")
print(f"Unique plates: {df['matricula'].nunique()}")
print(f"Most common vehicle type: {df['tipo_vehiculo'].mode()[0]}")

# Group by date
df['fecha'] = pd.to_datetime(df['fecha_hora']).dt.date
daily_counts = df.groupby('fecha').size()
print("\nRecords per day:")
print(daily_counts)
3. Report Generation
Generate reports from the exported data:
import csv
import requests
from io import StringIO

response = requests.get('http://localhost:5000/descargar')
reader = csv.DictReader(StringIO(response.text))

# Generate summary report
records = list(reader)
print("=== VEHICLE REGISTRATION REPORT ===")
print(f"Total Vehicles: {len(records)}")
print("\nRecent Entries:")
for record in records[-5:]:
    print(f"{record['fecha_hora']}: {record['matricula']} - {record['propietario']}")
4. Database Migration
Export data for migration to another system:
import requests
import csv
from io import StringIO
import sqlite3

# Download CSV
response = requests.get('http://localhost:5000/descargar')
reader = csv.DictReader(StringIO(response.text))

# Import to SQLite
conn = sqlite3.connect('registros.db')
cursor = conn.cursor()
cursor.execute('''
    CREATE TABLE IF NOT EXISTS registros (
        id INTEGER PRIMARY KEY,
        fecha_hora TEXT,
        matricula TEXT,
        propietario TEXT,
        tipo_vehiculo TEXT,
        observacion TEXT,
        imagen TEXT
    )
''')
for record in reader:
    cursor.execute('''
        INSERT INTO registros VALUES (?, ?, ?, ?, ?, ?, ?)
    ''', (record['id'], record['fecha_hora'], record['matricula'],
          record['propietario'], record['tipo_vehiculo'],
          record['observacion'], record['imagen']))
conn.commit()
conn.close()
print("Data imported to SQLite successfully")
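A simple sanity check after a migration is to compare the SQLite row count against the number of CSV records. This sketch runs against an in-memory database and inline sample data (same columns as the real export) so the pattern can be tried without a live server:

```python
import csv
import sqlite3
from io import StringIO

# Stand-in for the downloaded CSV text
sample = ("id,fecha_hora,matricula,propietario,tipo_vehiculo,observacion,imagen\n"
          "1,2026-03-04 15:30:45,ABC1234,Juan Pérez,Sedan,,a.jpg\n"
          "2,2026-03-04 16:15:22,XYZ5678,María García,SUV,,b.jpg\n")
records = list(csv.DictReader(StringIO(sample)))

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE registros (
    id INTEGER PRIMARY KEY, fecha_hora TEXT, matricula TEXT,
    propietario TEXT, tipo_vehiculo TEXT, observacion TEXT, imagen TEXT)""")
# DictReader preserves column order, so tuple(r.values()) lines up with the schema
conn.executemany(
    "INSERT INTO registros VALUES (?, ?, ?, ?, ?, ?, ?)",
    [tuple(r.values()) for r in records])
conn.commit()

# The imported row count should match the CSV record count
row_count = conn.execute("SELECT COUNT(*) FROM registros").fetchone()[0]
```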
File Initialization
The ensure_csv() function is called before serving the file. If data/registros.csv doesn’t exist, it creates an empty CSV with headers:
def ensure_csv():
    if not os.path.exists(DATA_PATH):
        with open(DATA_PATH, "w", newline="", encoding="utf-8") as f:
            csv.writer(f).writerow(CSV_HEADER)
This means the endpoint will always return a valid CSV file, even if no records exist:
id,fecha_hora,matricula,propietario,tipo_vehiculo,observacion,imagen
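Clients therefore don't need a special case for an empty database: a header-only export parses cleanly to zero records while the column names stay available. A small sketch:

```python
import csv
from io import StringIO

# What /descargar returns before any record exists
empty_export = "id,fecha_hora,matricula,propietario,tipo_vehiculo,observacion,imagen\n"

reader = csv.DictReader(StringIO(empty_export))
records = list(reader)      # no data rows, so this is an empty list

# Column names are still available even with zero records
columns = reader.fieldnames
```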
Character Encoding
The CSV file uses UTF-8 encoding to support international characters in names and observations. This ensures proper display of characters such as:
Spanish: á, é, í, ó, ú, ñ, ¿, ¡
Accented names: Pérez, García, López, Martínez
Special symbols: €, ©, ®
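If a client guesses the wrong encoding, these characters come out garbled, so decode the raw bytes explicitly as UTF-8. (With requests, setting `response.encoding = "utf-8"` before reading `response.text` has the same effect.) A local sketch of the difference:

```python
# A record containing accented characters, as raw UTF-8 bytes off the wire
raw = "2,2026-03-04 16:15:22,XYZ5678,María García,SUV,,b.jpg".encode("utf-8")

wrong = raw.decode("latin-1")   # misinterpreting the bytes garbles the name
right = raw.decode("utf-8")     # explicit UTF-8 decoding preserves it
```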
File Size: The CSV file grows linearly with the number of records. Each record adds approximately 100-200 bytes.
Memory Usage: The entire file is loaded into memory before sending. For very large datasets (10,000+ records), consider streaming the response.
Concurrent Access: Multiple simultaneous downloads are safe. The file is read-only during the download operation.
Integration with Other Endpoints
Full Export Workflow
import requests
import os
import csv
from io import StringIO

# 1. Download CSV
csv_response = requests.get('http://localhost:5000/descargar')
records = list(csv.DictReader(StringIO(csv_response.text)))

# 2. Download all images
os.makedirs('backup', exist_ok=True)
for record in records:
    img_url = f"http://localhost:5000/uploads/{record['imagen']}"
    img_response = requests.get(img_url)
    if img_response.status_code == 200:
        with open(f"backup/{record['imagen']}", 'wb') as f:
            f.write(img_response.content)
        print(f"Downloaded: {record['imagen']}")

print(f"\nBackup complete: {len(records)} records and images")
Security Considerations
Data Exposure: This endpoint exposes all database records without authentication. Implement access control for production environments.
Read-Only: This endpoint only reads data and doesn't modify the database, so it cannot corrupt records.
No PII Protection: The CSV may contain personal information (names). Ensure compliance with data protection regulations (GDPR, CCPA).
Recommended Enhancements
For production use:
Add authentication:
@app.route("/descargar")
@login_required
def descargar():
    # ... existing code
Add date range filtering:
@app.route("/descargar")
def descargar():
    start_date = request.args.get('start')
    end_date = request.args.get('end')
    # Filter records by date range
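The filtering step itself could look like the sketch below. It assumes `fecha_hora` values in the `YYYY-MM-DD HH:MM:SS` format documented above, whose date prefix compares correctly as a plain string; `filter_by_date` is a hypothetical helper, not part of the existing code:

```python
def filter_by_date(records, start=None, end=None):
    """Keep records whose fecha_hora date falls within [start, end] (YYYY-MM-DD)."""
    out = []
    for r in records:
        date = r["fecha_hora"][:10]  # the "YYYY-MM-DD" prefix
        if start and date < start:
            continue
        if end and date > end:
            continue
        out.append(r)
    return out

# Demo on two inline records
records = [{"fecha_hora": "2026-03-04 15:30:45", "matricula": "ABC1234"},
           {"fecha_hora": "2026-03-05 10:00:00", "matricula": "XYZ5678"}]
march_4 = filter_by_date(records, start="2026-03-04", end="2026-03-04")
```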
Add compression for large files:
import gzip
import io

# Create gzipped response
output = io.BytesIO()
with gzip.open(output, 'wt', encoding='utf-8') as f:
    # Write CSV to gzipped file
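A complete compression round trip, sketched on inline text rather than the real file, shows the pattern end to end:

```python
import gzip
import io

csv_text = "id,fecha_hora,matricula\n1,2026-03-04 15:30:45,ABC1234\n"

# Compress the CSV text into an in-memory buffer
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    f.write(csv_text)
compressed = buf.getvalue()

# Decompressing recovers the original text exactly
restored = gzip.decompress(compressed).decode("utf-8")
```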
Implement streaming for large datasets:
from flask import Response

@app.route("/descargar")
def descargar():
    def generate():
        with open(DATA_PATH, 'r', encoding='utf-8') as f:
            for line in f:
                yield line
    return Response(generate(), mimetype='text/csv')
View Records: view records in HTML format
Upload Plate: add new records to export
Delete Record: remove records before export