The Flask app (app.py) exposes a browser UI and two API endpoints for listing and classifying test-set videos. It dynamically imports test_already_extracted.py at startup to reuse SingleVideoFeatureLoader and SingleVideoClassifier.

Starting the server

python app.py
The server binds to http://0.0.0.0:5000 with debug mode enabled and logs the resolved paths at startup:
Index HTML path: /path/to/Flask Local New/index.html
User script path: /path/to/Flask Local New/test_already_extracted.py
Features dir: /path/to/Flask Local New/features_enhanced
Models dir: /path/to/Flask Local New/models_enhanced
Processed test dir: /path/to/Flask Local New/data/processed/test
If test_already_extracted.py is not found next to app.py or in /mnt/data/, the server raises a RuntimeError and will not start.
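The dynamic-import step can be sketched with `importlib` (a minimal sketch; the helper name is illustrative, and app.py's actual loader code may differ):

```python
import importlib.util
from pathlib import Path


def import_module_from_path(name: str, path: Path):
    """Load a Python module from an explicit file path."""
    spec = importlib.util.spec_from_file_location(name, path)
    if spec is None or spec.loader is None:
        raise RuntimeError(f"Cannot load {name} from {path}")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # run the module's top-level code
    return module
```

Once imported this way, classes such as `SingleVideoFeatureLoader` and `SingleVideoClassifier` are available as attributes of the returned module object.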

Configuration

All paths are configured at the top of app.py. The server first looks for directories relative to app.py (BASE_DIR), then falls back to /mnt/data/ (Linux), then D:\ (Windows).
BASE_DIR = Path(__file__).resolve().parent

FEATURES_DIR = BASE_DIR / "features_enhanced"
MODELS_DIR   = BASE_DIR / "models_enhanced"
PROCESSED_TEST_DIR = BASE_DIR / "data" / "processed" / "test"

Fallback path resolution

For each directory, if the local path does not exist, the app checks two alternative locations:
if not FEATURES_DIR.exists():
    alt  = Path("/mnt/data/features_enhanced")
    alt2 = Path("D:/features_enhanced")
    if alt.exists():
        FEATURES_DIR = alt
    elif alt2.exists():
        FEATURES_DIR = alt2
The same pattern applies to MODELS_DIR and PROCESSED_TEST_DIR. This makes the app portable across local development machines and remote Linux servers.
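The repeated pattern above can be factored into a small helper. This is a sketch, not app.py's actual code; the function name is illustrative:

```python
from pathlib import Path


def resolve_dir(local: Path, *fallbacks: str) -> Path:
    """Return the first existing directory among the local path and fallbacks."""
    if local.exists():
        return local
    for fb in fallbacks:
        p = Path(fb)
        if p.exists():
            return p
    # Keep the local path so later error messages name the expected location.
    return local


# Hypothetical usage mirroring the pattern in app.py:
# FEATURES_DIR = resolve_dir(BASE_DIR / "features_enhanced",
#                            "/mnt/data/features_enhanced",
#                            "D:/features_enhanced")
```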

Selected checkpoints

SELECTED_CHECKPOINTS = [
    "best_ensemble_model_1.pt",
    "best_ensemble_model_2.pt",
    "best_ensemble_model_3.pt",
    "best_ensemble_model_4.pt",
]
Checkpoints are loaded from MODELS_DIR. If a listed checkpoint is missing, the app logs a warning and skips it. If none are found, it falls back to any .pt file in MODELS_DIR. If that also fails, it raises a RuntimeError.
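That resolution order (selected checkpoints, then any `.pt` file, then hard failure) can be sketched as follows, assuming a hypothetical helper name:

```python
from pathlib import Path


def resolve_checkpoints(models_dir: Path, selected: list) -> list:
    """Keep the selected checkpoints that exist; otherwise fall back to any .pt file."""
    found = [models_dir / name for name in selected if (models_dir / name).exists()]
    for name in selected:
        if not (models_dir / name).exists():
            print(f"Warning: checkpoint {name} not found, skipping")
    if not found:
        # Fallback: accept any checkpoint present in the models directory.
        found = sorted(models_dir.glob("*.pt"))
    if not found:
        raise RuntimeError(f"No usable checkpoints in {models_dir}")
    return found
```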

API endpoints

GET /

Serves index.html, the browser-based classification UI.
Response: text/html, the contents of index.html.
Error: returns HTTP 500 with a plain-text message if index.html is not found.

GET /api/videos

Returns all available test videos grouped by category.
Request: no parameters.
Response:
{
  "success": true,
  "videos": {
    "Animation": ["video_001.mp4", "video_002.mp4"],
    "Flat_Content": ["video_010.mp4"],
    "Gaming": ["video_020.mp4", "video_021.mp4"],
    "Natural_Content": ["video_030.mp4"]
  }
}
success (boolean, required): true when the video list was loaded successfully.
videos (object, required): dictionary mapping each category name to the list of video filenames available in the test set.
Implementation
@app.route("/api/videos", methods=["GET"])
def api_videos():
    try:
        fl = get_feature_loader()
        all_videos, videos_by_category = fl.list_available_videos()
        return jsonify({"success": True, "videos": videos_by_category})
    except Exception as e:
        app.logger.exception("Failed to list videos")
        return jsonify({"success": False, "error": str(e), "trace": traceback.format_exc()}), 500

POST /api/classify

Classifies a single video from the test set by loading its pre-extracted features and running ensemble inference. Request (multipart/form-data)
selected_video (string, required): filename of the video to classify. Must be a key present in the feature loader's video index; obtain valid names from GET /api/videos.
use_tta (string, default "false"): whether to apply Test-Time Augmentation. Pass "true" to enable TTA (4 augmentation modes: original, reversed, sped-up, slowed-down); pass "false" or omit for standard ensemble inference.
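The four TTA modes can be sketched on a [T, D] feature sequence as below. The exact interpretations of "sped-up" and "slowed-down" are assumptions (frame subsampling and frame duplication); the real augmentations in test_already_extracted.py may differ:

```python
import numpy as np


def tta_variants(features: np.ndarray) -> list:
    """Generate four TTA views of a [T, D] feature sequence."""
    return [
        features,                        # original
        features[::-1],                  # reversed in time
        features[::2],                   # sped-up: keep every other frame
        np.repeat(features, 2, axis=0),  # slowed-down: duplicate each frame
    ]


def tta_predict(predict_fn, features: np.ndarray) -> np.ndarray:
    """Average class probabilities over all TTA views."""
    probs = [predict_fn(v) for v in tta_variants(features)]
    return np.mean(probs, axis=0)
```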
Response
{
  "success": true,
  "result": {
    "category": "Gaming",
    "confidence": 94.37,
    "model_scores": [
      {
        "model": "best_ensemble_model_1.pt",
        "confidence": 88.12,
        "probs": [2.1, 3.4, 88.12, 6.38]
      },
      {
        "model": "best_ensemble_model_2.pt",
        "confidence": 95.80,
        "probs": [1.2, 1.5, 95.80, 1.50]
      }
    ],
    "all_scores": {
      "Animation": 1.65,
      "Flat_Content": 2.45,
      "Gaming": 94.37,
      "Natural_Content": 1.53
    },
    "video_name": "video_020.mp4"
  }
}
success (boolean, required): true when classification completed without error.
result (object, required): the classification result, containing the predicted category, ensemble confidence, per-model scores, per-class scores, and the video name.
Implementation
@app.route("/api/classify", methods=["POST"])
def api_classify():
    use_tta = request.form.get("use_tta", "false").lower() == "true"
    selected_video = request.form.get("selected_video", None)

    if not selected_video:
        return jsonify({"success": False, "error": "No selected_video provided"}), 400

    fl = get_feature_loader()  # cached feature loader (same accessor as /api/videos)
    clf = get_classifier()     # cached ensemble classifier (accessor name assumed)

    features, label, category_name = fl.load_video_features(selected_video)
    if features is None:
        return jsonify({"success": False, "error": "Selected video not found in dataset"}), 400

    if not isinstance(features, torch.Tensor):
        features = torch.from_numpy(features)

    probs = clf.predict_with_tta(features) if use_tta else clf.predict_standard(features)

    final_probs     = probs.numpy().tolist()
    predicted_idx   = int(probs.argmax())
    predicted_class = clf.class_names[predicted_idx]
    predicted_conf  = float(final_probs[predicted_idx] * 100.0)

    model_scores = per_model_scores(clf, features)
    # ... build response ...
    return jsonify({"success": True, "result": result})
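The elided "build response" step assembles the payload documented above. A sketch of that assembly from the ensemble probabilities (helper and constant names are illustrative; model_scores omitted for brevity):

```python
import numpy as np

CLASS_NAMES = ["Animation", "Flat_Content", "Gaming", "Natural_Content"]


def build_result(final_probs, class_names, video_name):
    """Assemble the documented result payload from ensemble probabilities in [0, 1]."""
    probs = np.asarray(final_probs, dtype=float)
    idx = int(probs.argmax())
    return {
        "category": class_names[idx],
        "confidence": round(float(probs[idx] * 100.0), 2),
        "all_scores": {name: round(float(p * 100.0), 2)
                       for name, p in zip(class_names, probs)},
        "video_name": video_name,
    }
```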

Per-model score breakdown

The per_model_scores helper runs each loaded model independently and returns each model's softmax distribution, scaled to percentages to match the response schema, before ensemble averaging:
def per_model_scores(classifier, features):
    scores = []
    device = classifier.device
    features_batch = features.unsqueeze(0).to(device)  # [1, T, D]
    lengths = torch.tensor([features.shape[0]], device=device)

    with torch.no_grad():
        for i, model in enumerate(classifier.models):
            outputs = model(features_batch, lengths)  # [1, C]
            probs = F.softmax(outputs, dim=1).squeeze(0).cpu().numpy() * 100.0  # percentages
            model_name = Path(classifier.checkpoint_paths[i]).name
            scores.append({
                "model": model_name,
                "confidence": float(probs.max()),
                "probs": probs.tolist()
            })
    return scores
The UI uses this data to render a per-model confidence breakdown alongside the ensemble result.
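The ensemble result shown earlier is the mean of these per-model distributions. A minimal sketch of that averaging, assuming the score format produced by per_model_scores (helper name illustrative):

```python
import numpy as np


def ensemble_average(model_scores):
    """Average per-model probability vectors into the ensemble distribution."""
    mat = np.array([s["probs"] for s in model_scores], dtype=float)
    return mat.mean(axis=0)  # one probability per class
```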

Error responses

All error responses follow the same structure:
{
  "success": false,
  "error": "No selected_video provided",
  "trace": "Traceback (most recent call last):\n  ..."
}
success (boolean, required): always false for error responses.
error (string, required): human-readable error message.
trace (string, optional): full Python traceback string. Present on 500 errors; absent on 400 validation errors.
HTTP status codes used:
200: Successful response
400: Missing or invalid request parameters
500: Server-side error (initialization failure, classification error)
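Building these two error shapes (with and without a traceback) can be sketched with a small helper. This is an illustrative helper, not app.py's actual code; in the real handlers the dict is passed to jsonify with the appropriate status code:

```python
import traceback


def error_payload(message: str, include_trace: bool = False) -> dict:
    """Build the documented error body; include a trace only for 500-level errors."""
    body = {"success": False, "error": message}
    if include_trace:
        # Must be called from inside an except block for a meaningful trace.
        body["trace"] = traceback.format_exc()
    return body
```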
