PrecisionConfig

Dataclass for configuring precision, hardware simulation, and profiling settings. Fields:

- Data type used during training (e.g., "float32")
- Inference precision mode; options: "float32", "float16", or "int8"
- Clipping threshold for int8 quantization
- Random seed for reproducibility
- Enable performance profiling during training
- Enable hardware constraint simulation
- Maximum memory budget in megabytes for hardware simulation
- Speed scaling factor for hardware simulation
- Global precision mode setting
- Maximum batch size allowed
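The fields above could be declared as follows. This is a minimal sketch: the field names and default values here are illustrative assumptions, not the actual definitions in config.py.

```python
from dataclasses import dataclass


@dataclass
class PrecisionConfig:
    """Precision, hardware-simulation, and profiling settings.

    Field names and defaults are hypothetical; only the meanings
    come from the documented field descriptions.
    """

    train_dtype: str = "float32"          # data type used during training
    inference_precision: str = "float32"  # "float32", "float16", or "int8"
    int8_clip: float = 6.0                # clipping threshold for int8 quantization
    seed: int = 42                        # random seed for reproducibility
    enable_profiling: bool = False        # performance profiling during training
    simulate_hardware: bool = False       # hardware constraint simulation
    memory_budget_mb: int = 512           # max memory budget (MB) for simulation
    speed_factor: float = 1.0             # speed scaling for hardware simulation
    precision_mode: str = "float32"       # global precision mode setting
    max_batch_size: int = 256             # maximum batch size allowed


# A pre-instantiated default, analogous to DEFAULT_CONFIG below.
DEFAULT_CONFIG = PrecisionConfig()
```

Because it is a dataclass, individual experiments can override only the fields they care about, e.g. `PrecisionConfig(inference_precision="int8")`.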
DEFAULT_CONFIG

Pre-instantiated PrecisionConfig with default values.
EXPERIMENT_CONFIGS

Dictionary of predefined experiment configurations. Each config includes dataset, model architecture, training hyperparameters, and precision settings.

baseline

Standard synthetic dataset experiment:

- Dataset identifier
- Neural network layer dimensions
- Activation functions per layer
- Number of training epochs
- Learning rate
- Training batch size
- Random seed
- Numerical precision mode
- Hardware simulation mode ("off" or "on")
- Use synthetic data generation
- Number of synthetic samples to generate
real_fashion_mnist

Fashion-MNIST dataset experiment:

- Path to Fashion-MNIST training data (uses FASHION_MNIST_SPEC.train_path)
- Dataset version identifier (uses FASHION_MNIST_SPEC.version)
- Number of training epochs
- Synthetic data generation disabled for the real dataset
- Minimum required rows for validation
- Automatically download dataset if missing
- Optional SHA256 hash for integrity verification
synthetic_baseline

Synthetic baseline without auto-preparation:

- Automatic dataset preparation disabled for the synthetic baseline
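The three experiment configurations above could be laid out as a dictionary like the sketch below. The top-level keys come from the docs; the inner field names and all values are illustrative assumptions (the real entries also reference FASHION_MNIST_SPEC, which is not reproduced here).

```python
# Sketch of the experiment registry. Keys "baseline", "real_fashion_mnist",
# and "synthetic_baseline" are documented; inner names/values are hypothetical.
EXPERIMENT_CONFIGS = {
    "baseline": {
        "dataset": "synthetic",            # dataset identifier
        "layer_sizes": [784, 128, 10],     # neural network layer dimensions
        "activations": ["relu", "softmax"],  # activation functions per layer
        "epochs": 10,                      # number of training epochs
        "learning_rate": 0.01,
        "batch_size": 32,
        "seed": 42,
        "precision": "float32",            # numerical precision mode
        "hardware_sim": "off",             # "off" or "on"
        "use_synthetic": True,             # synthetic data generation
        "n_samples": 1000,                 # synthetic samples to generate
    },
    "real_fashion_mnist": {
        "epochs": 5,
        "use_synthetic": False,            # disabled for the real dataset
        "min_rows": 1000,                  # minimum rows for validation
        "auto_download": True,             # download dataset if missing
        "sha256": None,                    # optional integrity hash
    },
    "synthetic_baseline": {
        "dataset": "synthetic",
        "auto_download": False,            # no auto-preparation
    },
}
```

A caller would then select an experiment by name, e.g. `EXPERIMENT_CONFIGS["baseline"]["epochs"]`.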
Profiling Constants

- LAYER_SIZES: default layer sizes for profiling
- ACTIVATIONS: default activation functions for profiling
- Batch size used during profiling
- Output directory for profiling results
build_model()

Factory function that constructs a NeuralNetwork instance with the default profiling configuration.

Returns: a NeuralNetwork configured with LAYER_SIZES, ACTIVATIONS, and DEFAULT_CONFIG.

Source: config.py:29
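A minimal sketch of the factory and the profiling constants it consumes. The constant values, the stub PrecisionConfig, and the NeuralNetwork constructor signature are all assumptions for illustration; the real definitions live in config.py and the project's network module.

```python
from dataclasses import dataclass

# Profiling constants (values are illustrative assumptions).
LAYER_SIZES = [784, 128, 10]
ACTIVATIONS = ["relu", "softmax"]


@dataclass
class PrecisionConfig:
    # Stub with a single field for the sketch; the real dataclass has more.
    precision_mode: str = "float32"


DEFAULT_CONFIG = PrecisionConfig()


class NeuralNetwork:
    # Hypothetical constructor; the real signature is defined elsewhere
    # in the project.
    def __init__(self, layer_sizes, activations, config):
        self.layer_sizes = layer_sizes
        self.activations = activations
        self.config = config


def build_model() -> NeuralNetwork:
    """Construct a NeuralNetwork with the default profiling configuration."""
    return NeuralNetwork(LAYER_SIZES, ACTIVATIONS, DEFAULT_CONFIG)
```

Centralizing construction in a factory like this keeps profiling runs reproducible: every call yields a model wired to the same constants and the same DEFAULT_CONFIG.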