Metrics Displayed
The results table (salida) shows detailed metrics for each process. Understanding these values is crucial for evaluating scheduling algorithm performance.
Process-Level Metrics
For each process (P1, P2, P3, etc.), the following metrics are displayed:
- Llegada (Arrival Time)
- Inicio (Start Time)
- Fin (Finish Time)
- Espera (Waiting Time)
- Retorno (Turnaround Time)
Definition: The time unit when the process enters the system.
Display: The original input value you provided.
Interpretation:
- Processes with arrival time 0 are available immediately
- Later arrivals (1, 2, 3…) join the ready queue at their specified time
- Impacts waiting time calculation
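The per-process metrics are related by two simple identities, sketched below in JavaScript (the field names are hypothetical, mirroring the table's column headers):

```javascript
// Hypothetical process record using the table's column names.
const p = { llegada: 1, cpu: 3, inicio: 2, fin: 9 };

// Retorno (turnaround) = Fin - Llegada: total time in the system.
const retorno = p.fin - p.llegada; // 8

// Espera (waiting) = Retorno - CPU burst: time spent not executing.
const espera = retorno - p.cpu; // 5
```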
System-Level Metrics
Tiempo Total (Total Time): The time unit when all processes complete.
- Equals the maximum finish time among all processes
- Represents the entire simulation duration
- Lower total time doesn’t necessarily mean better algorithm (depends on context)
- Useful for comparing CPU utilization across algorithms
The simulator displays individual process metrics and total time. To calculate average waiting time or average turnaround time, manually sum the values and divide by the number of processes.
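The manual calculation can be sketched as follows (the row values here are hypothetical, not simulator output):

```javascript
// Hypothetical rows copied from the results table (salida).
const resultados = [
  { espera: 7, retorno: 12 },
  { espera: 5, retorno: 8 },
  { espera: 5, retorno: 9 },
];

const n = resultados.length;
const avgEspera = resultados.reduce((s, r) => s + r.espera, 0) / n;   // 17/3 ≈ 5.67
const avgRetorno = resultados.reduce((s, r) => s + r.retorno, 0) / n; // 29/3 ≈ 9.67
```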
How to Read the Gantt Chart
The Gantt chart (ganttLive) is a horizontal timeline showing CPU allocation over time. It’s built block-by-block during the animation.
Gantt Chart Structure
Process Labels
Each block displays:
- “P1”, “P2”, “P3”, etc.: Which process was executing during that time unit
- "-": CPU was idle (no process executing)
Visual Appearance
- Blocks are indigo-colored with rounded corners
- Font is small (text-xs) and bold
- Hover effect scales blocks to 110% for emphasis
- Blocks are evenly spaced with a 2-pixel gap
Interpreting Patterns
Continuous Blocks (Non-Preemptive)
Pattern: Long sequences of the same process label (e.g., P1-P1-P1-P1-P1)
Algorithms: FIFO, SJF (non-preemptive), Priority (non-preemptive)
Meaning: Process executed uninterrupted until completion. No context switching during execution.
Example: P1-P1-P1-P1-P2-P2-P2-P3-P3 shows P1 running for 4 units, P2 for 3 units, and P3 for 2 units without interruption.
Alternating Blocks (Preemptive)
Pattern: Frequent changes between process labels (e.g., P1-P2-P3-P1-P2)
Algorithms: Round Robin, MLFQ
Meaning: Processes are being time-shared. Context switching occurs regularly.
Example: Round Robin with quantum=2 gives each process 2 time units before switching.
Idle Periods
Pattern: Blocks showing "-" instead of process IDs
Algorithms: Any, but common when processes have staggered arrivals
Meaning: No processes were in the ready queue. CPU had nothing to execute.
Example: P1 completes at time 2, but P2 doesn't arrive until time 4, causing 2 idle units.
Burst Patterns (Round Robin)
Pattern: Repeating sequences with quantum-length segments
Algorithm: Round Robin
Meaning: Each process gets exactly quantum time units (unless it completes sooner).
Example with quantum=3: processes alternate in 3-unit bursts.
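The quantum-length segmentation can be sketched as a small helper (an illustration, not the simulator's own code):

```javascript
// Split one CPU burst into quantum-sized Gantt segments.
function segments(burst, quantum) {
  const out = [];
  while (burst > 0) {
    const run = Math.min(quantum, burst); // last segment may be shorter
    out.push(run);
    burst -= run;
  }
  return out;
}

segments(7, 3); // [3, 3, 1]: two full quantums, then the remainder
```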
Priority Shifting (MLFQ)
Pattern: A process appears less frequently over time
Algorithm: MLFQ
Meaning: The process is moving to lower-priority queues with larger quantums. It gets the CPU less often but for longer periods.
Example: P2 initially alternates with P1, then gets longer execution blocks as it moves to lower queues.
Queue Visualization
The ready queue display (colaBox) shows which processes are waiting for CPU time at the current simulation step.
Queue Display Features
Visual Appearance:
- Gray background container with rounded corners
- Each process is a separate box with padding
- Boxes have pulse animation for visual interest
- Horizontal layout with flex-wrap (wraps on narrow screens)
- Minimum height of 60px (empty queue still shows the container)
- Shows “P1”, “P2”, “P3”, etc.
- Order represents queue order (leftmost is next to execute for FIFO-based queues)
Algorithm-Specific Queue Behavior
- FIFO Queue
- SJF Queue
- Round Robin Queue
- Priority Queue
- MLFQ Queues
Characteristic: First-in, first-out order
What you'll see:
- Processes appear in arrival order
- Leftmost process is next to execute
- Queue only grows when processes arrive
- Queue shrinks from the left as processes start execution
[P2][P3]
Timeline Interpretation
The timeline is built incrementally during animation. Each step (every 500ms) corresponds to one time unit in the simulation.
Timeline Data Structure
Each step in timelineGlobal contains:
Reading Timeline Progression
Time Advancement
The tiempo field increments by 1 each step. This is the x-axis of your Gantt chart.
CPU State
The ejecutando field tells you:
- Number (0, 1, 2…): Process index currently executing (displayed as P1, P2, P3…)
- null: CPU is idle
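Putting these fields together, a single timeline step might look like the object below (the exact shape is an assumption based on the field names described on this page):

```javascript
// Assumed shape of one timelineGlobal entry.
const step = { tiempo: 4, ejecutando: 2, cola: [0, 1] };

// Gantt label for a step: index i is displayed as P(i+1); null means idle.
const label = s => (s.ejecutando === null ? "-" : `P${s.ejecutando + 1}`);

label(step);                                      // "P3"
label({ tiempo: 5, ejecutando: null, cola: [] }); // "-"
```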
Queue State
The cola field (for RR and MLFQ) shows the indices of waiting processes. For MLFQ, this may be a nested array representing multiple queue levels.
Example with Screenshot Descriptions
Let's walk through a complete Round Robin example with 3 processes.
Scenario Setup
Algorithm: Round Robin
Quantum: 2
Processes:
- P1: Llegada=0, CPU=5
- P2: Llegada=1, CPU=3
- P3: Llegada=2, CPU=4
Expected Results Table
After clicking "Simular", the results table (salida) displays:
Metrics Analysis
Process 1:
- Starts immediately (arrives at t=0, no competition)
- Finishes at t=12 (5 units of work + 7 units waiting)
- Waiting time = 12 (turnaround) - 5 (CPU) = 7 units
- Gets preempted multiple times to share CPU
Process 2:
- Arrives at t=1 while P1 is executing
- First execution at t=2 (after P1 uses its quantum)
- Finishes at t=9 with 5 units of waiting
- Relatively efficient despite arriving second
Process 3:
- Arrives at t=2, joins queue behind P2
- First execution at t=4
- Finishes at t=11
- Waiting time matches P2 despite arriving later (fairness of RR)
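Taking arrivals and bursts from the scenario, and finish times as read off the step-by-step timeline walkthrough, the metrics can be recomputed directly:

```javascript
// Finish times from the timeline (P1 runs its last unit at t=11, so fin = 12).
const procesos = [
  { llegada: 0, cpu: 5, fin: 12 }, // P1
  { llegada: 1, cpu: 3, fin: 9 },  // P2
  { llegada: 2, cpu: 4, fin: 11 }, // P3
];

const retornos = procesos.map(p => p.fin - p.llegada);        // [12, 8, 9]
const esperas = procesos.map(p => p.fin - p.llegada - p.cpu); // [7, 5, 5]
```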
Visual Timeline Description
Gantt Chart Pattern:
t=0 to t=1: P1 executes (quantum part 1)
- CPU Box: P1
- Queue: Empty (P1 is executing, no other arrivals yet)
- Gantt: First two blocks are P1
- Status: P1 uses its first quantum (2 units)
t=2 to t=3: P2 executes (quantum part 1)
- CPU Box: P2
- Queue: P3 (arrives at t=2 and joins the queue), then P1 (used its quantum, moved to back)
- Gantt: Blocks 3-4 are P2
- Status: P2 uses its first quantum (2 units)
t=4 to t=5: P3 executes (quantum part 1)
- CPU Box: P3
- Queue: P1, P2 (both waiting for next turn)
- Gantt: Blocks 5-6 are P3
- Status: P3 uses its first quantum (2 units)
t=6 to t=7: P1 executes (quantum part 2)
- CPU Box: P1
- Queue: P2, P3 (P1 came from front of queue)
- Gantt: Blocks 7-8 are P1
- Status: P1 uses another 2 units (4 total), needs 1 more
t=8: P2 executes (completes)
- CPU Box: P2
- Queue: P3, P1
- Gantt: Block 9 is P2
- Status: P2 needs only 1 unit to complete, finishes before quantum expires
t=9 to t=10: P3 executes (quantum part 2)
- CPU Box: P3
- Queue: P1 (P2 completed, removed from system)
- Gantt: Blocks 10-11 are P3
- Status: P3 uses another 2 units (4 total), completes
t=11: P1 executes (completes)
- CPU Box: P1
- Queue: Empty (only P1 remains)
- Gantt: Block 12 is P1
- Status: P1 needs 1 final unit, completes
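The whole walkthrough can be reproduced with a minimal Round Robin sketch. This is an illustration, not the simulator's actual code; it assumes that a process arriving at time t is queued ahead of the process preempted at t, which matches the queue snapshots above.

```javascript
// Minimal Round Robin sketch producing a list of Gantt labels.
function roundRobin(procesos, quantum) {
  const restante = procesos.map(p => p.cpu); // remaining burst per process
  const gantt = [];
  const cola = [];
  let t = 0;
  const llegan = time =>
    procesos.forEach((p, i) => { if (p.llegada === time) cola.push(i); });
  llegan(0);
  while (restante.some(r => r > 0)) {
    if (cola.length === 0) { // nothing ready: idle unit
      gantt.push("-");
      t += 1;
      llegan(t);
      continue;
    }
    const i = cola.shift();
    let usado = 0;
    while (usado < quantum && restante[i] > 0) {
      gantt.push(`P${i + 1}`);
      restante[i] -= 1;
      usado += 1;
      t += 1;
      llegan(t); // arrivals join the queue before the preempted process
    }
    if (restante[i] > 0) cola.push(i); // quantum expired: requeue at the back
  }
  return gantt;
}

const gantt = roundRobin(
  [{ llegada: 0, cpu: 5 }, { llegada: 1, cpu: 3 }, { llegada: 2, cpu: 4 }],
  2
);
// gantt: P1 P1 P2 P2 P3 P3 P1 P1 P2 P3 P3 P1 (matches the walkthrough)
```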
Key Observations
Fair Time Sharing
Each process gets regular CPU time. No process waits excessively long between executions.
Context Switching
The CPU switches between processes 6 times (changes in the Gantt chart). More switching than FIFO but ensures responsiveness.
No Starvation
All processes complete. No process is indefinitely delayed (contrast with Priority scheduling where low-priority processes can starve).
Queue Dynamics
The ready queue is constantly changing as processes cycle through. The animation clearly shows this rotation.
Comparing Algorithms
To evaluate algorithm performance, run the same process set with different algorithms and compare:
Average Waiting Time
Sum all waiting times and divide by the number of processes.
Lower is better - indicates processes spend less time waiting.
Which algorithms excel:
- SJF typically has the lowest average waiting time
- FIFO depends heavily on arrival order and CPU burst distribution
- RR has moderate waiting time, trades efficiency for fairness
Average Turnaround Time
Sum all turnaround times and divide by the number of processes.
Lower is better - indicates faster overall completion.
Which algorithms excel:
- SJF minimizes average turnaround time mathematically
- FIFO can be good if short processes arrive first
- RR has higher turnaround due to context switching overhead
CPU Utilization
Count idle blocks in the Gantt chart vs. total blocks.
Higher is better - less wasted CPU time.
Affected by:
- Process arrival patterns (staggered arrivals cause more idle time)
- Algorithm choice (all work-conserving algorithms idle at the same times, so utilization is identical for a given process set)
- Timeline length (idle time / total time)
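Counting idle blocks can be sketched from the Gantt label list (values here are hypothetical):

```javascript
// "-" marks an idle time unit in the Gantt chart.
const gantt = ["P1", "P1", "-", "-", "P2", "P2", "P2"];
const idle = gantt.filter(b => b === "-").length;         // 2
const utilizacion = (gantt.length - idle) / gantt.length; // 5/7 ≈ 0.71
```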
Fairness
Variance in waiting times across processes.
Lower variance is better - indicates more equitable treatment.
Which algorithms excel:
- RR is most fair, all processes get regular CPU access
- Priority and SJF can be very unfair (some processes wait much longer)
- FIFO fairness depends on arrival order
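The variance can be computed from the waiting-time column like so (hypothetical values):

```javascript
const esperas = [7, 5, 5]; // waiting times from the results table
const media = esperas.reduce((s, e) => s + e, 0) / esperas.length;
const varianza =
  esperas.reduce((s, e) => s + (e - media) ** 2, 0) / esperas.length; // 8/9 ≈ 0.89
```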
Response Time
Time from arrival to first execution (Inicio - Llegada).
Lower is better - indicates better interactivity.
Which algorithms excel:
- RR provides excellent response time (processes get CPU quickly)
- Priority gives good response for high-priority processes
- FIFO and SJF may delay later arrivals significantly
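Response time is not a column of the results table, but it follows directly from two columns that are (row values hypothetical):

```javascript
// Response time = Inicio - Llegada, per process.
const filas = [
  { llegada: 0, inicio: 0 }, // P1
  { llegada: 1, inicio: 2 }, // P2
  { llegada: 2, inicio: 4 }, // P3
];
const respuesta = filas.map(f => f.inicio - f.llegada); // [0, 1, 2]
```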
Performance Insights
Use the results to gain insights into real-world scheduling:
FIFO (First In First Out):
- Simple, no overhead
- Suffers from "convoy effect" (short processes wait for long ones)
- Good when processes have similar lengths
SJF (Shortest Job First):
- Optimal for minimizing average waiting time
- Requires knowing CPU burst times in advance (often impossible in real systems)
- Can starve long processes if short ones keep arriving
Round Robin:
- Excellent for time-sharing systems
- Good response time for all processes
- Quantum choice is critical: too small = excessive context switching, too large = degenerates to FIFO
Priority:
- Reflects real-world importance (e.g., system processes vs. user processes)
- Risk of starvation for low-priority processes
- Can combine with aging (increase priority over time) to prevent starvation
MLFQ (Multilevel Feedback Queue):
- Adapts to process behavior
- Short processes get better treatment (stay in high-priority queues)
- Long processes gradually move to lower priority
- Balances interactivity and fairness
- Complex but powerful
The simulator provides a simplified environment. Real operating systems use much more complex scheduling algorithms with additional factors like I/O blocking, thread priorities, CPU affinity, and dynamic priority adjustments.