fire_ml.py¶
MLP fire detector with Dice+BCE loss. Trains on spectral features (T4, T11, delta-T, SWIR) and evaluates on a held-out flight.
fire_ml.py - ML-based fire detection using Dice Loss.
Trains a small neural network to classify fire vs. non-fire pixels from MASTER L1B data using 4 features:
T4: Brightness temperature at 3.9 um (fire channel) [K]
T11: Brightness temperature at 11.25 um (background channel) [K]
dT: T4 - T11 spectral difference [K]
SWIR: Radiance at 2.16 um (solar reflection channel) [W/m2/sr/um]
SWIR helps distinguish solar reflection false positives from real fire: sun-heated rock reflects strongly in SWIR, while fire emission at 2.16 um is relatively low compared to its 3.9 um signal.
The loss function is Soft Dice Loss:
Loss = 1 - 2*TP / (2*TP + FP + FN)
This loss operates on absolute TP/FP/FN counts — total pixel count never appears, so it is not diluted by adding more background pixels. Every false positive and every missed fire directly degrades the score.
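A toy calculation makes the dilution point concrete (the counts below are illustrative, not taken from the dataset): the same 100 false positives leave the Dice loss unchanged, while plain accuracy is flattered by every extra background pixel.

```python
def dice_loss(tp, fp, fn):
    """Dice loss from absolute counts: 1 - 2*TP / (2*TP + FP + FN)."""
    return 1.0 - 2.0 * tp / (2.0 * tp + fp + fn)

def accuracy(tp, fp, fn, tn):
    """Plain accuracy, for contrast: dominated by the TN count."""
    return (tp + tn) / (tp + fp + fn + tn)

# Same detection errors, two scene sizes:
dice_loss(900, 100, 0)           # ~0.053 in both cases: TN never appears
accuracy(900, 100, 0, 0)         # 0.90 in a 1,000-pixel scene
accuracy(900, 100, 0, 999000)    # 0.9999 in a 1,000,000-pixel scene
```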
- Train/test split is by flight:
Train: flights 03 (pre-burn), 04 (day burn), 05 (night burn)
Test: flight 06 (day burn, unseen)
Labels are pseudo-labels from the threshold detector (T4 > 325 K by day / 310 K at night, AND ΔT > 10 K); the ML model can still learn a more nuanced decision boundary than these hard thresholds.
- Usage:
python fire_ml.py
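The pseudo-labeling rule can be sketched as a vectorized mask. This is a minimal sketch assuming the 325 K / 310 K thresholds correspond to day vs. night flights; `threshold_labels` is an illustrative name, not the module's actual detector.

```python
import numpy as np

def threshold_labels(T4, T11, day_night="D"):
    """Pseudo-labels per the quoted rule: T4 > 325 K (day) / 310 K (night) AND T4 - T11 > 10 K."""
    t4_min = 325.0 if day_night == "D" else 310.0
    return ((T4 > t4_min) & ((T4 - T11) > 10.0)).astype(np.float32)

T4 = np.array([330.0, 330.0, 300.0])
T11 = np.array([290.0, 325.0, 285.0])
threshold_labels(T4, T11, "D")  # only the first pixel is both hot and spectrally anomalous
```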
- fire_ml.load_flight_data(flights)[source]¶
Build mosaics for all flights, return dict of grids and metadata.
- Returns:
{flight_num: {'T4': grid, 'T11': grid, 'dT': grid, 'labels': grid, 'lat_axis': arr, 'lon_axis': arr, 'day_night': str, 'comment': str}}
- Return type:
dict
- fire_ml.extract_pixels(flight_data, flight_nums)[source]¶
Extract valid pixels from specified flights into flat arrays.
- Returns:
X: (N, 4) float32 — features [T4, T11, ΔT, SWIR]
y: (N,) float32 — binary labels {0, 1}
- Return type:
(X, y)
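A minimal sketch of what this flattening step might look like, assuming per-channel grids of equal shape with NaN marking invalid pixels; the function name, the `'SWIR'` key, and the validity convention are assumptions, not taken from the module.

```python
import numpy as np

def extract_pixels_sketch(grids):
    """Flatten feature grids to (N, 4), keeping only pixels finite in every channel.

    `grids` maps 'T4'/'T11'/'dT'/'SWIR'/'labels' to 2-D arrays of equal shape."""
    feats = np.stack([grids[k].ravel() for k in ("T4", "T11", "dT", "SWIR")], axis=1)
    y = grids["labels"].ravel()
    valid = np.isfinite(feats).all(axis=1) & np.isfinite(y)
    return feats[valid].astype(np.float32), y[valid].astype(np.float32)
```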
- fire_ml.oversample_minority(X, y, ratio=1.0)[source]¶
Oversample the minority (fire) class to balance training data.
- Parameters:
ratio – target fire/no-fire ratio. 1.0 = equal counts.
- Returns:
X_bal, y_bal: balanced arrays with fire pixels repeated.
- Return type:
(X_bal, y_bal)
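One way to implement this balancing is random repetition of fire-pixel indices; a minimal sketch under that assumption (the real function may repeat deterministically instead):

```python
import numpy as np

def oversample_minority_sketch(X, y, ratio=1.0, rng=None):
    """Repeat fire pixels (y == 1) until the fire/no-fire count ratio reaches `ratio`."""
    rng = np.random.default_rng(rng)
    fire = np.flatnonzero(y == 1)
    calm = np.flatnonzero(y == 0)
    n_target = int(ratio * calm.size)
    # Draw extra fire indices with replacement to close the gap.
    extra = rng.choice(fire, size=max(n_target - fire.size, 0), replace=True)
    idx = np.concatenate([calm, fire, extra])
    rng.shuffle(idx)
    return X[idx], y[idx]
```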
- class fire_ml.FireDetector(*args, **kwargs)[source]¶
Pixel-level fire detection MLP.
Architecture: 4 → 64 → 32 → 1 (2,337 parameters). Learns a nonlinear decision boundary in (T4, T11, ΔT, SWIR) space.
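As a back-of-the-envelope check on the parameter count (my arithmetic, not from the source): a fully connected 4 → 64 → 32 → 1 network has 2,336 weights; the quoted 2,337 matches a bias on the output unit only, while biases on every layer would give 2,433.

```python
layers = [(4, 64), (64, 32), (32, 1)]  # 4 -> 64 -> 32 -> 1
weights = sum(n_in * n_out for n_in, n_out in layers)          # 2,336 weights
all_biases = weights + sum(n_out for _, n_out in layers)       # + 97 biases = 2,433
out_bias_only = weights + 1                                    # + output bias = 2,337
(all_biases, out_bias_only)  # (2433, 2337)
```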
- class fire_ml.SoftDiceLoss(*args, **kwargs)[source]¶
Differentiable Dice Loss for binary classification.
Loss = 1 - (2·TP + smooth) / (2·TP + FP + FN + smooth)
Soft TP/FP/FN use sigmoid probabilities, making the loss fully differentiable for gradient descent.
True Negatives do NOT appear in the formula — the loss is not diluted by background pixel count. 100 FP out of 1,000 pixels gives the same loss as 100 FP out of 1,000,000 pixels.
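The soft counts can be written out directly; a minimal NumPy sketch of the formula above (the module itself presumably implements this as a PyTorch loss module, so names and signature here are illustrative):

```python
import numpy as np

def soft_dice_loss(logits, targets, smooth=1.0):
    """Soft Dice: sigmoid probabilities stand in for hard predictions,
    so TP/FP/FN become differentiable sums."""
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid probabilities
    tp = np.sum(p * targets)            # soft true positives
    fp = np.sum(p * (1.0 - targets))    # soft false positives
    fn = np.sum((1.0 - p) * targets)    # soft false negatives
    return 1.0 - (2.0 * tp + smooth) / (2.0 * tp + fp + fn + smooth)
```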
- class fire_ml.DiceBCELoss(*args, **kwargs)[source]¶
Combined Dice + BCE loss.
Dice Loss drives the global TP/FP/FN metric toward optimal. BCE provides per-pixel gradient signals that help early training converge when Dice alone gets stuck in a local minimum.
Loss = dice_weight * DiceLoss + bce_weight * BCE
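The weighted sum can be sketched the same way; again a NumPy illustration rather than the module's actual (presumably PyTorch) implementation, with default weights assumed to be 1.0:

```python
import numpy as np

def dice_bce_loss(logits, targets, dice_weight=1.0, bce_weight=1.0, smooth=1.0, eps=1e-7):
    """Weighted sum of soft Dice loss and mean binary cross-entropy."""
    p = 1.0 / (1.0 + np.exp(-logits))
    tp = np.sum(p * targets)
    fp = np.sum(p * (1.0 - targets))
    fn = np.sum((1.0 - p) * targets)
    dice = 1.0 - (2.0 * tp + smooth) / (2.0 * tp + fp + fn + smooth)
    p = np.clip(p, eps, 1.0 - eps)  # guard the logs
    bce = -np.mean(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))
    return dice_weight * dice + bce_weight * bce
```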
- fire_ml.train_model(X_train, y_train, n_epochs=200, lr=0.001, batch_size=65536)[source]¶
Train FireDetector with Dice+BCE Loss.
Uses mini-batch training with large batches (64K) for stable Dice Loss gradients while remaining memory-efficient.
- Returns:
model: trained FireDetector
loss_history: list of per-epoch average loss values
- Return type:
(model, loss_history)
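The mini-batch scheme can be sketched as a shuffled index iterator (the training step itself is omitted; the 65,536 default matches the signature above, everything else is illustrative):

```python
import numpy as np

def iter_minibatches(n, batch_size=65536, rng=None):
    """Yield index arrays covering all n samples exactly once per epoch, in shuffled order."""
    idx = np.random.default_rng(rng).permutation(n)
    for start in range(0, n, batch_size):
        yield idx[start : start + batch_size]

# One epoch over 200,000 pixels: three full 64K batches plus a remainder.
[len(b) for b in iter_minibatches(200000)]  # [65536, 65536, 65536, 3392]
```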
- fire_ml.evaluate(model, X, y, threshold=0.5)[source]¶
Evaluate model with absolute count metrics.
- Returns:
metrics: dict with TP, FP, FN, TN, precision, recall, dice_score
probs: (N,) predicted probabilities
- Return type:
(metrics, probs)
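The count-based metrics follow directly from thresholded probabilities; a minimal sketch (function name and the `max(..., 1)` zero-division guard are my choices, not necessarily the module's):

```python
import numpy as np

def count_metrics(probs, y, threshold=0.5):
    """Absolute-count metrics: TN is reported but never enters the Dice score."""
    pred = probs >= threshold
    pos = y == 1
    tp = int(np.sum(pred & pos))
    fp = int(np.sum(pred & ~pos))
    fn = int(np.sum(~pred & pos))
    tn = int(np.sum(~pred & ~pos))
    return {
        "TP": tp, "FP": fp, "FN": fn, "TN": tn,
        "precision": tp / max(tp + fp, 1),
        "recall": tp / max(tp + fn, 1),
        "dice_score": 2 * tp / max(2 * tp + fp + fn, 1),
    }
```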
- fire_ml.sklearn_baseline(X_train, y_train, X_test, y_test)[source]¶
Logistic regression baseline for comparison.
- fire_ml.plot_decision_boundary(model, train_mean, train_std, X_test, y_test, day_night='D')[source]¶
Plot learned decision boundary in T4 vs ΔT space.
Shows the ML boundary (colored contour) alongside the hard threshold lines used by the current detector, with scatter data overlaid.
- fire_ml.plot_prediction_map(flight_data, model, train_mean, train_std, flight_num)[source]¶
Plot spatial fire predictions on a flight mosaic.
- 2x2 layout:
Top-left: ML fire predictions (red) on gray T4 background
Top-right: Threshold fire labels (red) on gray T4 background
Bottom-left: Agreement/disagreement map
Bottom-right: ML probability heatmap
- fire_ml.plot_fp_fn_comparison(flight_data, model, train_mean, train_std, flight_num)[source]¶
Plot spatial locations of FP and FN pixels for a flight.
- Compared against the threshold pseudo-labels:
TP (green): both ML and threshold detect
FP (blue): ML detects but threshold does not
FN (orange): threshold detects but ML misses
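The three categories above reduce to a small mask comparison; a minimal sketch of how such an agreement map might be built from the two boolean grids (integer codes here are arbitrary stand-ins for the plot colors):

```python
import numpy as np

def agreement_map(ml_mask, label_mask):
    """0 = both negative, 1 = TP (both fire), 2 = FP (ML only), 3 = FN (threshold only)."""
    out = np.zeros(ml_mask.shape, dtype=np.uint8)
    out[ml_mask & label_mask] = 1   # TP: both ML and threshold detect
    out[ml_mask & ~label_mask] = 2  # FP: ML detects but threshold does not
    out[~ml_mask & label_mask] = 3  # FN: threshold detects but ML misses
    return out
```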