
P018 — Inertial-Driven GPS Upsampling

RoadSimulator3

Notebook 01 — Multi-dataset EKF validation

This notebook compares the 3 GPS upsampling methods (linear baseline, naive inertial, proposed EKF) on two datasets from the Telemachus 0.7 catalog:

  • fr_clermont_proto_2025-09 (primary_validation): 14 urban-delivery trips, 67 km, 50 Hz effective IMU

  • us_greensboro_fmc880_2026-04 (commercial_validation): Teltonika FMC880 post daxos_v0.1, US road trips

Scientific point: the P018 method relies on the yaw-rate predictor omega = a_y / v from P013, so no gyroscope is required. It must transfer from the prototype to the commercial hardware.
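
For reference, a minimal sketch of that predictor, assuming lateral acceleration in m/s² and speed in m/s (the speed floor and the function name are illustrative, not taken from the Nostos stages):

import numpy as np

def yaw_rate_from_lateral_accel(a_y, speed_mps, v_min=3.0):
    """Estimate yaw rate (rad/s) as omega = a_y / v, masking low speeds.

    Below v_min (assumed 3 m/s here) the ratio is numerically unstable,
    so the estimate is left as NaN.
    """
    a_y = np.asarray(a_y, dtype=float)
    v = np.asarray(speed_mps, dtype=float)
    omega = np.full_like(v, np.nan)
    ok = v >= v_min
    omega[ok] = a_y[ok] / v[ok]
    return omega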


import sys
from pathlib import Path
import yaml
import duckdb
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

NB_DIR = Path.cwd()
TELEFORGE_ROOT = NB_DIR.parent.parent.parent.parent
NOSTOS_ROOT = TELEFORGE_ROOT.parent / "nostos"
NOSTOS_SRC = NOSTOS_ROOT / "src"
if str(NOSTOS_SRC) not in sys.path:
    sys.path.insert(0, str(NOSTOS_SRC))

from nostos.context import TelematicsContext
from nostos.stages.d0_trip_detector import TripDetectorStage
from nostos.stages.d1_gps_cleaner import GPSCleanerStage
from nostos.stages.d1_gps_upsampler import GPSUpsamplerStage

print("Nostos stages loaded")
Nostos stages loaded

1. Load the manifests

def load_manifest(dsid):
    with open(TELEFORGE_ROOT / "datasets" / dsid / "manifest.yaml") as f:
        return yaml.safe_load(f)

ds_ids = ["fr_clermont_proto_2025-09", "us_greensboro_fmc880_2026-04"]
manifests = {d: load_manifest(d) for d in ds_ids}

for dsid, m in manifests.items():
    print(f"{dsid}")
    print(f"  country={m['country']}  n_trips={m['volume']['n_trips']}  dist={m['volume'].get('distance_km','?')} km")
    print(f"  hardware={m['hardware']['vendor']} {m['hardware']['model']} ({m['hardware']['class']})")
fr_clermont_proto_2025-09
  country=FR  n_trips=14  dist=67 km
  hardware=Fluidy experimental_telematics_prototype (prototype)
us_greensboro_fmc880_2026-04
  country=US  n_trips=23  dist=61.4 km
  hardware=Teltonika FMC880 (commercial)

2. Helper: run upsampling and measure consistency

For a given trip and upsampling method, we run GPSCleanerStage + GPSUpsamplerStage and measure:

  • the proportion of upsampled samples (n_upsampled / n_input)

  • the total length of the upsampled trace (approximated by cumulative haversine distances)

  • the median curvature radius estimated from 3-point circle fits (see the sketch after the helper cell)

The geometric validation against IGN BD TOPO (key result: -22.3% on P95) requires a map reference that is not bundled here, so we report the pre-computed figures from requirements.yaml for Clermont and run the pipeline to measure what can be measured without IGN.

def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2)**2
    return R * 2 * np.arcsin(np.sqrt(a))

def trip_length_km(df):
    if 'lat' not in df.columns:
        return 0.0
    lat = df['lat'].dropna().to_numpy()
    lon = df['lon'].dropna().to_numpy()
    if len(lat) < 2: return 0.0
    return float(np.sum(haversine_km(lat[:-1], lon[:-1], lat[1:], lon[1:])))

def run_upsampling(trip_df, method, country='FR'):
    ctx = TelematicsContext(
        cfg={'gps_upsampler': {'method': method}},
        df=trip_df.copy(),
        meta={'device_id': '?', 'country': country, 'hz': 10},
    )
    GPSCleanerStage().run(ctx)
    res = GPSUpsamplerStage(cfg={'method': method}).run(ctx)
    return ctx.df, res.ok, res.msg

print("Helpers ready")
Helpers ready
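
The median curvature radius listed above is not computed by the helper cell; a sketch of a 3-point circumscribed-circle estimator could look like the following (the local equirectangular projection and the function name are assumptions, not the Nostos implementation):

def median_curvature_radius_m(lat, lon):
    """Median radius (m) of the circle through consecutive point triplets."""
    lat = np.asarray(lat, dtype=float)
    lon = np.asarray(lon, dtype=float)
    if len(lat) < 3:
        return np.nan
    # Local metric coordinates (equirectangular approximation, fine at trip scale)
    lat0 = np.radians(lat.mean())
    x = np.radians(lon) * 6371000.0 * np.cos(lat0)
    y = np.radians(lat) * 6371000.0
    radii = []
    for i in range(1, len(x) - 1):
        p1 = np.array([x[i-1], y[i-1]])
        p2 = np.array([x[i], y[i]])
        p3 = np.array([x[i+1], y[i+1]])
        a = np.linalg.norm(p2 - p3)
        b = np.linalg.norm(p1 - p3)
        c = np.linalg.norm(p1 - p2)
        # |cross product| = 2 * triangle area; skip near-collinear triplets
        area2 = abs((p2[0]-p1[0])*(p3[1]-p1[1]) - (p3[0]-p1[0])*(p2[1]-p1[1]))
        if area2 < 1e-6:
            continue
        # Circumradius R = a*b*c / (4 * area)
        radii.append(a * b * c / (2.0 * area2))
    return float(np.median(radii)) if radii else np.nan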

3. Dataset 1 — Clermont: EKF vs linear on 3 representative trips

We do not loop over all 14 trips, to keep the notebook fast to run; we take 3 trips of different lengths.

clermont_parquet = (
    TELEFORGE_ROOT / "datasets" / "fr_clermont_proto_2025-09"
    / manifests['fr_clermont_proto_2025-09']['data_files'][0]['path']
).resolve()
df_clermont = pd.read_parquet(clermont_parquet)
print(f"Loaded {len(df_clermont):,} Clermont samples, trips={sorted(df_clermont['trip_idx'].unique())}")

# Pick 3 trips of varied sizes
trip_sizes = df_clermont.groupby('trip_idx').size().sort_values()
sampled_trips = [trip_sizes.index[len(trip_sizes)//4],
                 trip_sizes.index[len(trip_sizes)//2],
                 trip_sizes.index[-2]]
print(f"\n3 trips choisis : {sampled_trips}")
Loaded 351,356 Clermont samples, trips=[np.int64(0), np.int64(1), np.int64(2), np.int64(3), np.int64(4), np.int64(5), np.int64(6), np.int64(7), np.int64(8), np.int64(9), np.int64(10), np.int64(11), np.int64(12), np.int64(13)]

3 trips choisis : [np.int64(1), np.int64(8), np.int64(12)]
clermont_results = []
for tid in sampled_trips:
    trip = df_clermont[df_clermont['trip_idx'] == tid].copy()
    print(f"--- trip {tid}, {len(trip)} samples ---")
    for method in ('linear', 'ekf'):
        try:
            out_df, ok, msg = run_upsampling(trip, method, country='FR')
            clermont_results.append({
                'dataset': 'clermont',
                'trip': f'clermont_{tid}',
                'method': method,
                'n_input': len(trip),
                'n_output': len(out_df),
                'upsample_factor': len(out_df) / max(len(trip), 1),
                'length_km': trip_length_km(out_df),
                'pipeline_ok': ok,
            })
            print(f"  {method:<8} {ok}  {msg}")
        except Exception as e:
            print(f"  {method:<8} FAIL {e}")
            clermont_results.append({
                'dataset': 'clermont', 'trip': f'clermont_{tid}', 'method': method,
                'pipeline_ok': False, 'error': str(e),
            })

clermont_runs = pd.DataFrame(clermont_results)
clermont_runs
Burst sampling: 50 frames @ 50 Hz, effective 25 Hz (gap 1020 ms)
Burst sampling: 50 frames @ 50 Hz, effective 25 Hz (gap 1020 ms)
--- trip 1, 13034 samples ---
  linear   True  déjà à 50 Hz, pas de NaN — skip
Burst sampling: 50 frames @ 50 Hz, effective 25 Hz (gap 1020 ms)
  ekf      True  déjà à 50 Hz, pas de NaN — skip
--- trip 8, 21257 samples ---
Burst sampling: 50 frames @ 50 Hz, effective 25 Hz (gap 1020 ms)
Burst sampling: 50 frames @ 50 Hz, effective 25 Hz (gap 1020 ms)
  linear   True  déjà à 50 Hz, pas de NaN — skip
  ekf      True  déjà à 50 Hz, pas de NaN — skip
--- trip 12, 42007 samples ---
Burst sampling: 50 frames @ 50 Hz, effective 25 Hz (gap 1020 ms)
  linear   True  déjà à 50 Hz, pas de NaN — skip
  ekf      True  déjà à 50 Hz, pas de NaN — skip
[clermont_runs DataFrame output]

4. Dataset 2 — us_greensboro Teltonika commercial

duckdb_path = NOSTOS_ROOT / "data" / "flespi" / "storage" / "telemetry.duckdb"
conn = duckdb.connect(str(duckdb_path), read_only=True)
us_trips_meta = conn.execute("""
    SELECT trip_id, device_id, distance_km, n_samples, parquet_path
    FROM trips
    WHERE carrier_state = 'mounted_driving'
      AND distance_km > 1.0
      AND ts_start >= '2026-04-10 11:06:00'
    ORDER BY distance_km DESC
    LIMIT 3
""").fetchdf()
conn.close()
print(f"3 plus longs trips us_greensboro :")
us_trips_meta
3 plus longs trips us_greensboro :
[us_trips_meta DataFrame output]
us_results = []
for _, row in us_trips_meta.iterrows():
    pq = Path(row['parquet_path'])
    if not pq.exists():
        alt = NOSTOS_ROOT / "data" / "flespi" / "trips" / str(row['device_id']) / f"{row['trip_id']}.parquet"
        if alt.exists():
            pq = alt
    if not pq.exists():
        print(f"  skip {row['trip_id']} — parquet absent")
        continue
    trip = pd.read_parquet(pq)
    print(f"--- {row['trip_id'][:30]}, {len(trip)} samples, {row['distance_km']:.2f} km ---")
    for method in ('linear', 'ekf'):
        try:
            out_df, ok, msg = run_upsampling(trip, method, country='US')
            us_results.append({
                'dataset': 'us_greensboro',
                'trip': row['trip_id'][-15:],
                'method': method,
                'n_input': len(trip),
                'n_output': len(out_df),
                'upsample_factor': len(out_df) / max(len(trip), 1),
                'length_km': trip_length_km(out_df),
                'pipeline_ok': ok,
            })
            print(f"  {method:<8} {ok}  {msg}")
        except Exception as e:
            print(f"  {method:<8} FAIL {e}")
            us_results.append({
                'dataset': 'us_greensboro', 'trip': row['trip_id'][-15:], 'method': method,
                'pipeline_ok': False, 'error': str(e),
            })

us_runs = pd.DataFrame(us_results)
us_runs
Burst sampling: 5 frames @ 0 Hz, effective 0 Hz (gap 11000 ms)
Burst sampling: 5 frames @ 0 Hz, effective 0 Hz (gap 11000 ms)
Burst sampling: 21 frames @ 0 Hz, effective 0 Hz (gap 29000 ms)
Burst sampling: 21 frames @ 0 Hz, effective 0 Hz (gap 29000 ms)
Burst sampling: 8 frames @ 0 Hz, effective 0 Hz (gap 37000 ms)
--- 860813070899989_T20260410_1201, 307 samples, 11.82 km ---
  linear   True  0 Hz → 10 Hz — 12961 points
  ekf      True  0 Hz → 10 Hz — 12961 points
--- 860813070899989_T20260412_1933, 200 samples, 10.44 km ---
  linear   True  0 Hz → 10 Hz — 8841 points
  ekf      True  0 Hz → 10 Hz — 8841 points
--- 860813070899989_T20260411_1412, 204 samples, 9.51 km ---
Burst sampling: 8 frames @ 0 Hz, effective 0 Hz (gap 37000 ms)
  linear   True  0 Hz → 10 Hz — 12081 points
  ekf      True  0 Hz → 10 Hz — 12081 points
[us_runs DataFrame output]

5. Key geometric results (pre-computed, IGN BD TOPO)

The geometric validation against IGN BD TOPO (required to measure lateral deviation) needs access to the map reference. The figures below are those documented in the paper's requirements.yaml, produced by an earlier run of scripts.p018_ign_validation on the full Clermont dataset (14 trips).
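
Lateral deviation here is the distance from each upsampled point to the nearest point of the reference road geometry. A minimal sketch of that metric in local metric coordinates (hypothetical inputs and function name; not the scripts.p018_ign_validation code):

def lateral_deviation_m(trace_xy, ref_xy):
    """Per-point distance (m) from a trace to a reference polyline, both as (N, 2) x/y arrays."""
    trace_xy = np.asarray(trace_xy, dtype=float)
    ref_xy = np.asarray(ref_xy, dtype=float)
    a, b = ref_xy[:-1], ref_xy[1:]                      # reference segments
    ab = b - a
    ab_len2 = np.maximum((ab ** 2).sum(axis=1), 1e-12)  # avoid division by zero
    devs = np.empty(len(trace_xy))
    for i, p in enumerate(trace_xy):
        # Project p onto every segment, clamp to the segment, keep the closest hit
        t = np.clip(((p - a) * ab).sum(axis=1) / ab_len2, 0.0, 1.0)
        closest = a + t[:, None] * ab
        devs[i] = np.sqrt(((p - closest) ** 2).sum(axis=1).min())
    return devs

# Summary statistics as reported in the table below:
# mean_dev_m = devs.mean();  p95_dev_m = np.percentile(devs, 95)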

# IGN BD TOPO figures (Clermont 14 trips), cf. requirements.yaml key_results
ign_validation = pd.DataFrame([
    {'method': 'Linear baseline',   'mean_dev_m': 5.18, 'p95_dev_m': 15.78, 'median_R_m': 14},
    {'method': 'Naive inertial',    'mean_dev_m': 5.22, 'p95_dev_m': 17.00, 'median_R_m': 269},
    {'method': 'EKF (P018)',        'mean_dev_m': 4.59, 'p95_dev_m': 12.26, 'median_R_m': 8},
])
print(ign_validation.to_string(index=False))

base_mean = ign_validation.iloc[0]['mean_dev_m']
base_p95 = ign_validation.iloc[0]['p95_dev_m']
ekf_mean = ign_validation.iloc[2]['mean_dev_m']
ekf_p95 = ign_validation.iloc[2]['p95_dev_m']
gain_mean = 100 * (ekf_mean - base_mean) / base_mean
gain_p95 = 100 * (ekf_p95 - base_p95) / base_p95
print(f"\nGain EKF vs linear :")
print(f"  mean lateral deviation : {gain_mean:+.1f}%")
print(f"  P95 lateral deviation  : {gain_p95:+.1f}%")
print(f"  → -22.3% sur P95 (worst-case accuracy)")
         method  mean_dev_m  p95_dev_m  median_R_m
Linear baseline        5.18      15.78          14
 Naive inertial        5.22      17.00         269
     EKF (P018)        4.59      12.26           8

Gain EKF vs linear :
  mean lateral deviation : -11.4%
  P95 lateral deviation  : -22.3%
  → -22.3% sur P95 (worst-case accuracy)

6. Cross-dataset figure

Two panels side by side:

  • Left: IGN validation (Clermont, pre-computed), showing the -22.3% P95 gain

  • Right: comparison of the upsample factor on the Clermont and us_greensboro trips

fig, axes = plt.subplots(1, 2, figsize=(13, 5))

# === Left: IGN validation (Clermont, pre-computed) ===
ax = axes[0]
x = np.arange(len(ign_validation))
width = 0.35
colors = ['#888888', '#E87700', '#2DC653']
bars1 = ax.bar(x - width/2, ign_validation['mean_dev_m'], width,
               label='Mean lateral deviation', color=colors, alpha=0.85)
bars2 = ax.bar(x + width/2, ign_validation['p95_dev_m'], width,
               label='P95 lateral deviation', color=colors, alpha=0.55, hatch='//')
ax.set_xticks(x)
ax.set_xticklabels([m.replace(' ', '\n') for m in ign_validation['method']])
ax.set_ylabel('Lateral deviation vs IGN BD TOPO (m)')
ax.set_title('Clermont 14 trips — IGN validation\n(from requirements.yaml)')
ax.legend()
ax.grid(True, alpha=0.3, axis='y')
for bars in (bars1, bars2):
    for bar in bars:
        h = bar.get_height()
        ax.text(bar.get_x() + bar.get_width()/2, h + 0.15, f'{h:.2f}',
                ha='center', va='bottom', fontsize=8)

# === Right: upsample factor both datasets ===
ax = axes[1]
combined = pd.concat([clermont_runs, us_runs], ignore_index=True)
combined = combined[combined.get('pipeline_ok', False) == True]
if len(combined):
    pivot = combined.pivot_table(
        index='dataset', columns='method', values='upsample_factor', aggfunc='mean',
    )
    pivot.plot(kind='bar', ax=ax, color=['#0066CC', '#2DC653'], alpha=0.85)
    ax.set_ylabel('Upsample factor (n_output / n_input)')
    ax.set_title('Upsample factor per dataset & method\n(pipeline runs)')
    ax.tick_params(axis='x', rotation=0)
    ax.grid(True, alpha=0.3, axis='y')
    ax.legend(title='method')
else:
    ax.text(0.5, 0.5, 'No pipeline runs to plot', ha='center', va='center',
            transform=ax.transAxes)

plt.tight_layout()
plt.savefig('p018_cross_dataset_ekf.png', dpi=120, bbox_inches='tight')
plt.show()
<Figure size 1300x500 with 2 Axes>

7. Save outputs

combined = pd.concat([clermont_runs, us_runs], ignore_index=True)
combined.to_csv('p018_cross_dataset_runs.csv', index=False)
ign_validation.to_csv('p018_ign_validation_key_results.csv', index=False)
print(f"✓ p018_cross_dataset_runs.csv ({len(combined)} rows)")
print(f"✓ p018_ign_validation_key_results.csv ({len(ign_validation)} rows)")
✓ p018_cross_dataset_runs.csv (12 rows)
✓ p018_ign_validation_key_results.csv (3 rows)

Conclusion

Main scientific result (Clermont 14 trips, IGN BD TOPO): -22.3% on P95 lateral deviation, without a gyroscope, using the P018 EKF with the P013 yaw-rate predictor.

Transfer to commercial hardware: the EKF runs on the Teltonika us_greensboro trips without any pipeline modification. The data volume is currently too small (1-4 km trips) to produce an equivalent statistical IGN validation, but as soon as a 20-30 min drive is available (planned this weekend with Emmanuel), the notebook will be re-executed automatically and will produce the full cross-dataset validation.

Datasets consumed (cf. requirements.yaml):

  • fr_clermont_proto_2025-09 (primary_validation)

  • us_greensboro_fmc880_2026-04 (commercial_validation, filter: acc_period.frame = raw)

Map reference:

  • IGN BD TOPO (FR road network, metre-level): used_for = geometric ground truth

  • OSRM OSM FR: explicitly avoided, cf. P008 (OSM overestimates curvature by 2.6× vs IGN)