
Southwest's flight-ops team needed real-time visibility into the Elastic Flat Files (EFFs) that ground systems emit during every flight. Those files previously arrived by FTP in batches and were analyzed hours later; we replaced that batch path with a Kafka-based stream that surfaces phase, position, and anomalies within seconds.
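The first hop of that stream is turning raw EFF records into structured events. A minimal sketch of that step, with the caveat that the real EFF schema is proprietary: the pipe-delimited layout and field names below are illustrative assumptions, not the actual format.

```python
from dataclasses import dataclass

@dataclass
class FlightEvent:
    tail: str    # aircraft tail number
    phase: str   # e.g. TAXI, CLIMB, CRUISE
    lat: float
    lon: float

def parse_eff_line(line: str) -> FlightEvent:
    """Parse one hypothetical EFF record into a stream-ready event.

    Assumes a pipe-delimited layout for illustration; the production
    parser handles the real (proprietary) EFF structure.
    """
    tail, phase, lat, lon = line.strip().split("|")
    return FlightEvent(tail=tail, phase=phase, lat=float(lat), lon=float(lon))
```

In the pipeline, events like these are what get published to the Kafka topic; keeping the parse step a pure function made it trivial to unit-test before anything touched a broker.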
Sr. Application Engineer / Data Scientist on a team of 12. Designed the Kafka topology, owned the EFF → S3 → Kafka pipeline, and led the Python → Java port of the production hot path.
RabbitMQ handles fan-out between ingestion services (analytics, cold-storage, and alerting consumers); Kafka carries the high-throughput tracking events. No service knows the others exist.
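The decoupling above is the classic fanout-exchange pattern: producers publish to an exchange by name and never learn who consumes. An in-process sketch of that contract (illustrative only; in production this role is played by RabbitMQ, not application code):

```python
from typing import Callable

class FanoutExchange:
    """Toy model of a fanout exchange: every bound consumer gets
    every message, and the publisher knows none of them."""

    def __init__(self) -> None:
        self._consumers: list[Callable[[dict], None]] = []

    def bind(self, consumer: Callable[[dict], None]) -> None:
        self._consumers.append(consumer)

    def publish(self, message: dict) -> None:
        # Deliver a copy of the message to every consumer, in bind order.
        for consumer in self._consumers:
            consumer(message)
```

Binding the analytics, cold-storage, and alerting consumers to one exchange means a new consumer can be added without touching the ingestion services, which is exactly why no service needs to know about another.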
Built a LocalStack S3 + Kafka stack inside a multi-pod Kubernetes setup that mirrors prod. Every engineer gets a fully isolated environment in under 30 seconds.
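One way per-engineer isolation like this is typically stamped out is a Kubernetes Namespace keyed on the engineer's username; a minimal sketch that renders such a manifest (the `effstream-` naming and labels are hypothetical, not the team's actual manifests):

```python
def namespace_manifest(engineer: str) -> dict:
    """Render a Kubernetes Namespace manifest for one engineer's
    isolated environment. Names/labels here are illustrative."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": f"effstream-{engineer}",  # hypothetical naming scheme
            "labels": {"app": "effstream", "owner": engineer},
        },
    }
```

Templating the whole stack (LocalStack, Kafka, pipeline pods) into a fresh namespace per engineer is what makes sub-30-second, conflict-free environments possible.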
Prototyped transforms in Python for fast iteration, then ported the proven ones to Java with Gradle builds and 95% unit-test coverage: Python's iteration speed with Java's runtime performance.
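As an example of the kind of transform this workflow covered, here is a sketch of a phase classifier of the sort one might prototype in Python before a Java port. The thresholds and phase names are illustrative assumptions, not the production logic:

```python
def flight_phase(altitude_ft: float, vertical_rate_fpm: float) -> str:
    """Classify a flight phase from altitude and vertical rate.

    Thresholds are illustrative placeholders, not production values.
    """
    if altitude_ft < 100:
        return "GROUND"
    if vertical_rate_fpm > 300:
        return "CLIMB"
    if vertical_rate_fpm < -300:
        return "DESCENT"
    return "CRUISE"
```

A pure function like this ports to Java almost mechanically, and the Python unit tests translate one-for-one into the JUnit suite, which is what made the 95% coverage target cheap to hit.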
From a 4-hour batch cycle to sub-second visibility on every flight event.
Java ports landed with unit suites at ~95% coverage that catch regressions before merge.
An entire constellation of cron jobs and shared-filesystem handoffs collapsed into one event-driven topology.