I design data systems that take raw data from ingestion to business intelligence: fast, reliable, and scalable.
I'm a Big Data Engineer with 3+ years of experience building high-performance pipelines that turn chaotic, multi-source data into clean, reliable datasets. My engineering philosophy is simple: every byte of data has a story, and my job is to build the infrastructure that lets that story be told, at scale, without error.
My origin story is unusual. I first encountered Big Data inside the walls of India's central bank — the Reserve Bank of India. The project demanded strict on-premise infrastructure (the data couldn't leave the vault), and it was on Hadoop that I fell in love with distributed systems.
I was a core engineer on an end-to-end SDMX-compliant ETL system that replaced the RBI's manual Excel/XBRL workflows, connecting India's entire regulated banking sector to the IMF, WTO, and BIS in real time. That project taught me that data engineering isn't just about moving bytes; it's about enabling institutional trust at global scale.
Today at Xome, the domain changed from banking to real estate, but the engineering discipline didn't. I work on large-scale data platforms, keeping parallel data workflows running across distributed cloud processing and established transformation frameworks so the systems stay stable, observable, and ready to evolve. The tools changed; the obsession didn't.
"The RBI vault had no internet. No cloud. No comfort zone. That's where I built my foundations — and where I found my calling inside a Hadoop cluster."
The tools behind billions of records. Select a pipeline to trace its path, then hover any node to read its role.
Four eras. One continuous climb. Hover any dot on the curve to read the full story behind each milestone — from algorithmic obsessions in college to engineering data platforms at scale.
Bachelor of Technology in Computer Science at NBKR Institute of Science and Technology, graduating with a CGPA of 8.11. Experimented across ML and Full Stack before finding my calling in data. Competitive programming became an obsession — earning a global rank of #30 out of 7,000+ participants on CodeChef. The algorithmic mindset I built here would define every pipeline I'd engineer later: think at scale, optimize relentlessly.
Worked on SDMX-compliant financial data exchange systems within a regulated banking environment, helping transform manual reporting workflows into automated pipelines aligned with international standards. The platform delivered structured data to global institutions including the IMF, WTO, and BIS, built with the strict engineering discipline a highly regulated domain demands.
Transitioned from regulated banking systems into large-scale real-estate data platforms, modernizing parallel data workflows while preserving engineering discipline across domains. My role focuses on stabilizing high-throughput pipelines, scaling distributed processing, and maintaining reliable data foundations that support business intelligence and machine learning.
From Hadoop vaults to Azure clouds. The tools evolved, but the obsession stayed constant: building data systems that don't break trust. Currently learning MongoDB for the next phase — API-driven data delivery with on-demand schema mapping and real-time transformations. The pipeline never stops flowing.
Whether you're hiring, collaborating, or want to talk data architecture — I'm always open to a good conversation.