Senior/Lead Data Engineer - Snowflake & Kafka

Gurgaon | Technology | Full-time | Partially remote

Apply by: No close date

About Us

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.

We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.

 

Lead Data Engineer (Snowflake, Kafka, Azure)

Job Responsibilities

We are seeking a Lead Data Engineer to design and deliver scalable, high-performance data platforms for real-time and batch analytics. The ideal candidate brings deep expertise in Snowflake, strong Kafka streaming experience, and the ability to build and optimise cloud-scale pipelines.

 

Snowflake Cloud Data Warehouse Integration / Data Engineering (Primary)

  • Design and optimise data ingestion pipelines from multiple sources into Snowflake, ensuring high availability, scalability, and cost efficiency.
  • Implement ELT/ETL patterns, partitioning, clustering, and performance tuning for large datasets in Snowflake.
  • Ensure reliable, cost-efficient, and high-performance data availability for analytics and BI.

Design and Develop Data Pipelines

  • Architect, design, and build scalable batch and real-time data pipelines to process large-scale data using Kafka, Azure Data Engineering services, and PySpark.
  • Ensure seamless integration of real-time streaming data from Kafka into Snowflake Cloud Data Warehouse.
  • Implement both streaming (low-latency) and batch (high-volume) processing pipelines, optimised for performance and reliability.

Real-Time Data Streaming (Kafka)

  • Design, develop, and implement event-driven architectures using Kafka.
  • Develop robust mechanisms for topic design, stream partitioning, consumer groups, schema management, and monitoring.
  • Ensure high-throughput, low-latency stream processing and data reliability.

Collaboration with Cross-functional Teams

  • Partner with data scientists, ML engineers, and business stakeholders to deliver high-quality datasets.
  • Translate business requirements into scalable, real-time, and reusable data engineering solutions.

 

Required Skills and Qualifications

  • 5+ years of data engineering experience, including development and design for large-scale data projects and enterprise-scale Snowflake solutions.
  • Strong expertise in Snowflake – ELT/ETL pipelines, performance tuning, query optimisation, and Snowflake-specific features (streams, tasks, warehouses).
  • Hands-on experience with Kafka – topic design, partitioning, schema registry, consumer/producer tuning, and real-time data processing.
  • Advanced SQL skills with proven ability to design and optimise queries and transformations at scale.
  • Programming expertise in Python and PySpark, with the ability to build distributed, high-performance data processing frameworks.
  • Proven experience delivering real-time + batch data pipelines in a cloud environment.
  • Working knowledge of the Azure Data Engineering stack – Data Factory, Event Hub, Functions, Synapse, Data Lake – and scaling/load-balancing strategies.

 

Preferred Skills

  • Familiarity with Azure Data Engineering stack (ADF, Event Hub, Synapse, Data Lake).
  • Knowledge of stream processing frameworks (Spark Streaming, Flink).
  • Exposure to ML/AI integration or metadata-driven data management.

Strong communication and leadership skills to work with global teams in fast-paced consulting projects.