Data Engineering Architect

Delhi NCR, Bangalore | Technology | Full-time | Partially remote

Apply by: No close date

Company Overview:  

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.  

We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.  

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and strategic partnerships with top-tier technology providers. Be a part of an Awesome Tribe!

As a Data Engineering Architect within the product team, you will serve as a seasoned member of an agile team, designing, building, and maintaining scalable and efficient data pipelines on the Databricks platform using SQL and Python. You will work closely with data scientists, software developers, analysts, and other stakeholders to ensure the delivery of high-quality data solutions that meet the needs of the business. You will report to the Principal/Lead Data Engineer.

Data Engineering Architect
We are seeking a highly skilled and experienced Data Architect with a strong background in the retail domain and exceptional programming abilities. As a Data Architect, you will play a pivotal role in designing, implementing, and optimizing data architecture to support our retail business operations and analytics initiatives. Your expertise in Spark programming, optimization techniques, and familiarity with Databricks and CI/CD practices will be instrumental in ensuring the efficient and effective management of our data ecosystem.
Responsibilities:
1. Collaborate with cross-functional teams to understand business requirements and translate them into scalable data architecture solutions.
2. Design and develop data models, data integration processes, and data pipelines to capture, transform, and load structured and unstructured data from various retail sources.
3. Program hands-on in Spark to develop and optimize data processing applications and analytics workflows.
4. Apply optimization techniques to enhance the performance and efficiency of data processing and analytical tasks.
5. Evaluate and implement appropriate tools and technologies, including Databricks, to streamline data operations and ensure scalability and reliability.
6. Work closely with data engineers to ensure data integrity, consistency, and accessibility across the organization.
7. Define and enforce best practices for data governance and data management, including data quality, metadata management, and data security.
8. Collaborate with DevOps teams to establish and maintain CI/CD pipelines for data engineering and analytics workflows.
9. Conduct regular performance monitoring and tuning of data systems to ensure optimal efficiency and stability.
10. Peer-review team members' deliverables.
11. Own change and release management.
12. Stay updated with the latest advancements and trends in the retail domain, data architecture, and programming languages to drive continuous improvement.
Requirements:
1. At least 8 years of experience in the data engineering domain.
2. Proven experience as a Data Architect, preferably within the retail industry.
3. Strong programming skills, with expertise in PySpark and optimization techniques.
4. Hands-on experience with Databricks, Delta Lake, and their components for data processing and analytics.
5. Hands-on experience in data modeling, data integration, and ETL/ELT processes.
6. Experience working with GitLab pipelines and an in-depth understanding of CI/CD pipeline design.
7. Experience with at least one cloud platform (AWS, Azure, or GCP), preferably Azure.
8. Experience with Azure data engineering services such as ADLS, ADF, and Synapse.
9. Experience with data governance, data quality, and metadata management.
10. Strong SQL experience.
11. Strong analytical and problem-solving abilities with a detail-oriented mindset.
12. Excellent communication and collaboration skills to work effectively with cross-functional teams.
13. Ability to adapt to a fast-paced and evolving environment while managing multiple priorities.
Good to have:
1. Expertise in Infrastructure as Code.
2. Expertise in designing distributed data systems.
3. Expertise in software engineering.

What We Offer

1. Competitive salaries with flexi benefits.

2. Group mediclaim insurance and a personal accident policy.

3. 40+ paid leaves per year.

4. Quarterly Learning and Development budget for certifications.