Providing data engineering solutions from raw data extraction through to curated models. Focus areas are the design of data warehouse models and the development of data ingestion and transformation workflows that push data into those models. Every implementation must be tested against pre-defined criteria and in compliance with our data architecture.
Skills
- Azure Databricks
- Azure DevOps
- PySpark Frameworks
- ETL
- Data Engineering
Requirements
- Job Role: Data Engineer
- Job Type: Full Time
- Workplace Type: Onsite
- Industry: Information Technology & Services
Secondary locations
Not provided
Responsibilities
- Lead a team of data engineers and guide them toward data strategies aligned with business data needs.
- Design, develop, and optimize end-to-end data workflows with Databricks and Azure Data Factory to support data ingestion, transformation, and loading processes.
- Leverage Azure Data Factory to build scalable data pipelines and orchestrate workflows that efficiently connect various data sources to the data lake.
- Use Spark (PySpark or Spark SQL) and Python to build high-performance data workflows, following best practices for coding standards and efficiency.
- Continuously monitor and optimize data processing performance, identifying and resolving bottlenecks to ensure reliability and high availability.
- Develop and maintain data processing scripts using Python for automation, data manipulation, and transformation tasks.
- Apply SQL expertise to write efficient queries for data extraction, transformation, and analysis, ensuring optimized performance across large datasets.
- Develop and manage data warehousing solutions to enable efficient data storage, retrieval, and analysis of large datasets.
- Design schemas and data models using SQL for data warehousing that support both transactional and analytical workloads.
- Apply industry best practices for data warehousing, ensuring data quality, integrity, and accessibility.
- Collaborate with technical and business stakeholders to understand data requirements and translate them into technical solutions.
- Monitor data workloads, take corrective actions, and communicate proactively with the team.
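The warehousing and SQL responsibilities above can be sketched with a minimal star-schema example. This is an illustrative sketch only: the table and column names are hypothetical, and an in-memory SQLite database stands in for the Azure-based warehouse the role actually targets.

```python
import sqlite3

# In-memory database as a lightweight stand-in for a data warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A minimal star schema: one fact table referencing one dimension table.
cur.executescript("""
CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT NOT NULL
);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES dim_product(product_id),
    quantity   INTEGER NOT NULL,
    amount     REAL NOT NULL
);
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 3, 30.0), (2, 2, 1, 15.0), (3, 1, 2, 20.0)])

# An analytical query: revenue by product, the kind of aggregation
# a dimensional warehouse model is designed to serve efficiently.
cur.execute("""
    SELECT p.product_name, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.product_name
    ORDER BY revenue DESC
""")
revenue = cur.fetchall()
print(revenue)  # [('Widget', 50.0), ('Gadget', 15.0)]
conn.close()
```

The same fact/dimension split carries over directly to Spark SQL or PySpark DataFrames on Databricks; only the storage layer and the execution engine change.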
Other Requirements
- Bachelor’s degree in computer science or a closely related field is required.
- Minimum of 4 years of experience working on data engineering projects.
- Strong experience with Python, SQL, and PySpark.
- Strong experience in designing, implementing, and maintaining data solutions on Azure Databricks.
- Strong experience in designing, implementing, and maintaining data pipelines and orchestrations on Azure Data Factory.
- Strong knowledge of data modelling and data warehousing fundamentals.
- Experience with data modelling and data security concepts.
Good to have
- Experience in the retail industry domain is desirable.
- Understanding of CI/CD pipelines and Azure DevOps tooling is desirable.
- Understanding of Agile development methodologies is desirable.
- Strong communication and teamwork skills.
- Willingness to travel – domestic / international.
- Knowledge of ISMS principles and best practices.
About the Company
We at Tamcherry - Zaportiv deliver multiple types of tailored solutions to our customers, depending on their requirements.
The perfect blend of mature processes, flexible delivery models, effective project management, and broad technology and domain expertise enables Tamcherry - Zaportiv to provide best-in-class delivery to our customers.
We boast an eminent lineup of well-seasoned, vetted professionals with the potential to support your business growth.