Data Engineer (Senior/Lead)
Experience: 4 to 10 Years
Salary Range: Best in Industry Standards
Position Summary:
As a Data Engineer, you will be an integral part of our Data Engineering team supporting an event-driven, serverless data engineering pipeline on the AWS cloud, responsible for assisting in the end-to-end analysis, development, and maintenance of data pipelines and systems (DataOps). You will work closely with fellow data engineers and production support to ensure the availability and reliability of data for analytics and business intelligence purposes.
Mandatory Skills – Python | SQL | Data Warehousing – Snowflake, Redshift, Google BigQuery, RDS (Any 1) | ETL – Data Factory, AWS Glue (Any 1) | Communication Skills | Team Handling
Add-On Skills – dbt | Denodo | Wherescape | Informatica | AWS DMS | AWS Lambda | AWS ECS | Docker | AWS SNS | AWS SQS | AWS Kinesis | Power BI (or any other relevant tool)
Requirements
- At least 4 years of experience in data warehousing and BI systems.
- Extensive hands-on experience with Snowflake and strong programming skills in Python.
- Proficiency in SQL.
- Experience with any cloud databases such as Snowflake, Redshift, Google BigQuery, RDS, etc.
- Knowledge of dbt for cloud databases.
- Experience with AWS services including SNS, SQS, ECS, Kinesis, and Lambda, along with Docker.
- Strong understanding of ETL processes and data warehousing concepts.
- Experience with version control systems (e.g., Git, Bitbucket) and collaborative development practices in an agile framework.
- Experience with Scrum methodologies.
- Knowledge of Denodo, data cataloging tools, and data quality mechanisms is a plus.
- Strong team player with excellent communication skills.
- Certifications in AWS, Azure, or Snowflake are highly desirable.
If this excites you, please fill out the form to start the application process and we will be in touch.