AWS Data Engineer Job Description

AWS Data Engineer Job Description Template

Our company is looking for an AWS Data Engineer to join our team.

Responsibilities:

  • Participate in efforts to design and build rapid Proof-of-Concept (POC) solutions and services;
  • Build applications using Python, SQL, Databricks and AWS;
  • Be a key contributor to the design and development work of the Marketing product team;
  • Apply a working understanding of Agile, Scrum, Design Thinking, and Lean Startup principles;
  • Apply knowledge of basic principles, methods, and practices to simple and moderately complex assignments;
  • Proactively identify and implement opportunities to automate tasks and develop reusable frameworks;
  • Act as a run manager and provide Run/DevOps support;
  • Adhere to standard methodologies for coding, testing, and designing reusable code/components;
  • Participate in sprint planning meetings and provide estimates for technical implementation;
  • Contribute to the exploration and understanding of new tools and techniques, and propose improvements to the data pipeline;
  • Work as a data engineer within the US Value & Access IS team that uses several Data, Search and AWS technologies;
  • Implement standardized, automated operational and quality control processes to deliver accurate and timely data and reporting to meet or exceed SLAs;
  • Collaborate with other engineering team members to ensure all services are reliable, maintainable, and well integrated into existing platforms;
  • Review functional and technical designs to identify areas of risk and/or missing requirements;
  • Provide technical write-ups and drawings to promote the proposed solutions.

Requirements:

  • 3-5 years of experience automating DevOps builds using GitLab/Bitbucket/Jenkins/Maven;
  • Experience working directly with technical and business teams;
  • 3-5 years of experience with the AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues;
  • 3-5 years of experience with batch job scheduling and identifying data/job dependencies;
  • 3-5 years of experience in data engineering on the AWS platform using Python;
  • Familiarity with AWS services such as EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway;
  • A techno-functional role requiring 8+ years of total experience;
  • Familiarity with DevOps CI/CD tools such as Git and Jenkins, and with Linux and shell scripting;
  • Ability to learn quickly and to stay organized and detail-oriented;
  • Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow;
  • Automated testing experience using MUnit and HP ALM/Selenium/Appium/SoapUI;
  • Experience in Software Engineering and Development;
  • Understanding of database schema design;
  • Proficiency in at least one programming language (Python, Java, or Scala);
  • Experience developing and supporting web applications, including familiarity with web technologies and frameworks (D3.js, React.js).