Data Engineer II

Data Engineer II Job Description Template

Our company is looking for a Data Engineer II to join our team.

Responsibilities:

  • Partner with business analysts to define, develop, and automate data quality checks;
  • Design, build, and maintain tools to increase the productivity of application development and client-facing teams;
  • Design and develop highly scalable and extensible data pipelines from internal and external sources;
  • Provide in-depth troubleshooting to resolve errors and performance issues, including tier 2 production support;
  • Perform optimization of data environments using techniques like intelligent sampling and caching;
  • Engage in conversations with internal users to understand their line of business;
  • Focus on performance tuning, optimization, and scalability to keep the environment efficient;
  • Track data consumption patterns to proactively identify new requirements or refinements, and report against preset baselines;
  • Build the required data and reporting pipelines using the internal ETL tools;
  • Implement processes to ensure data quality in the database systems;
  • Design, implement, lead and manage large-scale, enterprise-wide and complex projects;
  • Design, architect, and implement new source-of-truth datasets, in partnership with analytics and business teams;
  • Assess impact of schema changes across the managed databases to ensure data quality and that resources and aggregations remain accurate;
  • Understand the data landscape of our products;
  • Work on cross-functional teams to design, develop, and deploy data-driven applications and products, particularly within the space of data science.

Requirements:

  • Strong communication and cross-group collaboration skills;
  • BS or MS in Computer Science or a relevant field;
  • Experience with Linux/UNIX to process large data sets;
  • Ability to deal with ambiguity;
  • 2+ years of experience with one or more modern programming languages (Python, Ruby, Java, etc.);
  • Experience using Agile/Scrum methodologies to iterate quickly on product changes, developing user stories and working through backlogs;
  • Previous leadership experience (e.g., Scrum Master in a Scrum team) is a plus;
  • 7+ years of experience with detailed knowledge of data warehouse technical architectures, ETL/ELT, reporting/analytic tools, and data security;
  • Experience with cloud BI production implementations;
  • Ability to design, manage, and implement simple data flows and information architectures for financial institutions;
  • Knowledge of popular data discovery, analytics, and BI software tools such as Microsoft Power BI for semantic-layer-based data discovery;
  • Ability to work with data science teams to refine and optimize data science and machine learning models and algorithms;
  • Advanced knowledge of relational database systems, columnar databases and NoSQL;
  • 2+ years of Enterprise Information Management (EIM) development experience;
  • Broad knowledge of Big Data, Azure, Data Vault, Tableau, and other emerging technologies.