Big Data Engineer Job Description Template
Our company is looking for a Big Data Engineer to join our team.
Responsibilities:
- Design data governance and manage metadata and data tags, with robust data encryption and data sharing capabilities;
- Defining data retention policies;
- Read, extract, transform, stage, and load data to multiple targets, including Hadoop, Hive, and BigQuery;
- Design and develop applications utilizing the Spark and Hadoop frameworks or GCP components;
- Demonstrate a high level of curiosity and keep abreast of the latest technologies;
- Regularly required to use hands to finger, handle or feel objects, tools or controls, and reach with hands or arms;
- Collaborating with peers and seniors both within their team and across the organization;
- Occasionally required to stand, kneel or stoop, and lift and/or move up to 25 pounds;
- Design, develop, test, and debug large-scale, complex platforms using big data technologies;
- Thorough knowledge of Agile software delivery models (Kanban, Scrum, LeSS, SAFe, etc.);
- Assemble large, complex data sets that meet functional / non-functional business requirements;
- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products;
- Work with data and analytics experts to strive for greater functionality in our data systems;
- Work with stakeholders from both the business and technology to assist with data-related technical issues and support their data infrastructure needs;
- Design, build, and launch efficient and reliable data pipelines to move data (in both large and small volumes) to our data warehouses.
Requirements:
- DevOps experience, including Kubernetes and Docker containers;
- Experience with design and coding in a Big Data environment;
- Hands-on expertise with application design, software development and automated testing;
- Knowledge of, and ability to implement, workflows/schedulers in Oozie;
- Experience building big data solutions using Hadoop technologies;
- Healthcare experience preferred;
- Familiarity with and hands-on experience loading data using Sqoop;
- Excellent self-management and problem-solving skills;
- Hands-on experience with the Microsoft BI stack, including Power BI;
- Bachelor’s Degree in Computer Science, Computer Engineering or a closely related field;
- Analytical and problem-solving skills applied to the Big Data domain;
- Extensive knowledge of UNIX/Linux;
- Understanding of, and experience implementing, Flume processes;
- Ability to handle several projects with different priorities simultaneously in a fast-paced environment;
- Demonstrated communication and collaboration skills, and an ability to work with team members to learn and grow.