Analytics Engineer

Analytics Engineer Job Description Template

Our company is looking for an Analytics Engineer to join our team.


Responsibilities:

  • Work on all aspects of the design, development, validation, scaling, and delivery of analytical solutions;
  • This role is based in St. Louis, MO; relocation assistance will be available for qualified candidates;
  • Tune data workers and algorithms to scale horizontally;
  • Collaborate with APC and P&E to streamline data storage and access patterns;
  • Work on the deployment, delivery and expansion of Analytics breeding algorithms;
  • Manage analytics activities based on harvest schedule;
  • Work with breeding/regional leads to maintain analytics harvest schedule and deliver all advancement related analysis;
  • Act as point of contact for domain/data related issues;
  • Own and modify algorithms to expand to multiple crops and regions with minimal Data Scientist involvement;
  • Collaborate with interdisciplinary scientists to gather requirements for data pipelines;
  • Visa sponsorship may be offered for this role;
  • Build data expertise, best practices and own data quality for all analytical data needs;
  • Maintain and improve our Looker model to ensure that stakeholders can access the data they need in a clear and reliable way;
  • Manage user roles and permissions for Redshift and Looker;
  • Integrate and productionize analyst and data science models as needed.


Requirements:

  • Working knowledge of data visualization tools such as Tableau, QlikView, or Domo is preferred;
  • Experience conducting requirements analysis, meeting with business stakeholders, and applying solutions to customer challenges;
  • Strong leadership and customer engagement skills;
  • Overall knowledge of MHE technologies and warehouse systems or similar domains is preferred;
  • Working knowledge of T-SQL is required;
  • Working knowledge of advanced analytic tools such as SAS, R, or Python is required;
  • Working knowledge of cloud-based technologies is preferred;
  • Write SQL that is performant, iterable, and easy to debug (dbt experience is a bonus);
  • Minimum of a Master’s Degree in Computer Science, Electrical Engineering, or a closely related field;
  • Experience working with large data sets;
  • Experience working with distributed computing tools (MapReduce, Hadoop, Hive, HBase, Scala, Spark, etc.);
  • Design the data model to enable fast and accurate analysis;
  • 5+ years of relevant software development experience;
  • 2+ years of network and database administration experience with R, Python, Java, and/or Scala.