Data Science Intern Job Description


Our company is looking for a Data Science Intern to join our team.

Responsibilities:

  • Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions;
  • Present your findings to leaders spanning a wide range of technical expertise;
  • Build classifiers, clustering algorithms, and other machine learning models;
  • Define reliable, controlled, automated analytic processes;
  • Scale and grow Coupa’s big data platform;
  • Develop data-driven objectives to assess risks and controls;
  • Analyze healthcare data from disparate data sources;
  • Work in an agile environment where quick iterations and good feedback are a way of life;
  • Contribute to the ideation and brainstorming processes required to develop game-changing products and programs in healthcare;
  • Execute analytical work such as segmentation, forecasting, simulation, and mathematical programming;
  • Document and present results;
  • Prioritize and organize your own work to meet deadlines;
  • Continuously look for, and execute upon, opportunities to improve the quality of our data, infrastructure and products;
  • Translate complex findings into compelling narratives;
  • Design and develop data products to assist consumers with decisions around investments, financial planning, and advice.

Requirements:

  • Experience using statistical computer languages (R, Python, SQL, etc.) to query and manipulate data;
  • Strong problem-solving skills with an emphasis on product development;
  • This is a paid, undergraduate-level internship;
  • Ability to work at a computer for extended periods;
  • Part-Time, 25 hours/week, timing and start date flexible;
  • Experience with web application development;
  • Experience working with large data sets and data analytics;
  • Undergraduate or graduate student pursuing a STEM-related degree;
  • Expected to graduate in approximately one year and seriously considering industry roles, not just academia;
  • Technologies used: Python, Pandas, Spark, AWS (EMR, EC2, RDS, S3), Jupyter, Docker, Git;
  • You can present something that you have built firsthand and are proud of (e.g., school project, thesis, independent project);
  • Programming competency in Python, R, and SQL is a plus;
  • Preferred: experience with data storage and querying technologies such as Spark and Hadoop;
  • Experience translating research questions or business requirements into actionable work.