Average salary: $144,490 per year
- ...information retrieval concepts, relevance metrics, and evaluation methods. Familiarity with large-scale data processing (e.g., Spark, Hadoop) is a plus. Key Responsibilities: Design, develop, and implement search ranking models using Learn to Rank approaches....
- ...Frameworks: Java, Python, PySpark Databases: SQL (Oracle, PostgreSQL), NoSQL (MongoDB, Cassandra), Snowflake Big Data Tools: Spark, Hadoop, Hive DevOps & CI/CD: Git, Jenkins, Docker, Kubernetes, Terraform, Airflow Cloud: AWS / Azure (as applicable) Performance Tuning:...
$110.5k - $149.5k
- ...Knowledge of NLP, computer vision, or reinforcement learning techniques applied to defense. Familiarity with Apache Spark, Hadoop, and distributed data frameworks. Background in real-time messaging protocols (e.g., MQTT, ZeroMQ, ROS) for edge-to-core communication...
- ...experimental design (A/B testing) Strong data engineering capabilities including SQL/NoSQL database programming, distributed computing tools (Hadoop, Spark, Kafka), data pipeline development, and experience with cloud platforms (AWS, Azure, Google Cloud Platform) Production ML...
- ...Tech stack. Experienced with Infrastructure as Code (IaC). Experience with big data technologies such as Apache Spark or Hadoop. Stay informed about the ethical implications of machine learning, e.g., selection bias. Model Training Data Analytics...
- ...drift monitoring. Use Azure Data Factory (ADF) and Azure Databricks for orchestrated, scalable data processing; use AWS EMR for Hadoop/Spark workloads supporting AI features. Build Agentic AI Solutions Design secure tool-calling and multi-agent orchestration...
- ...programming and 3+ years of MLOps experience in production environments. ~5+ years with Big Data platforms such as BigQuery or Hadoop and 3+ years with PySpark. ~2+ years building APIs, preferably with FastAPI, and integrating with Google Cloud Platform/Azure or...
- ...with cloud platforms (AWS, Azure, or Google Cloud Platform). Knowledge of ETL/ELT pipelines and big data technologies (Spark, Hadoop). Experience with AI/ML frameworks (TensorFlow, PyTorch, or similar). Strong programming skills (Python, SQL, or Scala)....
- ...Data Engineer (PySpark, Hadoop, Scala) - Walmart - Sunnyvale, CA (hybrid) Skill Set: PySpark, Hadoop, Scala, ETL. Day to Day: Working on Walmart signals and tables. Preparing and processing raw data from users and creating tables for Walmart....
- ...Experience with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). Strong analytical skills and the ability to work with complex datasets....
- ...Cloud. Familiarity with version control (Git) and CI/CD for data pipelines. Familiarity with Big Data technologies such as Hadoop, Spark, and Kafka is a plus. Excellent problem-solving and analytical skills. Strong communication and collaboration...
- ...Develop and optimize ETL/ELT processes, data flows, and infrastructure leveraging AWS, SQL, and platforms such as Databricks, Redshift, Hadoop, and Airflow. Assemble and manage large, complex datasets to meet functional and non-functional business requirements....
- ...with Google Cloud Platform, especially BigQuery. Prior experience with Teradata. Familiarity with the Hadoop ecosystem. Exposure to tools such as Dremio and distributed storage systems. Cloud certifications (Google Cloud Platform preferred...
- ...solutions in virtualized environments. ~ Experience in one or more of the following: Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or SRE. ~ Networking - Experience delivering end-to-end networking designs and architectures that include DNS,...
- ...SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience. Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS). Ready to Apply? Take the next step in your data engineering career with this exciting opportunity...
- ...lead and coach junior developers. Desired Skills & Experience: Familiarity with distributed data/computing tools (e.g., Hadoop, Hive, Spark, MySQL). Background in financial business such as banking or risk management. Experience with AI Dev tools such...
- ...Build and enhance applications using Java, Python, and big data technologies Develop and optimize data processing systems within the Hadoop ecosystem Analyze, troubleshoot, and resolve software issues from internal and external stakeholders Document system architecture,...
$79 - $85 per hour
- ...Agentic AI frameworks: LangGraph, LangChain, A2A Programming Languages: Java, Scala, SQL, HiveQL Big Data Technologies: Hadoop, Spark, HDFS, Hive, Cloudera, Hortonworks Cloud Platforms: AWS (Glue, Lambda, Redshift, S3, CloudWatch) ETL / ELT Tools: AWS...
$71.42 - $90.22 per hour
- ...infrastructure tooling, including Ansible, Chef, Terraform, Jenkins, Docker, Kubernetes Experience with big data technologies: Apache Hadoop, Hive, Spark ecosystem Familiarity with Google Cloud infrastructure and security Experience with Java and Spring Boot,...
- ...Must be Local to Reston, NO RELO - OnSite 3 days a week. Top 5 Technical Skills: Python (Big Data Pipeline) AWS Hadoop, Spark, Hive EMR Terraform Job Description: Strong Python development to build a big-data pipeline for data processing and analysis Need...
- ...platforms. Ideally will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop, Spark etc. Responsibilities : Design, develop, and maintain large-scale data processing pipelines using Big Data technologies...
- ...Solid Angular frontend experience Hands-on AWS experience (Lambda, ECS, S3, RDS, EMR) Big Data experience is required: Hadoop, Spark, Presto, or EMR. WHAT YOU'LL DO: Design and build full stack applications using Java, Angular, and AWS Partner...
- ...Job Description Preferred At least 2 years of experience in Big Data space. Strong Hadoop - MapReduce/Hive/Pig/Sqoop/Oozie - MUST Candidate should have hands-on experience with Java, APIs, Spring - MUST Good exposure to...
- ...Microsoft Azure Automation & Configuration: Ansible, Chef Databases & Data Platforms: Oracle, SQL Server, PostgreSQL, Hadoop, Spark, NoSQL Security: Infrastructure security architecture, compliance, policies, and controls Resilience: Backup...
- ...in solving the most complex business challenges and delivering transformation at scale. For more information, please visit Title: Hadoop Administrator Location: Tampa, FL (Hybrid - 3 days) Deploying and maintaining the Hadoop cluster, adding and removing nodes using cluster...
- ...Foundation : Strong understanding of statistics, linear algebra and calculus as applied to ML. Big Data : Experience with Spark, Hadoop or similar distributed computing frameworks. Cloud Platforms : Familiar with AI services on AWS, Azure or Google Cloud...
- ...Airflow) Knowledge of real-time inference and batch processing systems Familiarity with big data technologies (Spark, Kafka, Hadoop) Experience with containerization (Docker, Kubernetes) Background in applied AI domains (NLP, computer vision,...
- Hadoop/Analytics Developer - HYBRID (JACKSONVILLE, FL) ARC Group has an immediate opportunity for a Hadoop Developer! This is starting out as a 12 month contract position with potential to extend longer or possibly convert to FTE. The position is hybrid, with a couple...
- A recruiting and consulting agency is looking for a Hadoop/Analytics Developer in Jacksonville, FL. This hybrid role requires expertise in Hadoop, Spark, and Python, with at least 3 years of relevant experience in IT design and coding. The developer will work on managing...
- ...PySpark, and Apache Flink for real‑time and batch processing of large datasets Implement and manage data storage solutions leveraging Hadoop Distributed File System (HDFS) and data lake table formats such as Apache Iceberg Build and optimize data models and schema...