- ...native software solutions and architectures on Azure, AWS, or GCP; 6+ years working with Big Data technologies such as Apache Spark, Hadoop, Cassandra, and distributed compute frameworks; hands-on experience designing data platforms using Databricks, Microsoft Fabric, and... (Immediate start)
- ...analytic tools like R and Python, and visualization tools like Qlik or Tableau; exposure to cloud platforms and big data systems such as Hadoop HDFS and Hive is a plus; ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes... (Work experience placement, Local area)
- ...scripting (such as Python); expertise working with large-scale big data systems, architectures, and implementation (Splunk, Hadoop, or Cassandra); working experience with NoSQL data platforms (Cassandra or MongoDB); must have development experience. Ansible,... (Work experience placement)
- ...data aggregation, shards, replicas, etc. Must have: development experience with Elasticsearch (Splunk is similar); Big Data experience (Hadoop, MongoDB, Cassandra, Kafka, Splunk). Qualifications & expertise: Logstash: manipulate input data through pipelines, filters, and...
- ...testable and scalable code. Responsibilities: designing the environment, defining implementation steps, and driving the transition from Hadoop to a new open-source solution. Qualifications: Bachelor's degree or equivalent experience in Computer Science or a related field; open-... (Contract work)
- ...experience with ETL/ELT tools (e.g., Informatica, Talend, Apache NiFi). Experience with big data technologies (e.g., Apache Spark, Hadoop ecosystem). Proficiency with data warehousing platforms (e.g., Snowflake, Redshift, BigQuery). Experience with cloud...
- ...information retrieval concepts, relevance metrics, and evaluation methods. Familiarity with large-scale data processing (e.g., Spark, Hadoop) is a plus. Key responsibilities: design, develop, and implement search ranking models using Learning to Rank approaches... (Immediate start)
- ...monitoring, performance tuning, and capacity planning. Distributed systems: strong hands-on experience with Spark, Flink, and Kafka. Hadoop ecosystem: proficiency in Hadoop cluster administration and operations. Cloud & containers: deep understanding of AWS and... (Work at office, Home office)
- ...provider in Wisconsin is looking for a mid-senior-level professional to develop high-quality applications and drive the transition from Hadoop to next-gen open-source solutions. The ideal candidate will have a Bachelor's degree in Computer Science and be proficient in...
- ...customizing Splunk applications and dashboards, integrating with external systems, and working with big data architectures such as Hadoop and Cassandra. Candidates should have at least 6 years of experience along with strong skills in Python and Agile practices. This position...
- ...public cloud providers such as AWS, Azure, or GCP. Expertise in one of the following: Data Engineering technologies (e.g., Spark, Hadoop, Kafka) or Data Science and Machine Learning technologies (e.g., pandas, scikit-learn, PyTorch, TensorFlow). Available to travel to customers... ($180k - $247.5k, Local area)
- ...tech stack. Experienced with Infrastructure as Code (IaC). Experience with big data technologies such as Apache Spark or Hadoop. Stay informed about the ethical implications of machine learning, e.g., selection bias. Model training, data analytics... (Contract work)
- ...experimental design (A/B testing); strong data engineering capabilities including SQL/NoSQL database programming, distributed computing tools (Hadoop, Spark, Kafka), data pipeline development, and experience with cloud platforms (AWS, Azure, Google Cloud Platform); production ML...
- ...programming and 3+ years of MLOps experience in production environments. 5+ years with Big Data platforms such as BigQuery or Hadoop and 3+ years with PySpark. 2+ years building APIs, preferably with FastAPI, and integrating with Google Cloud Platform/Azure or... (Internship, 3 days per week)
- ...drift monitoring. Use Azure Data Factory (ADF) and Azure Databricks for orchestrated, scalable data processing; use AWS EMR for Hadoop/Spark workloads supporting AI features. Build agentic AI solutions: design secure tool-calling and multi-agent orchestration... (Local area)
- ...Develop and optimize ETL/ELT processes, data flows, and infrastructure leveraging AWS, SQL, and platforms such as Databricks, Redshift, Hadoop, and Airflow. Assemble and manage large, complex datasets to meet functional and non-functional business requirements... (Long term contract, Remote work)
- ...Experience with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). Strong analytical skills and the ability to work with complex datasets... (Local area, Remote work)
- ...Experience with cloud platforms (AWS, Azure, or Google Cloud Platform). Knowledge of ETL/ELT pipelines and big data technologies (Spark, Hadoop). Experience with AI/ML frameworks (TensorFlow, PyTorch, or similar). Strong programming skills (Python, SQL, or Scala). Experience... (Full time, Remote work)
- ...Data Engineer (PySpark, Hadoop, Scala) - Walmart - Sunnyvale, CA (hybrid). Skill set: PySpark, Hadoop, Scala, ETL. Day to day: working on Walmart signals and tables; preparing and processing raw data from users and creating tables for Walmart... (a rough sketch of this kind of PySpark ETL appears after the listings)
- ...Cloud. Familiarity with version control (Git) and CI/CD for data pipelines. Familiarity with Big Data technologies such as Hadoop, Spark, and Kafka is a plus. Excellent problem-solving and analytical skills. Strong communication and collaboration... (Contract work, Work at office, Local area, 1 day per week)
- ...with Google Cloud Platform, especially BigQuery; prior experience with Teradata; familiarity with the Hadoop ecosystem; exposure to tools such as Dremio and distributed storage systems; cloud certifications (Google Cloud Platform preferred... (Contract work, Visa sponsorship)
- ...SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience. Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS). Ready to apply? Take the next step in your data engineering career with this exciting opportunity... (Contract work)
- ...solutions in virtualized environments. Experience in one or more of the following: networking, DevOps, security, compute, storage, Hadoop, Kubernetes, or SRE. Networking: experience delivering end-to-end networking designs and architectures that include DNS,... (Contract work, Remote work)
- ...Build and enhance applications using Java, Python, and big data technologies; develop and optimize data processing systems within the Hadoop ecosystem; analyze, troubleshoot, and resolve software issues from internal and external stakeholders; document system architecture,...
- ...platforms. The ideal candidate will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop, Spark, etc. Responsibilities: design, develop, and maintain large-scale data processing pipelines using Big Data technologies... (Contract work, Work experience placement, Work at office, Local area)
- ...lead and coach junior developers. Desired skills & experience: familiarity with distributed data/computing tools (e.g., Hadoop, Hive, Spark, MySQL); background in financial businesses like banking and risk management; experience with AI dev tools such... (Contract work, Worldwide, 2 days per week, 3 days per week)
- ...Agentic AI frameworks: LangGraph, LangChain, A2A. Programming languages: Java, Scala, SQL, HiveQL. Big Data technologies: Hadoop, Spark, HDFS, Hive, Cloudera, Hortonworks. Cloud platforms: AWS (Glue, Lambda, Redshift, S3, CloudWatch). ETL/ELT tools: AWS... ($79 - $85 per hour; Hourly pay, Full time, Contract work, Work at office, 3 days per week, $71.42 - $90.22 per hour)
- ...infrastructure tooling, including Ansible, Chef, Terraform, Jenkins, Docker, Kubernetes; experience with big data technologies: Apache Hadoop, Hive, Spark ecosystem; familiarity with Google Cloud infrastructure and security; experience with Java and Spring Boot,... (Contract work, Remote work)
- ...Must be local to Reston, no relocation; on-site 3 days a week. Top 5 technical skills: Python (big data pipeline), AWS, Hadoop/Spark/Hive, EMR, Terraform. Job description: strong Python development to build a big-data pipeline for data processing and analysis... (Contract work, Work experience placement, Local area, 3 days per week)
- ...Microsoft Azure. Automation & configuration: Ansible, Chef. Databases & data platforms: Oracle, SQL Server, PostgreSQL, Hadoop, Spark, NoSQL. Security: infrastructure security architecture, compliance, policies, and controls. Resilience: backup...
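The Walmart data-engineering listing above boils down to a workflow many of these postings share: ingest raw user data with PySpark and publish it as queryable tables. Below is a minimal sketch of that pattern, assuming hypothetical paths, column names, and table names; none of these details come from the listing itself.

```python
# Minimal PySpark ETL sketch: read raw event data, clean it, and
# register it as a partitioned table. All paths, columns, and the
# table name are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("raw-events-etl")
    .enableHiveSupport()  # lets saveAsTable persist to the metastore
    .getOrCreate()
)

# Extract: raw JSON events landed by an upstream process (hypothetical path).
raw = spark.read.json("hdfs:///data/raw/events/")

# Transform: drop malformed rows, normalize types, derive a date partition.
clean = (
    raw.dropna(subset=["user_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write a partitioned table that downstream jobs can query.
(
    clean.write
         .mode("overwrite")
         .partitionBy("event_date")
         .saveAsTable("analytics.user_events")
)

spark.stop()
```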