Average salary: $145,318 per year
- ...following technologies: Extraction & Logic: Kafka, ANSI SQL, FTP, Apache Spark; Data Formats: JSON, Avro, Parquet; Platforms: Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, Sybase IQ. Core Competencies: Demonstrates strong integrity and consistently models...
- ...inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer - Python AND Kafka AND (Hadoop OR HDFS OR Hive) AND Snowflake AND (Apache Iceberg) to join our team in Bangalore, Karnataka (IN-KA), India (IN). "**Basic...
- SIX Software Engineers (Java, Spring, Spring Boot, Apache NiFi, Maven, Hadoop, MongoDB, MS Access, Accumulo) in Baltimore, MD Accumulo, Apache NiFi, Hadoop, Java, Maven, MS Access, Spring Boot, Spring Frameworks Location: Maryland Job Function: Software Development...
- Sr Developer (Machine Learning, Distributed Ledger Technology, NLP, Image Analytics, Python, R, Hadoop, AWS) in McLean, VA AWS, Distributed Ledger Technology, Hadoop, Image Analytics, Java, Machine Learning, Natural Language Processing, Python, R Location: Virginia...
- Senior Data Quality Assurance Engineer (Informatica, Tableau, QlikView, Quality Center, Data Strategies, Data Delivery, Hadoop) in Charlotte, NC Data Delivery, data quality, Data Quality Assurance Engineer, Data Strategies, Hadoop, Informatica, Jenkins, QlikView, Quality...
- Data Systems Engineer (AWS, Snowflake, RedShift, Python, Scala, Hadoop, Spark, Kafka, Hive, API, Handling, API Development, Data Migration, Batch Data Pipelines) in Charlotte, NC API Development, AWS, AWS Lambda, Data Migration, Hadoop, Java, Oracle, Snowflake, SQL Server...
- Data Warehouse Architect (Hadoop, Erwin, Oracle, Cognos, Tableau, R, Python) in Pittsburgh or Cleveland Business Objects, Cloudera, Cognos, Data Warehouse Architecture, Hadoop, OBIEE, Python, R, Tableau Location: Pennsylvania Job Function: Data Warehouse Architect...
- ...monitoring, performance tuning, and capacity planning. Distributed Systems: Strong hands-on experience with Spark, Flink, and Kafka. Hadoop Ecosystem: Proficiency in Hadoop Cluster Administration and Operations. Cloud & Containers: Deep understanding of AWS and...
- ...experience with ETL/ELT tools (e.g., Informatica, Talend, Apache NiFi). Experience with big data technologies (e.g., Apache Spark, Hadoop ecosystem). Proficiency in data warehousing concepts (e.g., Snowflake, Redshift, BigQuery). Experience with cloud...
- ...information retrieval concepts, relevance metrics, and evaluation methods. Familiarity with large-scale data processing (e.g., Spark, Hadoop) is a plus. Key Responsibilities: Design, develop, and implement search ranking models using Learning to Rank approaches...
- ...Role Overview We are looking for a highly experienced Senior Data Engineer with strong hands-on expertise in PySpark, Python, Hadoop, ETL, RDBMS, and Unix as primary skills. The role also requires exposure to GCP Vertex AI and Agentic AI as secondary skills,...
- ...programming and 3+ years of MLOps experience in production environments. 5+ years with Big Data platforms such as BigQuery or Hadoop and 3+ years with PySpark. 2+ years building APIs, preferably with FastAPI, and integrating with Google Cloud Platform/Azure or...
- ...experimental design (A/B testing) Strong data engineering capabilities including SQL/NoSQL database programming, distributed computing tools (Hadoop, Spark, Kafka), data pipeline development, and experience with cloud platforms (AWS, Azure, Google Cloud Platform) Production ML...
- ...Tech stack. Experienced with Infrastructure as Code (IaC). Experience with big data technologies such as Apache Spark or Hadoop. Stay informed about the ethical implications of machine learning, e.g., selection bias. Model Training Data Analytics...
- ...with cloud platforms (AWS, Azure, or Google Cloud Platform). Knowledge of ETL/ELT pipelines and big data technologies (Spark, Hadoop). Experience with AI/ML frameworks (TensorFlow, PyTorch, or similar). Strong programming skills (Python, SQL, or Scala)...
- ...drift monitoring. Use Azure Data Factory (ADF) and Azure Databricks for orchestrated, scalable data processing; use AWS EMR for Hadoop/Spark workloads supporting AI features. Build Agentic AI Solutions Design secure tool-calling and multi-agent orchestration...
- ...Cloud. Familiarity with version control (Git) and CI/CD for data pipelines. Familiarity with Big Data technologies such as Hadoop, Spark, and Kafka is a plus. Excellent problem-solving and analytical skills. Strong communication and collaboration...
- ...Experience with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). Strong analytical skills and the ability to work with complex datasets...
- ...Data Engineer (PySpark, Hadoop, Scala) - Walmart - Sunnyvale, CA (hybrid) Skill Set: PySpark, Hadoop, Scala, ETL. Day to Day: Working on Walmart signals and tables. Preparing and processing raw data from users and creating tables for Walmart...
- ...Develop and optimize ETL/ELT processes, data flows, and infrastructure leveraging AWS, SQL, and platforms such as Databricks, Redshift, Hadoop, and Airflow. Assemble and manage large, complex datasets to meet functional and non-functional business requirements...
- ...with Google Cloud Platform (GCP), especially BigQuery. Prior experience with Teradata. Familiarity with the Hadoop ecosystem. Exposure to tools such as Dremio and distributed storage systems. Cloud certifications (Google Cloud Platform preferred...
- ...SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS) Ready to Apply? Take the next step in your data engineering career with this exciting opportunity...
- ...solutions in virtualized environments. Experience in one or more of the following: Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or SRE. Networking - Experience delivering end-to-end networking designs and architectures that include DNS,...
- ...and container tools (Docker, Kubernetes). Understanding of data engineering, ETL pipelines, and big data technologies (Spark, Hadoop, Kafka). Proficiency with software engineering best practices (CI/CD, Git, testing, modular design). Strong communication and...
- ...platforms. Ideally will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop and Spark. Responsibilities: Design, develop, and maintain large-scale data processing pipelines using Big Data technologies...
- ...lead and coach junior developers. Desired Skills & Experience: Familiarity with distributed data/computing tools (e.g., Hadoop, Hive, Spark, MySQL). Background in financial businesses such as banking and risk management. Experience with AI Dev tools such...
- ...Build and enhance applications using Java, Python, and big data technologies Develop and optimize data processing systems within the Hadoop ecosystem Analyze, troubleshoot, and resolve software issues from internal and external stakeholders Document system architecture,...
- ...Proficiency: Hands-on experience with production-grade data solutions (relational and NoSQL) and preferably big data frameworks like Hadoop or Spark. Technical Communication: Excellent ability to articulate complex technical concepts to both technical and non-...
$79 - $85 per hour
...Agentic AI frameworks: LangGraph, LangChain, A2A Programming Languages: Java, Scala, SQL, HiveQL Big Data Technologies: Hadoop, Spark, HDFS, Hive, Cloudera, Hortonworks Cloud Platforms: AWS (Glue, Lambda, Redshift, S3, CloudWatch) ETL / ELT Tools: AWS...