  •  ...native software solutions and architectures on Azure, AWS, or GCP 6+ years working with Big Data technologies such as Apache Spark, Hadoop, Cassandra, and distributed compute frameworks Hands‑on experience designing data platforms using Databricks, Microsoft Fabric, and... 
    Suggested
    Immediate start

    Premier Inn Hotels LLC (UAE)

    Wisconsin
    2 days ago
  •  ...analytic tools like R and Python, and visualization tools like Qlik or Tableau Exposure to cloud platforms and big data systems such as Hadoop HDFS, Hive is a plus Ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes... 
    Suggested
    Work experience placement
    Local area

    Tiger Analytics

    Wisconsin
    2 days ago
  •  ...Scripting (such as Python or other); Expertise working with large-scale big data systems, architectures, and implementation - Splunk, Hadoop, or Cassandra. Working experience with NoSQL data platforms (Cassandra or MongoDB). Must have development experience. Ansible,... 
    Suggested
    Work experience placement

    Big Quest Solutions

    Wisconsin
    4 days ago
  •  ...data aggregation, shards, replicas etc. Must Have: Development Experience with ElasticSearch (Splunk is similar) Big Data experience (Hadoop, MongoDB, Cassandra, Kafka, Splunk) Qualifications & Expertise: Logstash: Manipulate input data through pipelines, filters and... 
    Suggested

    Skillstalent52938

    Wisconsin
    4 days ago
  •  ...testable and scalable code. Responsibilities Designing the environment, defining implementation steps, and driving the transition from Hadoop to a new open-source solution. Qualifications Bachelor's degree or equivalent experience in Computer Science or related field Open-... 
    Suggested
    Contract work

    Mastech Digital

    Wisconsin
    2 days ago
  •  ...experience with ETL/ELT tools (e.g., Informatica, Talend, Apache NiFi). Experience with big data technologies (e.g., Apache Spark, Hadoop ecosystem). Proficiency in data warehousing concepts (e.g., Snowflake, Redshift, BigQuery). Experience with cloud... 
    Suggested

    Openkyber

    Wisconsin
    4 hours ago · new
  •  ...information retrieval concepts, relevance metrics, and evaluation methods. Familiarity with large-scale data processing (e.g., Spark, Hadoop) is a plus. Key Responsibilities: Design, develop, and implement search ranking models using Learn to Rank approaches.... 
    Suggested
    Immediate start

    Openkyber

    Wisconsin
    1 day ago
  •  ...monitoring, performance tuning, and capacity planning. Distributed Systems: Strong hands-on experience with Spark, Flink, and Kafka. Hadoop Ecosystem: Proficiency in Hadoop Cluster Administration and Operations. Cloud & Containers: Deep understanding of AWS and... 
    Suggested
    Work at office
    Home office

    Openkyber

    Wisconsin
    4 hours ago · new
  •  ...provider in Wisconsin is looking for a Mid-Senior level professional to develop high-quality applications and drive the transition from Hadoop to next-gen open-source solutions. The ideal candidate will have a Bachelor's degree in Computer Science and be proficient in... 
    Suggested

    Mastech Digital

    Wisconsin
    6 hours ago · new
  •  ...customizing Splunk applications and dashboards, integrating with external systems, and working with big data architectures such as Hadoop and Cassandra. Candidates should have at least 6 years of experience along with strong skills in Python and Agile practices. This position... 
    Suggested

    Big Quest Solutions

    Wisconsin
    4 days ago
  • $180k - $247.5k

     ...public cloud providers such as AWS, Azure, or GCP. Expertise in one of the following: Data Engineering technologies (Ex: Spark, Hadoop, Kafka) Data Science and Machine Learning technologies (Ex: pandas, scikit‑learn, PyTorch, TensorFlow) Available to travel to customers... 
    Suggested
    Local area

    Databricks Inc.

    Wisconsin
    15 hours ago
  •  ...Tech stack. Experienced with Infrastructure as Code (IaC). Experience with big data technologies such as Apache Spark or Hadoop. Stay informed about the ethical implications of machine learning, e.g., selection bias. Model Training Data Analytics... 
    Suggested
    Contract work

    Openkyber

    Wisconsin
    4 hours ago · new
  •  ...experimental design (A/B testing) Strong data engineering capabilities including SQL/NoSQL database programming, distributed computing tools (Hadoop, Spark, Kafka), data pipeline development, and experience with cloud platforms (AWS, Azure, Google Cloud Platform) Production ML... 
    Suggested

    Openkyber

    Wisconsin
    1 day ago
  •  ...programming and 3+ years of MLOps experience in production environments. 5+ years with Big Data platforms such as BigQuery or Hadoop and 3+ years with PySpark. 2+ years building APIs, preferably with FastAPI, and integrating with Google Cloud Platform/Azure or... 
    Suggested
    Internship
    3 days per week

    Openkyber

    Wisconsin
    4 hours ago · new
  •  ...drift monitoring. Use Azure Data Factory (ADF) and Azure Databricks for orchestrated, scalable data processing; use AWS EMR for Hadoop/Spark workloads supporting AI features. Build Agentic AI Solutions Design secure tool-calling and multi-agent orchestration... 
    Suggested
    Local area

    Openkyber

    Wisconsin
    2 days ago
  •  ...Develop and optimize ETL/ELT processes, data flows, and infrastructure leveraging AWS, SQL, and platforms such as Databricks, Redshift, Hadoop and Airflow. Assemble and manage large, complex datasets to meet functional and non-functional business requirements.... 
    Long term contract
    Remote work

    Openkyber

    Wisconsin
    3 days ago
  •  ...Experience with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud) Strong analytical skills and the ability to work with complex datasets.... 
    Local area
    Remote work

    Openkyber

    Wisconsin
    2 days ago
  •  ...Experience with cloud platforms (AWS, Azure, or Google Cloud Platform). Knowledge of ETL/ELT pipelines and big data technologies (Spark, Hadoop). Experience with AI/ML frameworks (TensorFlow, PyTorch, or similar). Strong programming skills (Python, SQL, or Scala). Experience... 
    Full time
    Remote work

    Openkyber

    Wisconsin
    1 day ago
  •  ...Data Engineer (PySpark, Hadoop, Scala) - Walmart - Sunnyvale, CA (hybrid) Skill Set: PySpark, Hadoop, Scala, ETL Day to Day: Working on Walmart signals and tables. Preparing and processing raw data from users and creating tables for Walmart.... 

    Openkyber

    Wisconsin
    2 days ago
  •  ...Cloud. Familiarity with version control (Git) and CI/CD for data pipelines. Familiarity with Big Data technologies such as Hadoop, Spark, and Kafka is a plus. Excellent problem-solving and analytical skills. Strong communication and collaboration... 
    Contract work
    Work at office
    Local area
    1 day per week

    Openkyber

    Wisconsin
    2 days ago
  •  ...with Google Cloud Platform (GCP), especially BigQuery Prior experience with Teradata Familiarity with Hadoop ecosystem Exposure to tools such as Dremio and distributed storage systems Cloud certifications (Google Cloud Platform preferred... 
    Contract work
    Visa sponsorship

    Openkyber

    Wisconsin
    4 days ago
  •  ...SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS) Ready to Apply? Take the next step in your data engineering career with this exciting opportunity... 
    Contract work

    Openkyber

    Wisconsin
    6 days ago
  •  ...solutions in virtualized environments. Experience in one or more of the following: Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or SRE. Networking - Experience delivering end-to-end networking designs and architectures that include DNS,... 
    Contract work
    Remote work

    Openkyber

    Wisconsin
    8 days ago
  •  ...Build and enhance applications using Java, Python, and big data technologies Develop and optimize data processing systems within the Hadoop ecosystem Analyze, troubleshoot, and resolve software issues from internal and external stakeholders Document system architecture,... 

    Openkyber

    Wisconsin
    2 days ago
  •  ...platforms. Ideally will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop, Spark etc. Responsibilities : Design, develop, and maintain large-scale data processing pipelines using Big Data technologies... 
    Contract work
    Work experience placement
    Work at office
    Local area

    Openkyber

    Wisconsin
    1 day ago
  •  ...lead and coach junior developers Desired Skills & Experience Familiarity with distributed data/computing tools (e.g., Hadoop, Hive, Spark, MySQL). Background in financial businesses such as banking and risk management. Experience with AI Dev tools such... 
    Contract work
    Worldwide
    2 days per week
    3 days per week

    Openkyber

    Wisconsin
    4 hours ago · new
  • $79 - $85 per hour

     ...Agentic AI frameworks: LangGraph, LangChain, A2A Programming Languages: Java, Scala, SQL, HiveQL Big Data Technologies: Hadoop, Spark, HDFS, Hive, Cloudera, Hortonworks Cloud Platforms: AWS (Glue, Lambda, Redshift, S3, CloudWatch) ETL / ELT Tools: AWS... 
    Hourly pay
    Full time
    Contract work
    Work at office
    3 days per week

    Openkyber

    Wisconsin
    1 day ago
  • $71.42 - $90.22 per hour

     ...infrastructure tooling, including Ansible, Chef, Terraform, Jenkins, Docker, Kubernetes Experience with big data technologies: Apache Hadoop, Hive, Spark ecosystem Familiarity with Google Cloud infrastructure and security Experience with Java and Spring Boot,... 
    Contract work
    Remote work

    Openkyber

    Wisconsin
    6 days ago
  •  ...Must be Local to Reston, NO RELO - OnSite 3 days a week. Top 5 Technical Skills: Python (Big Data Pipeline) AWS Hadoop, Spark, Hive EMR Terraform Job Description: Strong Python development to build a big-data pipeline for data processing and analysis... 
    Contract work
    Work experience placement
    Local area
    3 days per week

    Openkyber

    Wisconsin
    8 days ago
  •  ...Microsoft Azure Automation & Configuration: Ansible, Chef Databases & Data Platforms: Oracle, SQL Server, PostgreSQL, Hadoop, Spark, NoSQL Security: Infrastructure security architecture, compliance, policies, and controls Resilience: Backup... 

    Openkyber

    Wisconsin
    2 days ago