Average salary: $145,318 per year

  •  ...following technologies: Extraction & Logic: Kafka, ANSI SQL, FTP, Apache Spark. Data Formats: JSON, Avro, Parquet. Platforms: Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, Sybase IQ. Core Competencies: Demonstrates strong integrity and consistently models... 

    NTT DATA, Inc.

    Indiana
    27 days ago
  •  ...inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer (Python, Kafka, Hadoop/HDFS/Hive, Snowflake, Apache Iceberg) to join our team in Bangalore, Karnataka (IN-KA), India (IN). Basic... 
    Work at office
    Remote work
    Flexible hours

    NTT DATA, Inc.

    Indiana
    27 days ago
  • Six Software Engineers (Java, Spring, Spring Boot, Apache NiFi, Maven, Hadoop, MongoDB, MS Access, Accumulo) in Baltimore, MD. Accumulo, Apache NiFi, Hadoop, Java, Maven, MS Access, Spring Boot, Spring Framework. Location: Maryland. Job Function: Software Development... 
    Full time
    Remote work

    DBA Web Technologies

    Indiana
    more than 2 months ago
  • Senior Data Quality Assurance Engineer (Informatica, Tableau, QlikView, Quality Center, Data Strategies, Data Delivery, Hadoop) in Charlotte, NC. Data Delivery, Data Quality, Data Quality Assurance Engineer, Data Strategies, Hadoop, Informatica, Jenkins, QlikView, Quality... 
    Permanent employment
    Full time
    Work experience placement
    Remote work

    DBA Web Technologies

    Indiana
    more than 2 months ago
  • Data Systems Engineer (AWS, Snowflake, Redshift, Python, Scala, Hadoop, Spark, Kafka, Hive, API Handling, API Development, Data Migration, Batch Data Pipelines) in Charlotte, NC. API Development, AWS, AWS Lambda, Data Migration, Hadoop, Java, Oracle, Snowflake, SQL Server... 
    Full time
    Local area
    Immediate start
    Remote work

    DBA Web Technologies

    Indiana
    more than 2 months ago
  • Sr Developer (Machine Learning, Distributed Ledger Technology, NLP, Image Analytics, Python, R, Hadoop, AWS) in McLean, VA. AWS, Distributed Ledger Technology, Hadoop, Image Analytics, Java, Machine Learning, Natural Language Processing, Python, R. Location: Virginia... 
    Permanent employment
    Full time
    Remote work
    Relocation

    DBA Web Technologies

    Indiana
    more than 2 months ago
  • Data Warehouse Architect (Hadoop, Erwin, Oracle, Cognos, Tableau, R, Python) in Pittsburgh or Cleveland. Business Objects, Cloudera, Cognos, Data Warehouse Architecture, Hadoop, OBIEE, Python, R, Tableau. Location: Pennsylvania. Job Function: Data Warehouse Architect... 
    Permanent employment
    Full time
    Work experience placement
    Remote work
    Flexible hours

    DBA Web Technologies

    Indiana
    more than 2 months ago
  •  ...information retrieval concepts, relevance metrics, and evaluation methods. Familiarity with large-scale data processing (e.g., Spark, Hadoop) is a plus. Key Responsibilities: Design, develop, and implement search ranking models using Learn to Rank approaches.... 
    Immediate start

    Openkyber

    Indiana
    3 days ago
  •  ...monitoring, performance tuning, and capacity planning. Distributed Systems: Strong hands-on experience with Spark, Flink, and Kafka. Hadoop Ecosystem: Proficiency in Hadoop Cluster Administration and Operations. Cloud & Containers: Deep understanding of AWS and... 
    Work at office
    Home office

    Openkyber

    Indiana
    2 days ago
  •  ...experience with ETL/ELT tools (e.g., Informatica, Talend, Apache NiFi). Experience with big data technologies (e.g., Apache Spark, Hadoop ecosystem). Proficiency in data warehousing concepts (e.g., Snowflake, Redshift, BigQuery). Experience with cloud... 

    Openkyber

    Indiana
    2 days ago
  •  ...Knowledge of Java is a plus for developing custom NiFi processors. Big Data and Streaming Technologies: Experience with Kafka, Hadoop, Spark, or other big data ecosystems is beneficial. Understanding of messaging systems and streaming data pipelines.... 

    Openkyber

    Indiana
    1 day ago
  •  ...Role Overview We are looking for a highly experienced Senior Data Engineer with strong hands‑on expertise in PySpark, Python, Hadoop, ETL, RDBMS, and Unix as primary skills. The role also requires exposure to GCP Vertex AI and Agentic AI as secondary skills,... 
    Work at office
    Remote work
    Flexible hours

    NTT DATA, Inc.

    Indiana
    18 days ago
  •  ...Location: Chicago, IL, Hybrid Pay: Available on W2 Basis Position Overview We are seeking a hands-on AWS Big Data Architect with strong Hadoop and Spark experience to design, develop, and support scalable Big Data Warehouse and Data Lake solutions. This role requires a... 
    Permanent employment
    Contract work
    Temporary work

    Openkyber

    Indiana
    2 days ago
  •  ...Extensive working experience in implementing scalable and efficient data processing pipelines using big data technologies, such as Hadoop/EMR, Spark, Hive. Strong development experience in AWS platforms/services such as Lambda, Eventbridge, Step Functions, Redshift, S3... 
    Work experience placement
    H1b
    Local area

    Openkyber

    Indiana
    6 hours ago (new)
  •  ...programming and 3+ years of MLOps experience in production environments. 5+ years with Big Data platforms such as BigQuery or Hadoop and 3+ years with PySpark. 2+ years building APIs, preferably with FastAPI, and integrating with Google Cloud Platform/Azure or... 
    Internship
    3 days per week

    Openkyber

    Indiana
    2 days ago
  •  ...experimental design (A/B testing) Strong data engineering capabilities including SQL/NoSQL database programming, distributed computing tools (Hadoop, Spark, Kafka), data pipeline development, and experience with cloud platforms (AWS, Azure, Google Cloud Platform) Production ML... 

    Openkyber

    Indiana
    3 days ago
  •  ...Experience with cloud platforms (AWS, Azure, or Google Cloud Platform). Knowledge of ETL/ELT pipelines and big data technologies (Spark, Hadoop). Experience with AI/ML frameworks (TensorFlow, PyTorch, or similar). Strong programming skills (Python, SQL, or Scala). Experience... 
    Full time
    Remote work

    Openkyber

    Indiana
    3 days ago
  •  ...Cloud. Familiarity with version control (Git) and CI/CD for data pipelines. Familiarity with Big Data technologies such as Hadoop, Spark, and Kafka is a plus. Excellent problem-solving and analytical skills. Strong communication and collaboration... 
    Contract work
    Work at office
    Local area
    1 day per week

    Openkyber

    Indiana
    4 days ago
  •  ...Data Engineer (PySpark, Hadoop, Scala) - Walmart - Sunnyvale, CA (hybrid) Skill Set: PySpark, Hadoop, Scala, ETL. Day to Day: Working on Walmart signals and tables. Preparing and processing raw data from users and creating tables for Walmart.... 

    Openkyber

    Indiana
    4 days ago
  •  ...Tech stack. Experienced with Infrastructure as Code (IaC). Experience with big data technologies such as Apache Spark or Hadoop. Stay informed about the ethical implications of machine learning, e.g. selection bias. Model Training Data Analytics... 
    Contract work

    Openkyber

    Indiana
    2 days ago
  •  ...drift monitoring. Use Azure Data Factory (ADF) and Azure Databricks for orchestrated, scalable data processing; use AWS EMR for Hadoop/Spark workloads supporting AI features. Build Agentic AI Solutions Design secure tool-calling and multi-agent orchestration... 
    Local area

    Openkyber

    Indiana
    4 days ago
  •  ...Experience with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud) Strong analytical skills and the ability to work with complex datasets.... 
    Local area
    Remote work

    Openkyber

    Indiana
    4 days ago
  •  ...Develop and optimize ETL/ELT processes, data flows, and infrastructure leveraging AWS, SQL, and platforms such as Databricks, Redshift, Hadoop and Airflow. Assemble and manage large, complex datasets to meet functional and non-functional business requirements.... 
    Long term contract
    Remote work

    Openkyber

    Indiana
    5 days ago
  •  ...with Google Cloud Platform (GCP), especially BigQuery. Prior experience with Teradata. Familiarity with the Hadoop ecosystem. Exposure to tools such as Dremio and distributed storage systems. Cloud certifications (Google Cloud Platform preferred... 
    Contract work
    Visa sponsorship

    Openkyber

    Indiana
    6 days ago
  • $79 - $85 per hour

     ...Agentic AI frameworks: LangGraph, LangChain, A2A. Programming Languages: Java, Scala, SQL, HiveQL. Big Data Technologies: Hadoop, Spark, HDFS, Hive, Cloudera, Hortonworks. Cloud Platforms: AWS (Glue, Lambda, Redshift, S3, CloudWatch). ETL / ELT Tools: AWS... 
    Hourly pay
    Full time
    Contract work
    Work at office
    3 days per week

    Openkyber

    Indiana
    1 day ago
  •  ...platforms. Ideally will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop, Spark etc. Responsibilities : Design, develop, and maintain large-scale data processing pipelines using Big Data technologies... 
    Contract work
    Work experience placement
    Work at office
    Local area

    Openkyber

    Indiana
    1 day ago
  •  ...and container tools (Docker, Kubernetes). Understanding of data engineering, ETL pipelines, and big data technologies (Spark, Hadoop, Kafka). Proficiency with software engineering best practices (CI/CD, Git, testing, modular design). Strong communication and... 

    NTT DATA, Inc.

    Indiana
    18 days ago
  •  ...technically related discipline. Minimum of 5+ years of relevant industry experience. Minimum 5 years of experience developing in the Hadoop ecosystem (Spark, PySpark, MapReduce, Hive, Impala). Minimum 5 years of experience with common application frameworks (JEE... 
    Contract work
    Remote work

    Openkyber

    Indiana
    6 hours ago (new)
  •  ..., and production issues. Required Skills: Strong experience with PySpark, Java, and Python. Hands-on experience with the Hadoop ecosystem (Hive, Spark, Impala, Kafka, Sqoop, Oozie, YARN). Solid ETL and data engineering experience. Expertise in SQL Server... 
    Remote work

    Openkyber

    Indiana
    1 day ago