  • Data Systems Engineer (AWS, Snowflake, Redshift, Python, Scala, Hadoop, Spark, Kafka, Hive, API Handling, API Development, Data Migration, Batch Data Pipelines) in Charlotte, NC API Development, AWS, AWS Lambda, Data Migration, Hadoop, Java, Oracle, Snowflake, SQL Server... 
    Full time
    Local area
    Immediate start
    Remote work

    DBA Web Technologies

    Indiana
    more than 2 months ago
  • $96k - $129k

     ...: Your Opportunity Are you a talented copywriter who can spark brilliant ideas, but also mine technical detail for the golden thread of a compelling message? Someone who can be serious about technology—but not always? Then this may be the role for you. The ideal... 
    Full time
    Work experience placement
    H1B
    Work at office
    Immediate start
    Remote work
    Flexible hours

    New Relic

    Indiana
    2 days ago
  •  ...years of experience building data pipelines in cloud environments ~4+ years of experience with Big Data technologies (e.g., Spark, Hadoop) and cloud architecture ~3+ years of experience with reporting and analytics tools (e.g., Tableau, Power BI) ~ Hands... 
    Long term contract

    Tech3pillars Technologies

    Indiana
    3 days ago
  • $75k - $85k

     ...does not offer visa or work sponsorship. Applicants must be authorized to work in the U.S. without current or future sponsorship. Did we spark your interest? Then please click apply above to access our guided application process. Required Experience: IC... 
    Full time
    Work at office
    Worldwide
    Flexible hours

    GEA

    Indiana
    8 days ago
  •  ...Petabyte-level data systems. Experience with cloud-native data tools and architectures (e.g., Redshift, Glue, Airflow, Apache Spark). Proficient in automated testing frameworks (PyTest, Playwright or Jest) and testing best practices. Experience developing... 
    Long term contract
    Remote work

    Openkyber

    Indiana
    1 day ago
  •  ...Must be Local to Reston, NO RELO - OnSite 3 days a week. Top 5 Technical Skills: Python (Big Data Pipeline) AWS Hadoop, Spark, Hive EMR Terraform Job Description: Strong Python development to build a big-data pipeline for data processing and analysis Need strong experience... 
    Contract work
    Work experience placement
    Local area
    3 days per week

    Openkyber

    Indiana
    2 days ago
  •  ...platform engineer to design and build a containerized API layer that abstracts and governs interactions with engines such as Apache Spark through a well-defined API contract. This role focuses on building platform capabilities, not simply consuming existing data tools, enabling... 
    Contract work

    Openkyber

    Indiana
    3 days ago
  •  ...and SQL ~ Hands-on experience with AWS services (S3, Glue, Redshift, Lambda, EMR) ~ Experience with big data tools like Apache Spark, Hadoop, or Kafka ~ Knowledge of Machine Learning concepts and model lifecycle ~ Experience with ETL/ELT frameworks and data... 
    Remote work

    Openkyber

    Indiana
    3 days ago
  •  ...TensorFlow and PyTorch, focusing on regression, classification, clustering, and recommendation systems. Handle large-scale datasets with Spark, applying distributed computing and parallel processing for efficient storage, processing, and analysis. Develop generative AI... 
    Work at office
    Remote work

    Orison Solutions

    Indiana
    18 days ago
  •  ...resume and contact details. Core Technical Skills: SQL Server SQL Server Integration Services Azure Synapse Spark Microsoft Fabric Required Skills & Experience: ~ Bachelor's or Master's degree in computer science, information... 
    Contract work
    Work at office
    Remote work

    Openkyber

    Indiana
    1 day ago
  •  ...: Our Journey ShopBack started as a spark of inspiration one night in 2014 when Henry and Joel were brainstorming ideas in Henry's car. That lightbulb moment — earning Cashback while shopping online — was just the beginning. Fueled by the countless possibilities, the... 
    Full time
    Local area
    Remote work
    Night shift

    ShopBack

    Indiana
    1 day ago
  •  ...Mobile, etc.) Preferred qualifications Experience with streaming APIs like Kafka Understanding of Big Data/Data Lake technologies (Spark, Hadoop, Databricks, etc.) Understanding of Design patterns and clean coding Understanding of technical aspects of Analytic... 
    Local area

    Openkyber

    Indiana
    4 days ago
  •  ...and SQL ~ Hands-on experience with AWS services (S3, Glue, Redshift, Lambda, EMR) ~ Experience with big data tools like Apache Spark, Hadoop, or Kafka ~ Knowledge of Machine Learning concepts and model lifecycle ~ Experience with ETL/ELT frameworks and data... 
    Remote work

    Openkyber

    Indiana
    1 day ago
  •  ...equivalent): AWS or Azure Data Platform Services, Postgres/Oracle/DB2, Collibra, Databricks, Delta Lake, Python, Snowflake, ETL tools (Spark, etc.), CI/CD pipelines supporting Data Lakehouse Expertise in real time and batch data ingestion architectures (Kafka/Event... 
    Hourly pay
    Contract work

    Openkyber

    Indiana
    1 day ago
  •  ...ingestion, APIs, and serverless/data processing services (e.g., Lambda, Glue). Experience working with Databricks and Apache Spark for scalable data transformation, enrichment, and analytics workflows. Proficiency in Python, Java, or Scala, and strong SQL skills... 
    Full time
    Remote work

    Rishabh RPO

    Indiana
    1 day ago
  • $75 - $80 per hour

     ...Qualifications Knowledge of MLOps practices and ML pipelines Experience with data platforms such as Snowflake, Databricks, or Spark Familiarity with AI frameworks like TensorFlow or PyTorch Cloud certifications such as AWS Certified Solutions Architect... 
    Full time
    Contract work
    Temporary work
    Work experience placement
    Immediate start
    Worldwide
    Flexible hours

    Openkyber

    Indiana
    1 day ago
  •  ...Required Skills: Strong experience in Python, SQL, PySpark Hands-on experience with AWS services: Experience with Apache Spark, Airflow, Kafka Knowledge of Data Warehousing concepts Experience with CI/CD and DevOps tools Strong understanding of... 
    Local area
    2 days per week

    Openkyber

    Indiana
    3 days ago
  •  ...Proficient SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS) Ready to Apply? Take the next step in your data engineering career with this exciting... 
    Contract work

    Openkyber

    Indiana
    9 hours ago (new)
  •  ...transformation, and data reliability Real-time systems Build streaming solutions using Kafka or Azure Event Hubs Use Spark Structured Streaming for high-volume data processing API and integration Lead integrations using Spring Boot, Dell Boomi,... 

    Openkyber

    Indiana
    5 days ago
  •  ...Responsibilities: Expertise in big data processing, Core Java, and Apache Spark, particularly within the finance domain. Should have strong experience working with financial instruments, market risk, and large-scale distributed computing systems. Develop... 

    Openkyber

    Indiana
    5 days ago
  •  ...Google Cloud Platform) ~ Familiarity with REST APIs and microservices architecture ~ Experience with data processing tools (Spark, Pandas, etc.) Preferred Qualifications Experience working in data security / cybersecurity domain Knowledge of MLOps... 
    Contract work

    Openkyber

    Indiana
    5 days ago
  •  ...Cleansing, deduplication, parsing, and merging of high-volume datasets Parsing EBCDIC/COBOL-formatted VSAM files using Spark-Cobol Library Connecting to Db2 databases using JDBC drivers for ingestion For applications and inquiries, contact: hirings... 
    Remote work

    Openkyber

    Indiana
    1 day ago
  •  ...related discipline Minimum of 5+ years of relevant industry experience Minimum 5 years of experience developing in the Hadoop ecosystem (Spark, PySpark, MapReduce, Hive, Impala) Minimum 5 years of experience with common application frameworks (JEE, Spring Boot, Struts,... 
    Remote work

    Openkyber

    Indiana
    4 days ago
  •  ...~7+ years of experience in Software Engineering ~4+ years building big data pipelines ~4+ years of experience with: Apache Spark (PySpark / Spark SQL) Hive and Iceberg tables SQL / SQL Server or other RDBMS ~ Strong programming experience in: Python PySpark... 
    Contract work
    Remote work

    Openkyber

    Indiana
    5 days ago
  •  ..., improving database performance, and developing quality audits. Proficient with the Azure stack, including Synapse Analytics, Spark/Python, Azure SQL, Azure Data Factory (ADF), Kusto, among others. Knowledgeable in ETL tools, particularly Azure Data Services,... 
    Contract work

    Openkyber

    Indiana
    4 days ago
  •  ...develop scalable data transformation pipelines using DBT Cloud Architect and implement Databricks-based data solutions (Delta Lake, Spark) Build and optimize data models (star/snowflake schemas) for analytics Develop ETL/ELT pipelines using modern data stack... 

    Openkyber

    Indiana
    5 days ago
  •  ...ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of Data integration technologies, encompassing Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, Google Dataproc. Showcase... 
    Work at office
    Remote work
    Flexible hours

    NTT DATA, Inc.

    Indiana
    4 days ago
  •  ...generation, and intelligent automation. - Work with large structured and unstructured datasets using tools like Pandas, NumPy, and Spark. - Implement models in production environments using Python, TensorFlow, PyTorch, or scikit-learn. - Conduct exploratory... 

    Openkyber

    Indiana
    5 days ago
  •  ...Years of Exp.: 8+ years exp. Skills: 8+ years of AI/ML, ML algorithms, big data, SQL, testing, tuning, Python, Java or R, ML frameworks; Spark, MATLAB, Databricks, TensorFlow, or scikit-learn; chatbots, NLP, image/data classification, sentiment analysis, regression analysis... 
    Local area
    Remote work

    Openkyber

    Indiana
    8 days ago
  •  ...specifically AWS. Preferred qualifications: Experience with MLOps tools (e.g., MLflow, Kubeflow). Knowledge of big data technologies (Spark, Hadoop, Databricks). Background in NLP, computer vision, or other advanced AI techniques. Relevant certifications (Coursera, edX,... 

    Openkyber

    Indiana
    8 days ago