- ...implement end-to-end data solutions using the Databricks Lakehouse Platform. Design and develop scalable data pipelines using PySpark, Spark, and SQL. Lead cloud data platform implementations on AWS/Azure/GCP. Drive modernization of legacy data systems to cloud-native... (Full time)
- ...the Databricks Lakehouse Platform: building Bronze/Silver/Gold layers using Delta Lake and Delta Live Tables (DLT); a minimal DLT sketch appears after the listings. Strong in PySpark, Spark SQL, and Databricks Workflows for orchestration. Proficient with Unity Catalog for governance, lineage, and access control, and... (Remote work)
- ...experience with the Databricks Lakehouse platform (Delta Lake, Unity Catalog, Workflows). Hands-on expertise in big data ecosystems (Spark, EMR, Kafka, Snowflake, MongoDB, RDBMS). Proven experience in large-scale data migration projects. Knowledge of data architecture...
- ...engineering, with at least 2–3 years in Databricks administration. Expert knowledge of Unity Catalog, cluster policies, Delta Lake, Spark, workspace configuration, and jobs. Strong grounding in data governance, data modeling, ingestion frameworks, schema...
- ...years of experience building data pipelines in cloud environments; 4+ years of experience with Big Data technologies (e.g., Spark, Hadoop) and cloud architecture; 3+ years of experience with reporting and analytics tools (e.g., Tableau, Power BI); hands... (Long term contract)
- ...Proficient in SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience. Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS). Ready to apply? Take the next step in your data engineering career with this exciting... (Contract work)
- ...petabyte-level data systems. Experience with cloud-native data tools and architectures (e.g., Redshift, Glue, Airflow, Apache Spark). Proficient in automated testing frameworks (PyTest, Playwright, or Jest) and testing best practices. Experience developing... (Long term contract, Remote work)
- ...Experience with cloud platforms and containerization (Docker, Kubernetes). Familiarity with data engineering tools (e.g., Airflow, Spark) and MLOps frameworks. Solid understanding of software engineering principles and DevOps practices. Ability to...
- ...Must be local to Reston, no relocation; on-site 3 days a week. Top 5 technical skills: Python (big data pipeline); AWS; Hadoop, Spark, Hive; EMR; Terraform. Job description: strong Python development to build a big-data pipeline for data processing and analysis. Need strong experience... (Contract work, Work experience placement, Local area, 3 days per week)
- ...and SQL. Hands-on experience with AWS services (S3, Glue, Redshift, Lambda, EMR). Experience with big data tools like Apache Spark, Hadoop, or Kafka. Knowledge of machine learning concepts and model lifecycle. Experience with ETL/ELT frameworks and data... (Remote work)
- ...preferred. Experience with Big Data technologies and developing in the Hadoop ecosystem (e.g., Databricks, Hadoop, HBase, Hive, Scala, Spark, Sqoop, Flume, Kafka, Python). Experience with Oracle and/or Postgres, NoSQL with Yugabyte, Cassandra, and/or Cosmos a plus... (Full time, Local area, Work from home, Relocation package)
- ...Preferred qualifications: experience with streaming APIs like Kafka; understanding of Big Data/Data Lake technologies (Spark, Hadoop, Databricks, etc.); understanding of design patterns and clean coding; understanding of the technical aspects of analytic applications... (Local area)
- ...resume and contact details. Core technical skills: SQL Server, SQL Server Integration Services, Azure Synapse, Spark, Microsoft Fabric. Required skills & experience: Bachelor's or Master's degree in computer science, information... (Contract work, Work at office, Remote work)
- ...equivalent): AWS or Azure Data Platform Services, Postgres/Oracle/DB2, Collibra, Databricks, Delta Lake, Python, Snowflake, ETL tools (Spark, etc.), and CI/CD pipelines supporting a Data Lakehouse. Expertise in real-time and batch data ingestion architectures (Kafka/Event... (Hourly pay, Contract work)
- ...Cleansing, deduplication, parsing, and merging of high-volume datasets; parsing EBCDIC/COBOL-formatted VSAM files using the Spark-Cobol library (an ingestion sketch appears after the listings); connecting to Db2 databases using JDBC drivers for ingestion. For applications and inquiries, contact: hirings... (Remote work)
- $69 - $74 per hour. ...cloud computing experience. 3+ years of experience with Google BigQuery. Preferred qualifications: 5+ years of experience with Spark and Python; 3+ years of experience with secure DevOps practices and deployment automation in cloud environments; 3+ years of experience... (Hourly pay)
- ...7+ years of experience in Software Engineering; 4+ years building big data pipelines; 4+ years of experience with Apache Spark (PySpark/Spark SQL), Hive and Iceberg tables, and SQL/SQL Server or other RDBMS. Strong programming experience in Python, PySpark... (Contract work, Remote work)
- ...transformation, and data reliability. Real-time systems: build streaming solutions using Kafka or Azure Event Hubs; use Spark Structured Streaming for high-volume data processing (a streaming sketch appears after the listings). API and integration: lead integrations using Spring Boot, Dell Boomi,...
- $75 - $80 per hour. ...Qualifications: knowledge of MLOps practices and ML pipelines; experience with data platforms such as Snowflake, Databricks, or Spark; familiarity with AI frameworks like TensorFlow or PyTorch; cloud certifications such as AWS Certified Solutions Architect... (Full time, Contract work, Temporary work, Work experience placement, Immediate start, Worldwide, Flexible hours)
- ...DevOps and CI/CD pipelines; expertise in Docker and Kubernetes; experience with Terraform (IaC tools); knowledge of Apache Spark and Hadoop (preferred); strong problem-solving and communication skills. Preferred qualifications: Bachelor's or Master'... (Contract work)
- ...platform engineer to design and build a containerized API layer that abstracts and governs interactions with engines such as Apache Spark through a well-defined API contract (a rough sketch of this pattern appears after the listings). This role focuses on building platform capabilities, not simply consuming existing data tools, enabling... (Contract work)
- ...related discipline. Minimum of 5+ years of relevant industry experience. Minimum 5 years of experience developing in the Hadoop ecosystem (Spark, PySpark, MapReduce, Hive, Impala). Minimum 5 years of experience with common application frameworks (JEE, Spring Boot, Struts,... (Remote work)
- ..., improving database performance, and developing quality audits. Proficient with the Azure stack, including Synapse Analytics, Spark/Python, Azure SQL, Azure Data Factory (ADF), and Kusto, among others. Knowledgeable in ETL tools, particularly Azure Data Services,... (Contract work)
- $76.87 - $81.87 per hour. ...capabilities. 5+ years of experience in data engineering, including hands-on experience working with cloud data solutions: creating/supporting Spark-based ingestion and processing. 3+ years of experience with data lakehouse architecture and design, including hands-on experience... (Hourly pay, Contract work, Temporary work, Work experience placement)
- ...Responsibilities: expertise in big data processing, Core Java, and Apache Spark, particularly within the finance domain. Should have strong experience working with financial instruments, market risk, and large-scale distributed computing systems. Develop...
- ...develop scalable data transformation pipelines using DBT Cloud. Architect and implement Databricks-based data solutions (Delta Lake, Spark). Build and optimize data models (star/snowflake schemas) for analytics (a star-schema sketch appears after the listings). Develop ETL/ELT pipelines using a modern data stack...
- ...generation, and intelligent automation. Work with large structured and unstructured datasets using tools like Pandas, NumPy, and Spark. Implement models in production environments using Python, TensorFlow, PyTorch, or scikit-learn. Conduct exploratory...
- ...Google Cloud Platform). Familiarity with REST APIs and microservices architecture. Experience with data processing tools (Spark, Pandas, etc.). Preferred qualifications: experience working in the data security/cybersecurity domain; knowledge of MLOps... (Contract work)
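A few of the listings name techniques concrete enough that a short sketch helps. For the Bronze/Silver/Gold Databricks listing: a minimal medallion sketch, assuming a Delta Live Tables pipeline in which the runtime supplies `spark`; the table names, source path, and `event_id` column are hypothetical, not taken from the posting.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events landed as-is from cloud storage")
def bronze_events():
    # Auto Loader picks up new files incrementally; the path is a placeholder.
    return (
        spark.readStream.format("cloudFiles")  # `spark` is supplied by the DLT runtime
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/events/")
    )

@dlt.table(comment="Silver: cleaned, deduplicated events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # hypothetical key column
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .dropDuplicates(["event_id"])
        .withColumn("ingested_at", F.current_timestamp())
    )

@dlt.table(comment="Gold: daily aggregates for reporting")
def gold_daily_counts():
    return (
        dlt.read("silver_events")
        .groupBy(F.to_date("ingested_at").alias("day"))
        .count()
    )
```

The division of labor is the point of the pattern: Bronze preserves the raw feed, Silver applies expectations and deduplication, Gold serves query-ready aggregates.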
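For the mainframe-ingestion listing (EBCDIC/COBOL VSAM files plus Db2): a hedged sketch using the open-source Cobrix data source that the "Spark-Cobol library" phrasing suggests. The copybook, file paths, host, credentials, and the `customer_id` merge key are invented, and the package version is only illustrative.

```python
from pyspark.sql import SparkSession

# Cobrix must be on the classpath, e.g.
#   spark-submit --packages za.co.absa.cobrix:spark-cobol_2.12:2.7.0 ...
# (the version shown is illustrative, not pinned by the posting).
spark = SparkSession.builder.appName("vsam-ingest").getOrCreate()

# Parse the EBCDIC file, deriving the schema from its COBOL copybook.
vsam_df = (
    spark.read.format("cobol")
    .option("copybook", "/schemas/customer.cpy")  # hypothetical copybook
    .option("encoding", "ebcdic")
    .load("/landing/vsam/customer.dat")           # hypothetical data file
)

# Pull a Db2 table over JDBC (the IBM driver jar must also be supplied).
db2_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://db2-host:50000/SAMPLEDB")  # placeholder host/db
    .option("driver", "com.ibm.db2.jcc.DB2Driver")
    .option("dbtable", "APP.CUSTOMERS")                   # placeholder table
    .option("user", "dbuser")
    .option("password", "dbpass")
    .load()
)

# Example of the cleansing/merging step the listing mentions: dedupe on a
# shared key (customer_id is an assumption) and merge the two sources.
merged = (
    vsam_df.join(db2_df, on="customer_id", how="outer")
    .dropDuplicates(["customer_id"])
)
```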
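For the listing pairing Kafka/Event Hubs with Spark Structured Streaming: the standard read-transform-write skeleton. Brokers, topic, and paths are placeholders, and the Kafka source requires the matching `spark-sql-kafka-0-10` package on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Subscribe to a topic; brokers and topic name are placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers raw bytes; cast the value before parsing downstream.
parsed = events.select(F.col("value").cast("string").alias("payload"))

# Append to a Delta sink with checkpointing for restart bookkeeping;
# both paths are placeholders ("console" works for local experiments).
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/orders")
    .outputMode("append")
    .start("/tables/orders_raw")
)
query.awaitTermination()
```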
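For the platform-engineering listing about an API layer governing engines like Spark: a very rough sketch of the idea, not the employer's design. It assumes FastAPI, a job allow-list, and a fire-and-forget `spark-submit`; every endpoint shape, field, and path here is an assumption.

```python
import subprocess
import uuid

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class JobRequest(BaseModel):
    # The governed contract: callers name a registered job, not arbitrary code.
    job_name: str
    parameters: dict[str, str] = {}

# Hypothetical allow-list mapping job names to vetted entry points.
REGISTERED_JOBS = {"daily_aggregate": "/opt/jobs/daily_aggregate.py"}

@app.post("/v1/jobs")
def submit_job(req: JobRequest):
    if req.job_name not in REGISTERED_JOBS:
        raise HTTPException(status_code=404, detail="unknown job")
    run_id = str(uuid.uuid4())
    args = ["spark-submit", REGISTERED_JOBS[req.job_name]]
    for key, value in req.parameters.items():
        args += [f"--{key}", value]
    subprocess.Popen(args)  # fire-and-forget; a real platform would track state
    return {"run_id": run_id, "status": "submitted"}
```

The design choice the listing hints at is exactly this indirection: consumers see a stable, governable contract while the engine behind it can change.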
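Finally, for the listing asking for star/snowflake data models: a small PySpark illustration of splitting a flat staging table into a dimension and a fact keyed by a surrogate id; the `staging.sales` table and all of its columns are made up.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema").getOrCreate()

staging = spark.table("staging.sales")  # hypothetical flat staging table

# Dimension: one row per customer, with a surrogate key for fact joins.
dim_customer = (
    staging.select("customer_id", "customer_name", "region")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: measures plus the foreign key into the dimension.
fact_sales = (
    staging
    .join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
    .select("customer_sk", "order_date", "quantity", "amount")
)

dim_customer.write.mode("overwrite").saveAsTable("marts.dim_customer")
fact_sales.write.mode("overwrite").saveAsTable("marts.fact_sales")
```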