Average salary: $165,660 per year
- ...Experience in developing APIs on Google Cloud Platform/Azure/API Gateways ~ Experience with data processing technology (Apache Spark, etc.) ~ Experience with data virtualization technology (Tibco DV, Dremio, etc.) ~ Understanding of Agile practices and ability...
- ...years of experience building data pipelines in cloud environments ~ 4+ years of experience with Big Data technologies (e.g., Spark, Hadoop) and cloud architecture ~ 3+ years of experience with reporting and analytics tools (e.g., Tableau, Power BI) ~ Hands... (Long term contract)
$93.2k - $155.4k
...project experience developing and deploying solutions on the Databricks platform ~ 2+ years of experience in SQL, Python, and Apache Spark ~ 2+ years of experience with at least one major cloud platform (AWS, Azure, or GCP) and its surrounding data technologies ~... (Local area)
- ...Proficient in SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience ~ Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS) ~ Ready to apply? Take the next step in your data engineering career with this exciting... (Contract work)
- ...and SQL ~ Hands-on experience with AWS services (S3, Glue, Redshift, Lambda, EMR) ~ Experience with big data tools like Apache Spark, Hadoop, or Kafka ~ Knowledge of Machine Learning concepts and model lifecycle ~ Experience with ETL/ELT frameworks and data... (Remote work)
- ...Experience with cloud platforms and containerization (Docker, Kubernetes). ~ Familiarity with data engineering tools (e.g., Airflow, Spark) and MLOps frameworks. ~ Solid understanding of software engineering principles and DevOps practices. ~ Ability to... (Full time, Part time, Internship, Seasonal work)
- ...techniques and Bayesian methodologies for hypothesis testing. Prior experience using large datasets with distributed computing (e.g., Spark, Hadoop, or other MapReduce tech). Experience working in a fast-moving tech company or startup. Wayve is committed to creating... (Full time)
$75 - $80 per hour
...Qualifications: Knowledge of MLOps practices and ML pipelines ~ Experience with data platforms such as Snowflake, Databricks, or Spark ~ Familiarity with AI frameworks like TensorFlow or PyTorch ~ Cloud certifications such as AWS Certified Solutions Architect... (Full time, Contract work, Temporary work, Work experience placement, Immediate start, Worldwide, Flexible hours)
- ...break it down. Who You Are: Customer Experience Expert: Energize every customer interaction with a warm and helpful vibe, sparking conversation that inspires sales and builds brand love. Brand Ambassador: Stay connected to our newest campaigns and product launches... (Temporary work, Part time, Seasonal work, Local area, Flexible hours, Night shift)
- ...Cleansing, deduplication, parsing, and merging of high-volume datasets ~ Parsing EBCDIC/COBOL-formatted VSAM files using the Spark-Cobol library ~ Connecting to Db2 databases using JDBC drivers for ingestion ~ For applications and inquiries, contact: hirings... (Remote work)
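The mainframe-ingestion posting above pairs two common Spark patterns: parsing COBOL/EBCDIC extracts and reading Db2 over JDBC. A minimal PySpark sketch, assuming the Cobrix spark-cobol package and the IBM Db2 JDBC driver are available on the cluster; the copybook, file paths, table, credentials, and join key are hypothetical:

```python
# Sketch only: assumes spark-cobol (Cobrix) and the Db2 JDBC driver are on the classpath,
# e.g. spark-submit --packages za.co.absa.cobrix:spark-cobol_2.12:<version> --jars db2jcc4.jar
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mainframe-ingestion").getOrCreate()

# Parse an EBCDIC/COBOL-formatted VSAM extract with the Cobrix "cobol" data source.
# The copybook describes the record layout; both paths are placeholders.
vsam_df = (
    spark.read.format("cobol")
    .option("copybook", "/copybooks/accounts.cpy")   # hypothetical copybook
    .option("record_format", "V")                     # variable-length records
    .load("/landing/vsam/accounts.dat")               # hypothetical extract file
)

# Pull reference data from Db2 over JDBC (connection details are placeholders).
db2_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://db2-host:50000/SAMPLEDB")
    .option("dbtable", "SCHEMA.CUSTOMERS")
    .option("user", "db2user")
    .option("password", "db2password")
    .option("driver", "com.ibm.db2.jcc.DB2Driver")
    .load()
)

# Deduplicate and merge the two sources, as the listing describes; the key is invented.
merged = vsam_df.dropDuplicates(["ACCOUNT_ID"]).join(db2_df, "ACCOUNT_ID", "left")
merged.write.mode("overwrite").parquet("/curated/accounts")
```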
$200k - $255k
...Comfortable working with: SQL (strong), Python (working familiarity) ~ Experience with modern data stacks: Databricks / Spark / DBT / Postgres ~ Airflow or similar orchestration tools ➕ Bonus Points: Healthcare / regulated environments, Voice AI / conversational... (Permanent employment, Remote work)
- ...Must be local to Reston, NO RELO; on-site 3 days a week. Top 5 Technical Skills: Python (big data pipeline), AWS, Hadoop, Spark, Hive, EMR, Terraform. Job Description: Strong Python development to build a big-data pipeline for data processing and analysis. Need strong... (Contract work, Work experience placement, Local area, 3 days per week)
- ...platform engineer to design and build a containerized API layer that abstracts and governs interactions with engines such as Apache Spark through a well-defined API contract. This role focuses on building platform capabilities, not simply consuming existing data tools, enabling... (Contract work)
- ...Google Cloud Platform) ~ Familiarity with REST APIs and microservices architecture ~ Experience with data processing tools (Spark, Pandas, etc.) ~ Preferred Qualifications: Experience working in the data security / cybersecurity domain ~ Knowledge of MLOps... (Contract work)
- ...develop scalable data transformation pipelines using DBT Cloud ~ Architect and implement Databricks-based data solutions (Delta Lake, Spark) ~ Build and optimize data models (star/snowflake schemas) for analytics ~ Develop ETL/ELT pipelines using modern data stack...
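For the Databricks/Delta Lake listing above, building a star schema typically means writing conformed dimension and fact tables as Delta. A rough sketch, assuming a Databricks-style environment with Delta support; the source path, table names, and columns are invented for illustration:

```python
# Sketch only: a Delta Lake star-schema build; names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical raw Delta source landed by an upstream ingestion job.
orders = spark.read.format("delta").load("/mnt/raw/orders")

# Dimension table: one row per customer (star-schema dimension).
dim_customer = (
    orders.select("customer_id", "customer_name", "region")
    .dropDuplicates(["customer_id"])
)
dim_customer.write.format("delta").mode("overwrite").saveAsTable("analytics.dim_customer")

# Fact table: order-level grain with a foreign key into the customer dimension.
fact_orders = orders.select(
    "order_id",
    "customer_id",
    "order_date",
    F.col("amount").cast("decimal(18,2)").alias("order_amount"),
)
fact_orders.write.format("delta").mode("overwrite").saveAsTable("analytics.fact_orders")
```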
- ...Familiarity with causal inference methods and experiment design ~ Familiarity with building and maintaining data pipelines (Airflow, dbt, Spark, or similar) ~ Practical experience building or deploying AI agents or LLM-powered applications ~ Familiarity with knowledge graph... (Full time)
- ...ingestion, APIs, and serverless/data processing services (e.g., Lambda, Glue). Experience working with Databricks and Apache Spark for scalable data transformation, enrichment, and analytics workflows. Proficiency in Python, Java, or Scala, and strong SQL skills... (Full time, Remote work)
- ...Petabyte-level data systems. Experience with cloud-native data tools and architectures (e.g., Redshift, Glue, Airflow, Apache Spark). Proficient in automated testing frameworks (PyTest, Playwright, or Jest) and testing best practices. Experience developing... (Long term contract, Remote work)
- ...equivalent): AWS or Azure Data Platform Services, Postgres/Oracle/DB2, Collibra, Databricks, Delta Lake, Python, Snowflake, ETL tools (Spark, etc.), CI/CD pipelines supporting a Data Lakehouse ~ Expertise in real-time and batch data ingestion architectures (Kafka/Event... (Hourly pay, Contract work)
- ...resume and contact details. Core Technical Skills: SQL Server, SQL Server Integration Services, Azure Synapse, Spark, Microsoft Fabric. Required Skills & Experience: ~ Bachelor's or Master's degree in computer science, information... (Contract work, Work at office, Remote work)
- ...Preferred qualifications: Experience with streaming APIs like Kafka ~ Understanding of Big Data/Data Lake technologies (Spark, Hadoop, Databricks, etc.) ~ Understanding of design patterns and clean coding ~ Understanding of technical aspects of analytic applications... (Local area)
- ...ingestion, transformation, and data reliability ~ Real-time systems: Build streaming solutions using Kafka or Azure Event Hubs; use Spark Structured Streaming for high-volume data processing ~ API and integration: Lead integrations using Spring Boot, Dell Boomi, and...
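The real-time requirements in the listing above (Kafka plus Spark Structured Streaming) usually translate to a read-parse-sink pipeline. A minimal sketch, assuming the spark-sql-kafka connector is on the classpath (e.g. via --packages org.apache.spark:spark-sql-kafka-0-10_2.12); the broker, topic, schema, and paths are placeholders:

```python
# Sketch only: Kafka -> Spark Structured Streaming -> Delta/Parquet sink.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical payload schema for the JSON messages on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Subscribe to the topic; Kafka delivers the payload as bytes in the `value` column.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

events = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

# Continuously land the parsed stream; checkpointing gives restartable, exactly-once sinks.
query = (
    events.writeStream.format("delta")           # or "parquet" outside Databricks
    .option("checkpointLocation", "/chk/events")  # hypothetical checkpoint path
    .outputMode("append")
    .start("/curated/events")
)
query.awaitTermination()
```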
- ...generation, and intelligent automation. - Work with large structured and unstructured datasets using tools like Pandas, NumPy, and Spark. - Implement models in production environments using Python, TensorFlow, PyTorch, or scikit-learn. - Conduct exploratory...
$18 - $20 per hour
...traffic retail and event locations (specific venues discussed during the interview process) ~ Engage directly with potential customers to spark interest and set appointments for follow-up with the sales team ~ Conduct short, engaging product demonstrations to educate on... (Permanent employment, Full time, Part time, Local area, Flexible hours, Shift work)
- ...Responsibilities: Expertise in big data processing, Core Java, and Apache Spark, particularly within the finance domain. Should have strong experience working with financial instruments, market risk, and large-scale distributed computing systems. Develop...
- ..., improving database performance, and developing quality audits. Proficient with the Azure stack, including Synapse Analytics, Spark/Python, Azure SQL, Azure Data Factory (ADF), and Kusto, among others. Knowledgeable in ETL tools, particularly Azure Data Services,... (Contract work)
- ...related discipline ~ Minimum of 5+ years of relevant industry experience ~ Minimum 5 years of experience developing in the Hadoop ecosystem (Spark, PySpark, MapReduce, Hive, Impala) ~ Minimum 5 years of experience with common application frameworks (JEE, Spring Boot, Struts,... (Remote work)
- ...~ 7+ years of experience in Software Engineering ~ 4+ years building big data pipelines ~ 4+ years of experience with: Apache Spark (PySpark / Spark SQL), Hive and Iceberg tables, SQL / SQL Server or other RDBMS ~ Strong programming experience in: Python, PySpark... (Contract work, Remote work)
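Where a posting lists Apache Spark alongside Hive and Iceberg tables, much of the day-to-day work is Spark SQL over catalog tables. A small sketch, assuming the Iceberg Spark runtime is installed and a Hadoop-type catalog is configured; the catalog, warehouse path, and table names are illustrative:

```python
# Sketch only: Spark SQL over an Iceberg catalog; all names/paths are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-pipeline")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "/warehouse")  # hypothetical warehouse path
    .getOrCreate()
)

# The same Spark SQL works against Hive or Iceberg tables once the catalog is registered.
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM lake.sales.orders
    GROUP BY order_date
""")

# Write the aggregate back to an Iceberg table via the DataFrameWriterV2 API.
daily.writeTo("lake.sales.daily_revenue").createOrReplace()
```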
- ...scikit-learn, NLTK, Azure ML (optional), Amazon Web Services EC2. Experience with scalable data engineering frameworks such as Apache Spark and orchestration frameworks such as Airflow, and/or experience with semantic search. Expert knowledge in conducting data analysis... (Contract work, Work experience placement)