- Data Systems Engineer (AWS, Snowflake, Redshift, Python, Scala, Hadoop, Spark, Kafka, Hive, API Handling, API Development, Data Migration, Batch Data Pipelines) in Charlotte, NC. API Development, AWS, AWS Lambda, Data Migration, Hadoop, Java, Oracle, Snowflake, SQL Server... (Full time, Local area, Immediate start, Remote work)
- ...support, automation, anomaly detection, and pattern recognition. · Select and implement appropriate modeling techniques using Python, Spark, or cloud-native ML frameworks (e.g., SageMaker, MLflow). · Maintain reproducibility and interpretability of model outputs to meet... (Full time)
- ...engineering for analytics or ML systems. Strong SQL proficiency. Experience in Python, Scala, or Java. Hands-on experience with Spark, Kafka, and Airflow (or similar). Strong understanding of data modeling and lakehouse architectures (e.g., Iceberg)....
- ...Strong Python skills (NumPy, Pandas, Scikit-learn, PyTorch or TensorFlow). Experience with SQL and data engineering workflows (Spark or Airflow). Practical experience with LLMs, vector databases, embeddings, and RAG systems. Familiarity with production ML...
- ...Must be local to Reston, no relocation; on-site 3 days a week. Top 5 technical skills: Python (big data pipeline), AWS, Hadoop/Spark/Hive, EMR, Terraform. Job description: strong Python development to build a big-data pipeline for data processing and analysis... (Contract work, Work experience placement, Local area, 3 days per week)
- ...open table formats (e.g., Apache Iceberg). Develop batch and streaming pipelines using AWS services such as AWS Glue, Apache Spark, Kinesis, and Athena, and GCP services such as Dataproc, Dataflow, Pub/Sub, and BigQuery. Implement medallion architecture patterns... (Full time, Part time, Flexible hours)
- ...cleansing raw data into ML-ready datasets. Implement data transformation logic and algorithms using tools such as PySpark, Apache Spark, or similar frameworks to pre-process and clean data. Utilize cloud-based data warehouse solutions such as Amazon Redshift to... (Full time, Part time)
- ...Weaviate, Milvus). • Cloud platforms (AWS, Azure, GCP) and Kubernetes orchestration at enterprise scale. • Data engineering tools (Spark, Airflow, Kafka, Databricks, Snowflake). • Security, compliance, and data governance frameworks (RBAC, encryption, audit... (Work at office, Remote work, Flexible hours)
$75 per hour
...time-series datasets in distributed environments. Implement performance optimization strategies including partitioning, caching, Spark tuning, and file size optimization. Integrate AWS services (S3, IAM, Glue) with Databricks-based data and ML pipelines.... (Contract work)
- ...compliance validation, infrastructure cost analysis and optimization. Knowledge of analytics and applied ML stack such as Apache Spark, Trino/Pinot, Iceberg, Atlas, Flink, Airflow/Luigi, Tableau, Snowflake, Databricks, MLflow, Data Catalogs, Jupyter Notebooks, Vector...
- ...learning frameworks such as TensorFlow, PyTorch, or Scikit-learn. Experience with data processing tools like Pandas, NumPy, and Spark. Knowledge of the ML model lifecycle and MLOps. Experience with REST APIs and microservices. Familiarity with cloud...
- ...prune the images in private registries. You have hands-on experience with access control in K8s clusters. You have hands-on experience with Spark and maintaining Spark clusters. You monitor and troubleshoot issues related to Kubernetes clusters and containerized applications....
- ...warehouse and lakehouse architectures. Build robust ETL/ELT pipelines using tools such as Azure Data Factory, dbt, Databricks, and Spark. Develop and optimize complex SQL queries for high-performance data processing. Implement data ingestion frameworks for... (Work at office, Remote work, Flexible hours)
- ...solutions. Design and implement data ingestion pipelines from IoT devices and external data sources into big data ecosystems (Hadoop, Spark, Kafka, AWS/Google Cloud Platform/Azure). Ensure real-time data processing, event streaming, and batch processing for high-... (Remote work)
$160k - $190k
...University's 44-acre downtown campus, with its hundreds of educational and community programs, is one of the key urban institutions that will spark our city's revitalization. It is no exaggeration to say that our city's future and vitality depend on PSU students and alumni, more... (Full time, Work at office, Remote work, Monday to Friday)
- ...Kubernetes, APIM, Log Analytics), Cosmos DB (SQL/NoSQL), Synapse, Kafka, RDBMS & SQL, NoSQL concepts, CI/CD pipelines, build and deployment tools, Spark job setup/debugging, data processing and integrity, exposure to Hadoop, Spark Streaming, ETL frameworks, Python and shell scripting for... (Contract work)
- ...Experience with Spring Boot projects and the Spring Data, Spring Batch, and Spring Security frameworks. Good to have knowledge of Apache Kafka, Apache Spark, and the ActiveMQ broker. Experience in database design in Oracle and SQL Server. Experience or knowledge of creating CI/CD pipelines...
- ..., and in coordination with C-suite leadership, this role demands a professional who excels in nurturing influencer relationships and sparking engagement through webinars, blogs, social media, and digital as well as in-person events. Key responsibilities: Develop and... (Full time, Flexible hours)
- ...AMERICA : USA : NV-Las Vegas || NORTH AMERICA : USA : NV-North Las Vegas || NORTH AMERICA : USA : NV-Reno || NORTH AMERICA : USA : NV-Sparks || NORTH AMERICA : USA : OH-Akron || NORTH AMERICA : USA : OH-Bowling Green || NORTH AMERICA : USA : OH-Canton || NORTH AMERICA :... (Full time, Casual work, Local area, Remote work)
- ...Warehouses, OneLake, and Semantic Models. Define ingestion and transformation patterns using Fabric Data Pipelines, Dataflows Gen2, and Spark. Architect analytics and reporting solutions with Power BI and Fabric Semantic Models. Establish best practices for... (Contract work, Remote work)
- ...Qualifications: 7+ years of experience in data engineering, with deep hands-on Databricks experience. Strong expertise in Apache Spark (PySpark/Scala), Delta Lake, and Databricks Workflows & Jobs. Experience preparing data platforms for AI/ML ingestion...
- ...engineering teams on ways to build pipelines and optimize existing ones. Usage monitoring and troubleshooting of bottlenecks in Spark jobs, ETL pipelines, and ML workloads. Scripting experience in Python and SQL. Data sharing knowledge and implementing guardrails...
- ...Experience working toward design, architecture, development, and operationalization of data flows across the Hadoop ecosystem, Spark (Databricks or otherwise), Snowflake, and cloud platform(s). Understanding of the applied Machine Learning (end-to-end) lifecycle and...
- ...product data sheets, white papers, solution briefs, competitive briefs and battle cards, presentations, videos, and web content. Spark growth through market analysis and insights. You'll conduct thorough market analysis to guide product development and marketing strategies... (Full time, Work experience placement, Flexible hours)
- ...formats (Delta/Parquet), partitioning, and performance tuning in Fabric. Develop robust data transformation logic using PySpark/Spark SQL and SQL-based transformations. Perform data cleansing, standardization, enrichment, and deduplication across multiple...
$90k - $110k
...all feel welcomed, valued and empowered to achieve our full potential is important to who we are today and where we're headed in the future. And we know that unique backgrounds, experiences and perspectives help us think bigger, spark innovation and succeed together.... (Full time, Remote work)
- ...only locals; in-person client interview required. JD skills: Python (1st priority), PySpark (1st priority), Java Spark (2nd priority). Data engineer experienced in developing pipelines using Python, PySpark, and Java Spark in an AWS environment. Should know how... (Local area)
- ...Looker) and self-service analytics. Exposure to DevOps practices for data pipelines. Familiarity with big data frameworks (Spark, Hadoop) and AI/ML integration. Previous experience in cross-functional, global teams. Education: Bachelor's degree in...
- A leading technology company located in Indianapolis is seeking an experienced engineer to architect and operationalize Spark at scale for its data fabric. You will build resilient infrastructure and establish patterns for high availability. The ideal candidate has extensive...
- .../Solr, Cassandra, or other NoSQL storage systems. · Exhibit a strong understanding of data integration technologies, encompassing Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Database Migration Service, Azure Data Factory, and Google Dataproc. · Showcase professional... (Work at office, Remote work, Flexible hours)