Average salary: $130,000 / year
- ...Preferred qualifications: Experience with MLOps tools (e.g., MLflow, Kubeflow). Knowledge of big data technologies (Spark, Hadoop, Databricks). Background in NLP, computer vision, or other advanced AI techniques. Relevant certifications (Coursera, edX, AWS, Azure, Google...
- ...of experience building data pipelines in cloud environments ~4+ years of experience with Big Data technologies (e.g., Spark, Hadoop) and cloud architecture ~3+ years of experience with reporting and analytics tools (e.g., Tableau, Power BI) ~ Hands-on... [Long term contract]
- ...solutions in virtualized environments. ~ Experience in one or more of the following: Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or SRE. ~ Networking - Experience delivering end-to-end networking designs and architectures that include DNS,... [Contract work, Traineeship, Remote work]
- ...Must be local to Reston, NO RELO - onsite 3 days a week. Top 5 technical skills: Python (big data pipelines), AWS, Hadoop, Spark, Hive, EMR, Terraform. Job description: Strong Python development to build a big-data pipeline for data processing and analysis. Need... [Contract work, Work experience placement, Local area, 3 days per week]
- ...Airflow) Knowledge of real-time inference and batch processing systems Familiarity with big data technologies (Spark, Kafka, Hadoop) Experience with containerization (Docker, Kubernetes) Background in applied AI domains (NLP, computer vision,... [Remote work]
- ...Preferred qualifications Experience with streaming APIs like Kafka Understanding of Big Data/Data Lake technologies (Spark, Hadoop, Databricks, etc.) Understanding of design patterns and clean coding Understanding of technical aspects of Analytic... [Local area]
- ...~ Hands-on experience with AWS services (S3, Glue, Redshift, Lambda, EMR) ~ Experience with big data tools like Apache Spark, Hadoop, or Kafka ~ Knowledge of Machine Learning concepts and model lifecycle ~ Experience with ETL/ELT frameworks and data pipeline... [Remote work]
- ...SQL; relational (PostgreSQL, MySQL) and NoSQL (MongoDB) experience Cloud & Big Data: AWS/Azure/Google Cloud Platform, Spark, Hadoop, scalable storage (S3, Blob, HDFS) Ready to apply? Take the next step in your data engineering career with this exciting opportunity... [Contract work]
- ...or technically related discipline Minimum of 5+ years of relevant industry experience Minimum 5 years of experience developing in the Hadoop ecosystem (Spark, PySpark, MapReduce, Hive, Impala) Minimum 5 years of experience with common application frameworks (JEE Spring... [Remote work]
- ...Develop and optimize ETL/ELT processes, data flows, and infrastructure leveraging AWS, SQL, and platforms such as Databricks, Redshift, Hadoop, and Airflow. Assemble and manage large, complex datasets to meet functional and non-functional business requirements.... [Long term contract, Remote work]
- ...Familiarity with market risk concepts including VaR, Greeks, scenario analysis, and stress testing. ~ Hands-on experience with Hadoop, Spark. ~ Proficiency with Git, Jenkins, and CI/CD pipelines. ~ Excellent problem-solving skills and strong mathematical and analytical...
- ...amounts of real-world data. Experience retrieving and manipulating data from a variety of data sources including DB2, Oracle, SQL Server, Hadoop, and flat files. Experience with database management systems (e.g., PostgreSQL, MySQL, SQLite, etc.) Excellent analytical... [Contract work, Work experience placement]
- ...Senior Software Engineer experience in the Hadoop ecosystem (Spark, PySpark, MapReduce, Hive, Impala). Strong background in application frameworks such as JEE, Spring Boot, Struts, and Hibernate. Proficiency with relational databases (MS SQL Server preferred, Oracle,...Suggested
- ...platforms. The ideal candidate will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop and Spark. Responsibilities: Design, develop, and maintain large-scale data processing pipelines using Big Data technologies... [Work experience placement, Local area, 3 days per week]
- ...and maintain data processing scripts using Python and PySpark. Integrate DataStage workflows with big data environments such as Hadoop/Spark ecosystems. Work with large datasets from multiple banking systems including transactional and regulatory data....
- ...Develop the most robust and extensible solutions possible from feature requests Work with big data technology (Kafka, Hadoop, Spark, etc.) Work with Data Scientists to develop rich value-added features Work with DBAs to create ETL and Data Warehouse... [Remote work]
- ...Experience with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). Strong analytical skills and the ability to work with complex datasets.... [Remote work]
- ...Spring Batch Strong experience with Google Cloud Platform (GKE, BigQuery, Cloud Storage, IAM) Hands-on with Dataproc (Spark/Hadoop) and Composer (Airflow) Strong experience in Angular Expertise in distributed systems, scalability, and system design... [Remote work]
- ...~ Solid Angular frontend experience ~ Hands-on AWS experience (Lambda, ECS, S3, RDS, EMR) ~ Big Data experience is required (Hadoop, Spark, Presto, or EMR) ~ Strong SQL skills with performance tuning ~ Experience with scripting (Python, Unix shell, Groovy,... [Contract work]
$65.05 per hour
- ...a quantitative discipline. Preferred skills: Background in enterprise stress testing. Experience with Elastic, Hadoop, Teradata, or other distributed big data ecosystems. Knowledge of cloud or distributed computing environments. Prior work... [Contract work]
- ...Foundation: Strong understanding of statistics, linear algebra, and calculus as applied to ML. Big Data: Experience with Spark, Hadoop, or similar distributed computing frameworks. Cloud Platforms: Familiarity with AI services on AWS, Azure, or Google Cloud...
- ...processes and data pipelines Knowledge of cloud platforms (AWS, Azure, or GCP) Familiarity with big data tools (Spark, Hadoop) Experience in a specific domain (finance, healthcare, e-commerce, marketing, etc.) What we offer: Opportunity to work...
- ...team today, while setting you up with the mentorship you need to grow your skill set moving forward. Our clients' projects include Hadoop (both open-source and Cloudera Enterprise packaging) and Cassandra databases operating in clusters spanning dozens of servers and...
- ...the value of technology and build a more sustainable, more inclusive world. Job Description: We are looking for an experienced Hadoop Administrator (MapR) to manage and support our production-grade MapR Hadoop clusters. The ideal candidate will have hands-on... [Permanent employment, Full time, Local area]
- ...This role requires a strong background in programming languages like Java and C++, as well as familiarity with Teradata Aster and Hadoop platforms. Candidates with a BA/BS in Computer Science and experience in data mining will excel in this fast-paced environment. Join...
- ...Hadoop/ETL developer. Location: Plano, TX / Charlotte, NC / Kennesaw, GA (3 days onsite / 2 days remote). Contract. Required qualifications: • Strong working knowledge of ETL, database technologies, big data, and data processing skills • 3+ years of experience... [Contract work, Remote work, Flexible hours, Weekend work, Weekday work]
