- ...Role: Lead Hadoop Admin/Developer. Location: Foster City, CA (Onsite - Hybrid). Job Description: Basic to intermediate experience with Spark; good experience with SQL. Should be able to understand and implement performance optimization.... · Suggested
- ...professional in Palo Alto, California. The role involves managing Hadoop clusters and working with clustered services and distributed... ...systems, and expertise in the Hadoop ecosystem, including Hive and Spark. The ability to work in diverse, global teams and strong... · Suggested
- ...expertise in data engineering. In this role, you will design and implement scalable data solutions, working with technologies like Spark, Python, and cloud platforms such as AWS or GCP. The ideal candidate has over 5 years of experience in back-end development and strong... · Suggested · Remote work
- ...Hadoop Developer. Foster City, CA (Day 1 onsite). No. of Positions: 2. Job Description: Note: Experience in Hadoop (HDFS, Hive, Sqoop, Oozie, MapReduce) and Spark Scala is needed. Essential Job Functions: • Responsible for applying... · Suggested · Work experience placement
- ...design and maintain data pipelines and back-end systems. The ideal candidate will have over 5 years of experience, deep expertise in Spark, Python, and SQL, and will work closely with data scientists to create effective data-driven solutions. This position is based in... · Suggested · Hourly pay · Contract work
- ...End Engineer to design and maintain scalable data pipelines and back-end systems. You'll utilize tools such as Spark, Python, and cloud technologies to develop impactful data-driven solutions, collaborating with data scientists and engineers. Candidates should possess... · Suggested
- ...growing team. In this role, you will be responsible for designing, developing, and maintaining scalable and reliable data pipelines and back-... ...Design, develop, and maintain data pipelines using Spark, Python, Scala, and Java. Write efficient and optimized SQL queries... · Suggested · Hourly pay · Permanent employment · Contract work · Local area
- ...data engineering to design and maintain data pipelines and back-end systems. The successful candidate will work with technologies like Spark, Python, and AWS. Responsibilities include writing SQL for ETL processes, implementing data solutions, and collaborating with... · Suggested
- A tech company is looking for a Senior Software Engineer with 8 to 11 years of experience, focusing on Hadoop management and automation. You will be responsible for setting up and managing Hadoop clusters, providing 24x7 support, and performance tuning. Candidates should... · Suggested
- ...a skilled Back-End Engineer to design and maintain data pipelines and back-end systems. Candidates should have strong experience in Spark, Python, and SQL, with a focus on cloud technologies like AWS and GCP. This position requires collaboration with data scientists for... · Suggested · Hourly pay
- ...leading IT consulting firm in San Francisco is seeking a Senior Core Data Engineer with strong Java programming skills and experience in Hadoop MapReduce and NoSQL databases like HBase or Cassandra. The role involves ingesting large datasets, processing them, and serving the... · Suggested
- ...Engineer in San Jose, CA. The role requires over 8 years of experience, especially in Java, Python, and Scala, alongside strong skills in Spark and Machine Learning. Key responsibilities include gathering and processing large datasets, collaborating with engineering teams,... · Suggested
- Senior Database Engineer (Elastic/Mongo/Hadoop). Workplace Type: Remote. Region: San... ...instances with TLS/SSL and/or LDAP. • Candidate must also be able to develop automated solutions for ad-hoc script execution requests, ad-... · Suggested · Remote work
$79.2k - $178.1k
- ...: Strong data engineering, HPC, and data science experience. Spark, PySpark, Delta Lake, Parquet, Feature Extraction, MLOps, Flink,... ...Appliance (BDA). Proficiency in big data technologies such as Hadoop, Spark, Kafka, and NoSQL. Design and implement scalable,... · Suggested · Temporary work · Flexible hours
- ...Demonstrated programming proficiency and experience with 1 of the following (Java, C++, Python, Scala, R, SAS, SQL, Hadoop, HTML, SPSS, VBA, Tableau, Spark, Angular, ReactJS), as well as systems design/development, and information technology course work. Exposure to... · Suggested · Internship · Worldwide
- ...Professional experience with Java/Scala based analytics platforms (Hadoop, Spark, Hive, Trino, etc.). Professional experience with Splunk,... ...the ability to manage work to tight deadlines. Experience developing solid and scalable technical solutions to business problems....
- ...fraudsters in real-time. The platform team is responsible for developing the architecture that makes real-time UML possible. We are... ...time systems and features Use big data technologies (e.g. Spark, Hadoop, HBase, Cassandra) to build large scale machine learning pipelines...
$124.7k - $208.85k
- ...and 2 years in data engineering, possess strong problem-solving skills, and have expertise in Big Data technologies like AWS, Spark, and Hadoop. The company values diversity and collaboration. Salary range is $124,700 - $208,850 annually. Poshmark, Inc.
$190k - $260k
- ...large-scale distributed systems. Professional experience developing in Java, C++, or Go. Practical knowledge of... ...distributed storage, caching, or data-access layers (e.g., Spark, Presto, Hadoop, Kubernetes). Bachelor’s or advanced degree in Computer... · Full time
- ...Engineers starting in Summer 2026, lasting 12 weeks. Interns will gain hands-on experience with big data technologies such as Hadoop and Spark, while working on impactful projects within the Data Platform E-Commerce team. This internship supports professional growth and... · Summer work · Internship · Remote work
- ...projects focused on big data technologies and collaborative software development. Candidates should have extensive experience with Spark, Hadoop, and cloud platforms such as GCP or AWS. The position offers flexibility with remote work options and a comprehensive benefits... · Remote work
- ...architecture for Walmart Omni platforms. Design/develop applications in NoSQL databases such as Cassandra,... ...platforms using Hortonworks, Cloudera CDH, and Apache Hadoop as data storage and processing systems, using the Apache Spark and Presto analytics engines. Directs root cause analysis... · Work at office · Local area · 1 day per week
$181.1k - $272.1k
- ...Preferred Qualifications: Experience with Swift and a passion for developing high-quality applications across Apple’s diverse platforms (... ....). Experience with large-scale distributed systems (e.g., Spark, Hadoop, Kafka, Kubernetes). Experience administering and optimizing... · Relocation
$102.4k - $181.2k
- ...issues related to Databricks products, including Spark core internals, Spark SQL, Delta, DLT, and Model... ..., Fivetran). 5 years of hands-on experience in developing two or more Big Data technologies, such as Spark & Hadoop, and Lakehouse architecture, such as Delta, Data... · For contractors · Work at office · Local area · Worldwide
- Job Title: Spark Developer (Search Integration). Position Type: Contract. Location: Pleasanton, CA (Onsite - Hybrid). Looking for resources who can work 3 days hybrid onsite in Pleasanton, CA. Note: We are looking for a Spark Developer with OpenSearch/Algolia expertise... · Contract work
$216k - $324k
- ...Django or similar), with experience using big data tools such as Spark/Hadoop and ORMs like SQLAlchemy/Alembic. Production-grade cloud... ...before. We see limitless potential for the technology we're developing to nurture personalized experiences in ecommerce and beyond.... · Work at office · Local area
$120k - $300k
- ...Data Engineers and Analysts in designing, developing, and delivering scalable data/software... ..., Vertica, Oracle) and tools like Kafka, Spark (Python/Scala); hands-on proficiency in Python... ...distributed frameworks (e.g., Spark, Hadoop). Ability to convert business needs into... · Hourly pay · Full time · Temporary work · Flexible hours
$197k - $266.5k
- ...development experience, with work experience in developing DB schemas, creating ETLs, and familiarity with MPP/Hadoop systems. Advanced experience with scripting language... ...stack of technologies, mainly Hive and Hive on Spark. Big Data: designing, building, and maintaining... · Work experience placement · Shift work
- Mega Cloud Lab is seeking a Data Lead Engineer for onsite work in Fremont, CA. Candidates must have over 12 years of experience in enterprise data platforms with strong skills in Azure Data Factory, Databricks, and PySpark among other technologies. Responsibilities include...
$81.4k - $151.8k
- ...testing and data warehousing concepts. Hands-on experience with SQL, NoSQL, Python, Spark, Scala. Experience with SSIS, Glue, cloud storage and computing services (e.g. AWS, Azure), Hadoop, Netezza. Experience with structured, semi-structured, and unstructured datasets in... · Part time · Local area

