- ...LLM-powered financial applications. You will be responsible for developing scalable, low-latency ingestion systems that handle millions of... ...financial documents. Requirements: Python, LLM. Knowing Spark and Databricks is a big plus. TypeScript is also nice. Experience...
- Hadoop Application Developer - Permanent Position Our client is a leading manufacturer and supplier of metal components for the automotive industry, recently ranked 13th of the 150 Top North American Suppliers in “Automotive News.” A growing and vital company, client provides...
- Senior Database Engineer (Elastic/Mongo/Hadoop) Workplace Type: Remote - Region: San... ...instances with TLS/SSL and/or LDAP • Candidate must also be able to develop automated solutions for ad-hoc script execution requests, ad-...
Senior Database Engineer (Elastic/Mongo/Hadoop) (San Francisco) at CatchProbe Intelligence Techn[...]
Overview Senior Database Engineer (Elastic/Mongo/Hadoop) - San Francisco, CA. Workplace Type: Remote - Region: San Francisco, CA. Job... ...SHA1, X509, LDAP) including TLS/SSL and/or LDAP reconfiguration. Develop automated solutions for ad-hoc script execution, ad-hoc report...
$155k - $215k
- ...more exclusive features. At Imply, we are on a mission to help developers unleash the power of real-time analytics. Our unique database... ...of large-scale distributed systems and databases such as Hadoop, Spark, Presto, ElasticSearch. A history of open-source contributions...
$194 per hour
- ...infrastructure is reliable, scalable, and developer-friendly as we continue to power... ...proficiency in Python and SQL. Hands-on Spark development experience. Expertise with modern... ...data-infrastructure technologies such as Hadoop, Hive, Kafka (or similar streaming platforms...
- ...AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications... ...computing technologies such as Apache Spark, Ray, etc. for data exploration and ML... ...computing technologies (e.g., Hadoop, Spark, Kafka). Familiarity with managed...
$170k - $215k
- ...data architectures. Build and maintain scalable data pipelines using technologies such as GCP (or AWS), BigQuery, Python, and Spark. Develop and manage service-oriented and event-driven architectures, utilizing tools like Pub/Sub and Kafka. Collaborate with machine learning...
$180k
- ...company. We own and manage core systems including Apache Kafka, HDFS, Spark, Flink, and Trino, enabling real-time ML pipelines, feed ranking... ...Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production. Strong debugging, profiling, and performance optimization...
- ...design patterns and OO design principles • Experience with Java, Scala, Python. • Experience with either distributed computing (Hadoop/Spark/Cloud) or parallel processing (CUDA/threads/MPI) • Expertise in design pattern (UML diagrams) and data modeling of large scale analytic...
$161.79k - $240.12k
- ...explore how familiar experiences can be enhanced and innovations developed through the integration of leading-edge technology, science,... ...Kubernetes). Experience with big data technologies (e.g., Hadoop, Spark) and data warehousing solutions. Strong ability to...
- ...on some of the most exciting big data open source technologies (Spark, Kafka, Kubernetes, etc.), at the scale of hundreds of petabytes... ...Spark ~ Deep knowledge of big data technologies (e.g. Spark, Hadoop, Parquet/ORC, Flink) ~ Experience in leading cross-team engineering...
$130.6k - $192k
- ...orchestration and tracking frameworks such as Metaflow, MLflow, Dagster, or Airflow. Large-Scale Data Processing - Knowledge of Spark, Hadoop, or other distributed data processing technologies. Monitoring & Observability - Proficiency with metrics and alerting solutions...
$289.46k - $338.27k
- ...all created by our global community of developers and creators. At Roblox, we’re building the... ...data processing systems. Work with Spark and Kubernetes - the engines upon which we... ...processing technologies such as Spark, Hadoop, Hive, Beam, or Flink. Drive to learn new...
- ...high level of technical productivity, reliability, and simplicity by developing in Java and leveraging tools/frameworks like Docker, Kubernetes, Terraform, Java, Gradle, Jenkins, GCP, Hadoop and Spark, SQL and err on the side of shipping fast and iterating. Your team will...
- ...End Data Engineer LOCATION: San Bruno, CA DURATION: 6 to 12+ months RATE: DOE Duties: Looking for a backend data engineer with Apache Spark, Hive and Airflow knowledge. Must-haves: Excellent knowledge with SQL and Hive (HiveQL) Ability to build data pipelines using...
- ...project. They are also software engineers who use Python to develop Kubernetes operators and Linux open source infrastructure-as-... ...cloud infrastructure solutions like OpenStack, Kubernetes, Ceph, Hadoop and Spark either On-Premises or in Public Cloud (AWS, Azure, Google...
- ...immediate past two years. Job Description Job Title: Sr. Java Developer Location: San Francisco, CA Duration: 6-12 Months Contract... ...or Go; experience working with Kafka; experience working with Spark, Hadoop, Storm or any real-time stream analysis platform Qualifications...
$138.9k - $186.2k
- ...& Technology is a global organization of engineers, product developers, designers, technologists, data scientists, and more - all working... ...data engineering, SQL, and big data technologies (e.g., Spark, Hadoop). Testing Experience: Proven experience in automated testing...
- ...object-oriented programming and design skills in Python or Java; Hands-on experience with big-data technologies such as SQL, Hive, Hadoop and Spark; Experience with - or a strong desire to learn - statistical modeling and machine learning techniques; Excellent communication...
- ...unparalleled technical expertise from a distinguished team of developers with an extensive understanding of the banking and payments... ...Utilize open-source distributed data processing frameworks (e.g., Hadoop, Spark) to handle large-scale data transformations and batch...
$157k - $230k
- ...enthusiastic about building new technologies. Responsibilities Design, develop, and support a petabyte-scale cloud database that is highly... ...systems, and large-scale data processing solutions like Hadoop and Spark. Large scale distributed systems, transactions and...
- ...complex challenges. About the Role Our client is seeking a Python Developer to develop and optimize applications that support machine... ...experience with big data processing technologies, including Spark, Hadoop, or Databricks. Proficiency in cloud platforms (AWS, GCP, or...
- ...conceptual model for a data domain inside the central data platform. Develop and maintain data schemas and models for products within... ...in Cloud (GCP, AWS, ..), SQL and NoSQL databases, Java, Hadoop, Python, Spark, Data Lakes, Data Warehouses, and Data Marts preferred...
- ...in the fight against heart disease. What You’ll Do Develop robust ETL (Extract, Transform, Load) processes to... ...with distributed computing frameworks (e.g., Ray, Spark, Dask) and supporting infrastructure (e.g., Hadoop, Docker, Kubernetes). Experience with at least one...
- ...problems in data systems & ML infrastructure. You will: Develop robust ETL (Extract, Transform, Load) processes to... ...with distributed computing frameworks (e.g. Ray/Spark/Dask) and supporting infrastructure (e.g. Hadoop, Docker, Kubernetes). Competency with at least one...
$178k - $195.8k
- ...and serve customers across North America. Responsibilities Develop and maintain the data infrastructure and services that are... ...data processing. Familiarity with big data frameworks (Hadoop, BigQuery, Dask, Spark, Kafka, etc.). What We Value We work in the service of others...
$140k - $200k
- ...Qualifications: 5+ years of professional software development experience with a focus on big data technologies Experience with Hadoop, Spark, Hive, and/or other big data technologies Comprehensive computer science fundamentals in coding, object-oriented programming, data...
$200k - $250k
- ...modern and reliable tools and technologies. SedonaDB and Apache Spark are built in Java and Scala, with deeply optimized Python wrappers... ...-source projects such as Apache Spark, Apache Iceberg, Apache Hadoop, Apache Hive, DeltaLake, or Trino, and has a passion for open-source...
$192k - $260k
- ...2013 by the original creators of Apache Spark, Databricks has grown from a tiny corner... ...customers successful on our platform. We develop and operate one of the largest scale software... ...and big data systems (Apache Spark, Hadoop). Pay Range Transparency Databricks is committed...