Average salary: $110,109 per year
- ...Minimum 5 years' experience in the field of data engineering involving analytics-focused data warehouse environments such as AWS, Snowflake, Hadoop, Oracle, etc. Minimum 2 years' working experience in AWS utilizing services such as S3, AWS CLI, and DynamoDB. Deep working...
  Work experience placement
$96.8k - $251.6k
- ...Deep expertise in big data processing, stream and batch pipelines, unstructured and structured storage, and technologies like Hadoop, Spark, and Kafka. Demonstrated technical ownership of large-scale systems with a focus on performance, scalability, and maintainability...
  Temporary work, Flexible hours
  $79.1k - $158.2k
- ...multiple products and teams. You will work on the design and operation of large-scale, stateful distributed platforms, including Hadoop ecosystem components (HDFS, YARN, HBase) deployed on Oracle Big Data Service (BDS), Kafka, and Storm. These multi-tenant platforms...
  Temporary work, Immediate start, Flexible hours
  $96.8k - $223.4k
- ...Oracle Cloud Infrastructure Big Data Service (BDS) and Big Data Appliance (BDA). Proficiency in big data technologies such as Hadoop, Spark, Kafka, and NoSQL. Design and implement scalable, secure, and efficient complex big data architectures. Manage...
  Temporary work, Flexible hours
- ...design, development, & testing • Design & development of applications in Java/J2EE/Python/Spring Boot/PCF/Unix/Power BI/Cassandra/Kafka/Hadoop • Experience in cloud/edge hosting of services & apps; interoperability of apps and services between cloud & data centers • Designing...
  For contractors, Local area, Remote work
$72.7k
- ...experience in Lean/Six Sigma; ~1 year of experience in the health insurance or healthcare industry; experience with SQL, Python, Spark, and Hadoop. Skills: SQL, Python, Spark, Hadoop. Education required: Bachelor's degree in Business Administration, Business...
  For contractors, Work at office, Local area, Remote work
  $99k - $180k
- ...Understanding of and/or proficiency with tools for analyzing large data sources with computationally intensive steps (e.g., SQL, parallelization, Hadoop, Spark) and producing interactive outputs (e.g., Shiny, Tableau). Understanding of and/or experience with SDTM implementation,...
  Remote work
  $77k - $202k
- ..., Vault data model, graphs, star & snowflake schemas); applying knowledge and relevant work experience in big data engineering (Hadoop, Spark, Scala, Kafka) and ETL pipeline development tools (IICS/AWS Glue/Matillion/Ab Initio/SSIS/SnapLogic); preferably in P&...
  Full time, Work experience placement, H-1B
