Average salary: $110,109 per year
- ...Minimum 5 years’ experience in the field of data engineering involving analytics-focused data warehouse environments such as AWS, Snowflake, Hadoop, Oracle, etc. Minimum 2 years’ working experience in AWS utilizing services such as S3, AWS CLI, and DynamoDB. Deep working...
- ...work with the business owners to fix the issues. Ability to work in different database technologies including Google Cloud Platform, Hadoop, etc. Experience building visualizations using QlikSense or other visualization tools. Ability to write sophisticated queries...
- Job Summary: Primary and secondary skill set: GCP (BigQuery, Dataflow, Data Fusion, Dataproc), Hadoop ecosystem. Roles/Responsibilities: Work as part of an implementation team from concept to operations, providing deep technical subject matter expertise for successful...
- ...experience designing and building large-scale data, machine learning, and analytics applications and pipelines. Proficiency with Spark, Hadoop, Hive, and SQL. 2+ years of industry experience in the analytics, ML, or data science field. 2+ years of industry experience with...
- ...development and/or Tableau reporting development. Desired skills: Experience with the following database technologies: Postgres, Redshift, Hadoop, and/or Oracle. Payer domain knowledge of Provider, Billing, and/or Care Management. Solid working knowledge of Microsoft tools and...
- ...Technical Business Analyst. Location: Dearborn, MI. Duration: Long Term. Job Description: Responsible for production support of the current Hadoop platform in terms of: Data Acquisition - overall knowledge of data acquisition processes and methods; Data Ingestion - development...
- ...testing. Responsibilities: Design and development of applications in Java/J2EE/Python/Spring Boot/PCF/Unix/Power BI/Cassandra/Kafka/Hadoop. Experience in cloud/edge hosting of services and apps. Interoperability of apps and services between cloud and data centers. Top...
$120k - $150k
...Supermicro® is a Top Tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC, and IoT/Embedded customers worldwide. We are the #5 fastest-growing company among the Silicon Valley Top...
- ...explorer, Azure Data Factory. Data warehouse solutions: Redshift, Snowflake, Postgres, Data Lake. Big Data technologies: Azure, AWS, Hadoop, Spark, Hive, Kafka, Flume, NoSQL stores (HBase, Cassandra, DynamoDB, MongoDB). Cloud storage: S3, GCS, ADLS, Blob. Machine...
- ...databases such as MySQL, PostgreSQL, SQL Server, Oracle, DB2, or MongoDB. Exposure to analytics or data tools such as SPSS, SAS, Tableau, Hadoop, or Spark, depending on practice needs. Experience collaborating with others on team-based software projects. Emerging Technology...
- ...databases, such as Postgres, Oracle, MySQL, Cassandra, MongoDB. Experience working with big data and streaming technologies, such as Hadoop, Spark, Kafka, and related tooling. Experience designing and delivering Software-Defined Vehicle (SDV) services. Proficiency...
- ...Liferay, CoreOS, or other CMS/portal technologies. Experience in the Automotive or Customer Analytics domain is a plus. Exposure to the Hadoop ecosystem and big data technologies. Understanding of design patterns and system optimization techniques. Certifications: Java...
$140k - $160k
...Python / Scala / Java / C#) Minimum 3 years with Big Data and Big Data tools in one or more of the following: batch processing (e.g. Hadoop distributions, Spark), real-time processing (e.g. Kafka, Flink/Spark Streaming). Minimum of 2 years' experience with AWS or...
- ...Computer Science, Information Systems, Data Engineering, or a related field. Strong proficiency in big data technologies: Apache Spark, Hadoop, Hive, Kafka, Delta Lake, or equivalent. Hands-on experience with cloud data platforms: AWS (S3, Glue, EMR), Azure (Data Lake,...
$96.8k - $251.6k
...Deep expertise in big data processing, stream and batch pipelines, unstructured and structured storage, and technologies like Hadoop, Spark, and Kafka. Demonstrated technical ownership of large-scale systems with a focus on performance, scalability, and maintainability...
$79.1k - $158.2k
...multiple products and teams. You will work on the design and operation of large-scale, stateful distributed platforms, including Hadoop ecosystem components (HDFS, YARN, HBase) deployed on Oracle Big Data Service (BDS), Kafka, and Storm. These multi-tenant platforms...
$96.8k - $223.4k
...Oracle Cloud Infrastructure Big Data Service (BDS) and Big Data Appliance (BDA). Proficiency in big data technologies such as Hadoop, Spark, Kafka, and NoSQL. Design and implement scalable, secure, and efficient complex big data architectures. Manage...
- ...design, development, & testing • Design & development of applications in Java/J2EE/Python/Spring Boot/PCF/Unix/Power BI/Cassandra/Kafka/Hadoop • Experience in cloud/edge hosting of services & apps; interoperability of apps and services between cloud & data centers • Designing...
$72.7k
...experience in Lean/Six Sigma. 1 year of experience in the Health Insurance or Healthcare industry. Experience with SQL, Python, Spark, and Hadoop. Skills: SQL, Python, Spark, Hadoop. Education (required): Bachelor's degree in Business Administration, Business...
$99k - $180k
...Understanding of and/or proficiency with tools for analyzing large data sources with computationally intensive steps (e.g., SQL, parallelization, Hadoop, Spark) and producing interactive outputs (e.g., Shiny, Tableau). Understanding of and/or experience with SDTM implementation,...
$77k - $202k
..., Vault data model, graphs, star & snowflake schemas); Applying knowledge and relevant work experience in big data engineering (Hadoop, Spark, Scala, Kafka) and ETL pipeline development tools (IICS/AWS Glue/Matillion/Abinitio SSIS/SnapLogic); preferably in P&...
