$16 per hour
...change fields or find a new opportunity in the warehouse industry? Look no further and apply today! Location: 46 Isidor Ct., Sparks, NV. Pay: $16.00/hour to start. Shift: Monday-Friday, 7:00am-4:00pm. Benefits: ~ Potential Monthly Departmental...
- ...managing user access, roles, and permissions using IAM, SCIM, and role-based access control (RBAC): 8 years required. Proficiency with Apache Spark concepts, including performance tuning and troubleshooting: 8 years required. Experience integrating Databricks with cloud storage services...
- ...engineering are crucial, along with knowledge of SQL and NoSQL databases. Big Data Technologies: Familiarity with platforms like Apache Spark and OpenSearch is often necessary for handling large-scale data. Mathematics and Statistics: A strong foundation in linear algebra,...
- ...tracking, model packaging, and deployment. Advanced experience with PySpark and distributed data processing. Experience with AWS EMR for Spark cluster management and large-scale data transformations. Solid understanding of MLOps concepts: CI/CD for ML, feature stores,...
- ...of experience designing, implementing, and maintaining Python, microservice, Spring Boot, Spring Cloud, RESTful services, Kafka, Swagger, Spark, MongoDB, Flink, Ab Initio, Docker, and Kubernetes technologies. Strategize and design data architecture for specific business problems...
- ...Experience with NoSQL databases such as MongoDB or DocumentDB. ~ Experience with Kafka or similar message frameworks such as RabbitMQ or Spark. ~ Experience with CI/CD pipelines such as GitLab, Jenkins, or Harness. ~ Written and verbal communication skills. ~...
- ...or Scala for ML development. Knowledge of cloud-based ML platforms (AWS, Azure, GCP). Experience with big data processing (Spark, Hadoop, or Dask). Ability to scale ML models from prototypes to production. Strong analytical and problem-solving skills....
- ...Preferred: Experience working with data pipelines integrated with machine learning models. Exposure to big data technologies such as Spark, Kafka, Flink, or similar. Familiarity with supervised machine learning, labelled datasets, and feedback loop systems. Experience...
- ...specifically AWS. Preferred qualifications: Experience with MLOps tools (e.g., MLflow, Kubeflow). Knowledge of big data technologies (Spark, Hadoop, Databricks). Background in NLP, computer vision, or other advanced AI techniques. Relevant certifications (Coursera, edX,...
- ...Google Cloud Platform) Knowledge of MLOps tools (Docker, Kubernetes, CI/CD pipelines) Familiarity with SQL and big data tools (Spark, Hadoop) Qualifications: Bachelor's or master's degree in computer science, AI, Data Science, or related field 5-6 years of...
- ...ingestion, transformation, and data reliability. Real-time systems: Build streaming solutions using Kafka or Azure Event Hubs; use Spark Structured Streaming for high-volume data processing. API and integration: Lead integrations using Spring Boot, Dell Boomi, and...
- ...Years of Exp.: 8+ years. Skills: 8+ years of AI/ML, ML algorithms, big data, SQL, testing, tuning, Python, Java or R, ML frameworks (Spark, MATLAB, Databricks, TensorFlow, or scikit-learn), chatbots, NLP, image/data classification, sentiment analysis, regression analysis...
- ...preprocessing, and feature engineering ~ Experience with SQL and/or NoSQL databases ~ Familiarity with big data tools such as Apache Spark and OpenSearch ~ Solid foundation in mathematics and statistics (linear algebra, calculus, probability) Preferred...
- ...Unity Catalog). Clear separation of compute vs. serving layers in distributed architectures. Low-latency API strategy where Spark is insufficient (e.g., leveraging optimized services or caching). Caching strategies to accelerate reads and reduce compute...
- ...managing user access, roles, and permissions using IAM, SCIM, and role-based access control (RBAC): 8 years. Proficiency with Apache Spark concepts, including performance tuning and troubleshooting: 8 years. Experience integrating Databricks with cloud storage...
- ...domain. The ideal candidate will design, architect, and lead scalable analytics and data engineering solutions leveraging Databricks, Spark, and AWS services while ensuring compliance with healthcare regulations such as HIPAA. Key Responsibilities: Design and...
- ...and APIs supporting data-intensive applications. Build and optimize distributed systems leveraging big data technologies such as Spark, Hive, and Presto. Develop high-performance APIs that enable interactive access to large-scale datasets. Partner with...
- ...backend, database, services Agile methodologies: Scrum, Jira Testing for data, applications, and visualizations Big Data: Apache Spark Cloud & Containers: AWS S3, OpenShift, Docker, Kubernetes, IBM Cloud Pak for Data Data Virtualization: TIBCO Data...
- ...MLOps: MLflow, Kubeflow, Vertex AI, Sagemaker, CI/CD, Docker, Kubernetes, observability tooling Data Engineering: Airflow/Prefect, Spark/Ray, data validation and pipelines, vector stores Software Engineering: Python expertise, API design, cloud architecture,...
- ...scikit-learn, NLTK, Azure ML (optional), Amazon Web Services EC2. Experience with scalable data engineering frameworks such as Apache Spark and orchestration frameworks such as Airflow, and/or experience with semantic search. Expert knowledge in conducting data analysis...
- ...Knowledge of ETL/ELT frameworks and data pipeline architecture Familiarity with distributed data processing tools (e.g., Spark) Good to Have: Experience with Airflow or other orchestration tools Knowledge of CI/CD pipelines Experience with...
$147k - $179k
...PyData stack (numpy, scikit-learn, pandas, etc.) Is interested in a wide range of ML solutions, including established tools (e.g. Spark, Kubernetes, Airflow, MLFlow), emerging tools (like Chalk, BentoML, or DVC), and developing in-house tools. You get bonus points...
- ...population scale Hands-on experience with LLM APIs and orchestration frameworks Strong SQL and experience with distributed data processing (Spark, Dask) Experience with vector databases and ANN search systems (FAISS, Pinecone, etc.) Expertise in ML lifecycle management (MLflow...
- ...Experience managing user access, roles, and permissions using IAM, SCIM, and role-based access control (RBAC). Proficiency with Apache Spark concepts, including performance tuning and troubleshooting. Experience integrating Databricks with cloud storage services (e.g....
- ...with natural language processing (NLP), computer vision, or other AI techniques. Familiarity with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). Strong analytical skills and the ability to work with complex datasets....
- ...autonomous workflows. Databricks & Lakehouse Engineering: Develop and optimize ML and GenAI workloads using Databricks, including Spark-based data pipelines, feature engineering, and model training/inference on the Lakehouse platform. Unity Catalog & Governance:...
- ...ideal candidate will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop, Spark, etc. Responsibilities: Design, develop, and maintain large-scale data processing pipelines using Big Data technologies (e....
- ...microservices, Spring Batch Strong experience with Google Cloud Platform (GKE, BigQuery, Cloud Storage, IAM) Hands-on with Dataproc (Spark/Hadoop) and Composer (Airflow) Strong experience in Angular Expertise in distributed systems, scalability, and system...
- ...forecasting, or optimization models Experience with IoT/industrial data environments Knowledge of data engineering tools (e.g., Spark, Databricks) Exposure to governance, risk, and compliance in AI Experience or familiarity with AI-assisted development...
$61 - $66 per hour
...Data ITSM Tools: ServiceNow, Remedy, IBM Netcool. Databases: Oracle, DB2, SQL Server, MongoDB, Hadoop/Cloudera, Spark, Teradata. Foundational AI Knowledge: Understanding of common AI/ML concepts (classification, regression, clustering, anomaly...