- ...Expertise: Hands-on experience with LLMs (OpenAI, Gemini, Claude, open-source models). Strong knowledge of TensorFlow, PyTorch, Spark ML. Experience with distributed systems & scalable architectures. For applications and inquiries, contact: hirings@...
- ...GitHub. ~4+ years of hands-on experience with many of the following components: API development (Apigee), Python (FastAPI framework), Spark Streaming, Kafka, SQL, JSON, Avro, Parquet, and Java, Python, or Scala. Certifications: Google Cloud Platform Professional Data... (Contract work)
- ...Analytics, Azure API Management (APIM), Azure DevOps. Databases: SQL Server, Oracle, PostgreSQL, Delta Lake. Big data: Apache Spark. Version control: GitHub, Git, Azure DevOps. Visualization: Power BI & integration with REST APIs for custom dashboards...
- ...and resource utilization). Big data & streaming support: Manage and optimize distributed data frameworks, ensuring the health of Spark, Flink, and Kafka pipelines. Infrastructure as Code: Support deployments across AWS Cloud and Kubernetes (EKS) environments... (Work at office, Home office)
- ...information retrieval concepts, relevance metrics, and evaluation methods. Familiarity with large-scale data processing (e.g., Spark, Hadoop) is a plus. Key responsibilities: Design, develop, and implement search ranking models using Learning to Rank approaches... (Immediate start)
- ...on experience with ETL/ELT tools (e.g., Informatica, Talend, Apache NiFi). Experience with big data technologies (e.g., Apache Spark, Hadoop ecosystem). Proficiency in data warehousing concepts (e.g., Snowflake, Redshift, BigQuery). Experience with cloud...
- ...Location: Dallas, TX. Onsite all 5 days/week; onsite interview. Immediate start. Job description: Hands-on experience with Python, PySpark/Spark, NoSQL (MongoDB), Databricks, data lake, Azure cloud. Advanced proficiency in Python, including experience with asynchronous... (Hourly pay, Immediate start)
- ...first-principles modeling (kinetic, thermodynamic, physical models). Expertise in data analytics software and tools: Python, R, Spark, JSON. Expertise in machine learning and deep learning, e.g., classification, regression, clustering, feature engineering, neural... (Contract work)
- ...of experience. Data Scientist with RAG, AI/LLM, and strong Databricks and Snowflake experience. Must-have skills: Python, TensorFlow, PyTorch, Spark, data analysis, data modeling. Role summary: We are seeking a highly skilled Senior Data Scientist with expertise in machine...
- ...Knowledge of Java is a plus for developing custom NiFi processors. Big data and streaming technologies: Experience with Kafka, Hadoop, Spark, or other big data ecosystems is beneficial. Understanding of messaging systems and streaming data pipelines. System and...
- ...develop scalable data transformation pipelines using DBT Cloud. Architect and implement Databricks-based data solutions (Delta Lake, Spark). Build and optimize data models (star/snowflake schemas) for analytics. Develop ETL/ELT pipelines using modern data stack... (Contract work, Remote work)
- ...Proficiency with containerization (Docker, Kubernetes) and CI/CD pipelines. ~ Experience with data pipeline tools (Airflow, Prefect, Spark, Ray) and vector stores. ~ Familiarity with cloud architecture, API design, and microservices patterns. For applications... (Contract work, Visa sponsorship)
- ...project delivery and team performance. Mandatory skills: Proven expertise in Azure Databricks, including experience with Spark, data pipelines, and data lakes. Strong programming skills in languages such as Python, Scala, or SQL. Experience with cloud...
- .../CD pipeline). ~2 or more years of industry experience with big data applications deployed in public cloud (Informatica, AWS EMR, Spark, data lake, CI/CD pipeline). It would be helpful in this position for you to demonstrate the following capabilities and distinctions... (Contract work, Local area, Flexible hours)
- ...~ Experience working with LLMs (OpenAI, Azure OpenAI, Claude, etc.) ~ Strong SQL and data modeling skills ~ Experience with Spark, Databricks, or similar big data technologies ~ Hands-on experience with cloud platforms (AWS/Azure/Google Cloud Platform)... (Contract work, Remote work)
- ...Chicago, IL, Hybrid. Pay: Available on W2 basis. Position overview: We are seeking a hands-on AWS Big Data Architect with strong Hadoop and Spark experience to design, develop, and support scalable Big Data Warehouse and Data Lake solutions. This role requires a highly... (Permanent employment, Contract work, Temporary work)
- ...environment. Key Responsibilities: Design, build, and maintain scalable data pipelines using Azure and Databricks (ADF, Spark, Delta Lake). Develop robust ETL/ELT processes to ingest and transform structured, semi-structured, and unstructured data....
- ...Platform ML tech stack. Experienced with Infrastructure as Code (IaC). Experience with big data technologies such as Apache Spark or Hadoop. Stay informed about the ethical implications of machine learning, e.g., selection bias. Model training data... (Contract work)
- ...GPU computing (CUDA, NCCL). Knowledge of cluster schedulers (Slurm, Kubernetes schedulers). Experience with big data tools (Spark, Ray). Exposure to MLOps and experiment tracking tools. Key competencies: Strong problem-solving and systems... (Full time, Remote work)
- ...Generative AI / LLMs (OpenAI, Hugging Face, LangChain, etc.) ~ Proficiency in Python and experience with data processing tools (Pandas, Spark) ~ Experience designing distributed systems and cloud-native architectures ~ Hands-on experience with cloud platforms (AWS,... (Relocation)
- ...working experience in implementing scalable and efficient data processing pipelines using big data technologies such as Hadoop/EMR, Spark, Hive. Strong development experience in AWS platforms/services such as Lambda, EventBridge, Step Functions, Redshift, S3, Glue... (Work experience placement, H1b, Local area)
- ...population scale. Hands-on experience with LLM APIs and orchestration frameworks. Strong SQL and experience with distributed data processing (Spark, Dask). Experience with vector databases and ANN search systems (FAISS, Pinecone, etc.). Expertise in ML lifecycle management (MLflow... (Work experience placement)
- ...function, Cloud Run, BigQuery, Dataflow, Pub/Sub, Composer, Dataproc, Vertex AI. Languages: Python, SQL. Frameworks: Apache Beam, Apache Spark. Tools: Terraform, Git, Docker, Kubernetes. For applications and inquiries, contact: ****@*****.***... (Remote work)
$60 - $65 per hour
- ...development, or data architecture. Technical proficiency: Advanced expertise in Python and SQL with significant experience in Apache Spark and modern Lakehouse architectures (Databricks preferred). AI/ML expertise: Proven experience building MLOps pipelines and... (Hourly pay, Contract work, Temporary work, Work experience placement)
- ...Java, Google Analytics, BigQuery, Cassandra, Docker, Kubernetes, Kafka, in-memory caching are a plus. Familiarity with data manipulation and analysis libraries (e.g., Pandas, NumPy, Spark) is a plus. For applications and inquiries, contact: ****@*****.***... (Remote work, Flexible hours)
- ...with ML/AI libraries such as PyTorch, TensorFlow, Hugging Face Transformers, or similar. Hands-on experience in Python, PySpark/Spark, NoSQL (MongoDB), SQL (PostgreSQL), Redis, Databricks, Delta Lake, Azure Cloud. Experience productionizing AI/ML or LLM-powered...
$75 - $76 per hour
- ...Kubernetes (Strimzi Operator or Confluent Platform). Exposure to stream processing frameworks such as Kafka Streams, Apache Flink, or Spark Structured Streaming. Location: Westlake, Texas. Job type: Contract. Salary: $75 - 76 per hour. Work hours: 8am... (Hourly pay, Contract work, Temporary work, Work experience placement)
- ...experience building and supporting data pipelines and large-scale data processing solutions. Strong experience with SQL, Python, and Spark for data transformation, processing, and analysis. Experience with Azure cloud data services and platform components, especially...
- ...across geographically distributed teams. Good to have: Experience with AI/ML. Exposure to big data technologies (Spark, Kafka). Experience with API gateways and event-driven architecture. Knowledge of security best practices in cloud environments...
- ...with Generative AI, OpenAI APIs, LangChain, RAG. Knowledge of MLOps and model deployment. Experience with vector databases like Pinecone or FAISS. Exposure to Apache Spark or Databricks. For applications and inquiries, contact: ****@*****.***...