MLOps Engineer
Job description
South Tyrol, Italy (On-site / Hybrid)
European Tech Recruit are partnering with a fast-scaling, venture-backed AI company operating at the intersection of computer vision, 3D data, and generative AI. This organization is redefining how digital content is created and deployed at scale, working with global enterprise brands. With significant growth and increasing demand, they are investing heavily in their data and ML infrastructure, making this a key hire to support large-scale training and production workflows.
The Role
As an MLOps Engineer, you will be responsible for building and scaling the infrastructure that underpins the company's machine learning and data pipelines. This is a highly hands-on role focused on enabling ML teams to train, experiment, and deploy models efficiently at scale, working closely with data scientists, ML engineers, and platform teams.
You will operate across the full lifecycle - from data ingestion and versioning to training pipelines, experiment tracking, and production readiness.
Key Responsibilities
- Design and manage data infrastructure for large-scale datasets (multi-terabyte image and training data)
- Build and optimize data pipelines and data movement strategies for distributed training workflows
- Implement dataset versioning and lineage tracking to ensure reproducibility
- Develop and maintain ML pipelines for preprocessing, training, validation, and deployment
- Set up and manage experiment tracking and model registry systems (e.g., MLflow, Weights & Biases)
- Support distributed training across multi-GPU environments and optimize performance bottlenecks
- Improve training efficiency and system performance (data loading, GPU utilization, batching)
- Manage artifact storage, containerization, and CI/CD workflows
- Build internal tooling to improve developer productivity and ML workflow efficiency
Required Experience & Skills
- 3+ years of experience in MLOps, DataOps, or ML platform engineering roles
- Strong experience with data infrastructure and large-scale data handling (object storage, distributed file systems)
- Hands-on experience with ML pipeline orchestration tools (Airflow, Prefect, Kubeflow, or similar)
- Experience with experiment tracking and model lifecycle tools (MLflow, W&B, Neptune)
- Solid understanding of distributed training frameworks and workflows
- Strong experience with Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Proficiency in Python and environment/dependency management
- Strong Linux fundamentals and systems-level understanding
Why Apply?
- Join a company working on cutting-edge AI and visual computing problems
- Play a critical role in scaling ML systems from research to production
- Work in a high-impact, engineering-driven environment with real-world applications
- Strong compensation, flexibility, and clear growth trajectory
Recruiter’s Note
This role is ideal for engineers who enjoy building the backbone of machine learning systems - not just models, but the infrastructure that makes them scalable, reproducible, and production-ready. If you've worked on data pipelines, ML platforms, or large-scale training systems, and enjoy solving complex systems challenges, this could be a strong fit.
If this role is of interest, please apply directly via this advert now, or send a copy of your CV, referencing the job title and location, with a short intro to .
By applying to this role, you understand that we may collect your personal data and store and process it on our systems. For more information, please see our Privacy Notice ( )