HPC / AI Specialist

AI4I · Work from home, Piedmont, Italy · €50–€70


Job Description

The Italian Institute of Artificial Intelligence (AI4I) is seeking a hands‑on HPC / AI Specialist to support and optimize the compute infrastructure powering the AI Foundry and Deployment activities.

At AI4I, you will work on a state-of-the-art AI computing environment built with best-in-class technologies acquired over the past year, including next-generation GPU systems such as NVIDIA B200 accelerators and high-performance distributed storage solutions such as VAST. This infrastructure is designed to support AI training, fine‑tuning, and inference workloads for both research and industrial deployment.

Leonardo hosts the physical hardware infrastructure and delivers agreed infrastructure services in partnership with AI4I. In this role, you will focus on optimizing performance and providing direct technical support to internal users and clients running AI workloads, while contributing to the continuous evolution and improvement of the system design.

You will act as a key interface between the machine infrastructure and the teams executing AI workflows, ensuring efficient, stable, and predictable operations.

Location: AI4I, OGR – Turin, Italy

Hybrid work: Flexible arrangements may be negotiated.

The position will remain open until filled, and multiple candidates may be hired.

About The Role

As HPC / AI Specialist at AI4I, you will operate at the intersection of infrastructure operations and applied AI execution. You will ensure that engineers, researchers, and deployment teams can efficiently run training, fine‑tuning, inference, and data‑intensive pipelines on shared compute resources.

This is a cross‑unit role shared 50% with the new Deployment Unit, working closely with both infrastructure and client‑facing teams.

You Will Work Closely With

  • AI engineers and ML / GenAI teams running training, fine‑tuning, and inference workloads
  • Cloud / DevOps engineers operating the private cloud
  • The Deployment Unit supporting industrial AI clients
  • Hardware vendors and infrastructure partners

Key Responsibilities

  • Operate and maintain Linux‑based HPC clusters supporting AI training, fine‑tuning, and inference workloads
  • Manage GPU and CPU compute environments, including workload scheduling, resource isolation, and performance tuning
  • Support distributed and software‑defined storage systems used for large‑scale datasets
  • Act as the primary technical interface between infrastructure operations and internal or external users running AI workflows
  • Provide hands‑on technical support for AI workload optimization, including distributed training and parameter‑efficient fine‑tuning of foundation models on HPC infrastructure
  • Support foundation‑model fine‑tuning workflows: data‑pipeline configuration, checkpointing, runtime settings, and GPU memory optimization in HPC environments
  • Optimize resource utilization and workload performance across multi‑tenant environments
  • Support containerised workloads running on shared compute infrastructure
  • Monitor system health, performance, and capacity; troubleshoot user‑facing production issues
  • Contribute to the continuous improvement and evolution of system architecture in collaboration with infrastructure teams
  • Support internal users and clients with debugging, environment set‑up, and best practices for scalable AI execution

Required Qualifications

  • Strong Linux system administration experience in production environments
  • Solid background in CPU and GPU architectures and performance characteristics
  • Experience operating HPC clusters or large‑scale compute environments
  • Hands‑on experience with distributed and software‑defined storage systems (e.g., VAST or equivalent)
  • Experience with workload managers and job schedulers (e.g., Slurm or equivalent)
  • Experience troubleshooting performance bottlenecks in compute or storage environments
  • Practical understanding of AI training and fine‑tuning workloads, including GPU memory management, batch sizing strategies, distributed execution constraints, checkpointing, and data pipeline performance in HPC or large‑scale compute environments
  • Scripting and automation skills (Bash, Python, or equivalent)
  • Experience supporting shared infrastructure with uptime and operational responsibility

Additional Strengths

  • Experience supporting AI / ML training workloads in production environments
  • Experience with parameter‑efficient fine‑tuning workflows and runtime optimisation in shared HPC environments
  • Familiarity with foundation model adaptation workflows and large‑scale training constraints
  • Familiarity with containerised execution environments
  • Experience operating multi‑tenant compute environments
  • Experience with monitoring and observability systems
  • Networking fundamentals for high‑throughput environments
  • Experience collaborating with engineering or deployment teams in production settings

Key Performance Metrics

  • Cluster availability and operational stability
  • GPU and CPU utilization efficiency
  • Workload performance and scheduling effectiveness
  • Time required to debug and resolve user issues
  • Time required to onboard new workloads and users

What We Offer

  • A collaborative environment with engineers and researchers working on real industrial AI deployments
  • Direct impact: your infrastructure will run daily AI workloads and production systems
  • An office at the epicentre of tech: OGR Torino technology hub
  • Competitive compensation and access to advanced computing infrastructure

How To Apply

Submit your application exclusively through the online form:

  • Cover letter (max. 1 page) describing how your profile fits this specific position
  • CV and optional links to technical projects or operational experience

About Us

The Italian Institute for Artificial Intelligence (AI4I) was founded as a research institute to perform transformative, application‑oriented research in Artificial Intelligence, driving innovation and industrial progress. The Institute is designed to engage and empower gifted, entrepreneurial, and ambitious researchers who are committed to generating real‑world impact at the intersection of science, technology, and industrial transformation.

Competitive salaries, performance‑based incentives, access to dedicated high‑performance computing resources, state‑of‑the‑art laboratories, and strong industrial collaborations are among the distinctive features that define AI4I. The Institute fosters a dynamic international environment and an ecosystem that supports the creation and growth of innovative startups.

AI4I’s mission is to advance scientific research, technology transfer, and, more broadly, Italy’s innovation capacity, promoting positive impact across industry, services, and public administration. To achieve this, the Institute contributes to building a research and innovation infrastructure that leverages AI methods, with a special focus on manufacturing processes and the broader Industry 4.0 value chain.

AI4I also maintains strategic relationships with leading organisations in Italy and abroad, including Competence Centers and European Digital Innovation Hubs (EDIHs), positioning itself as an attractive destination for researchers, companies, and start‑ups seeking collaboration and impact.

