Research Engineer Position on Secure Agentic AI Systems
Offer Description
The Italian Institute of Artificial Intelligence (AI4I) is a research institute that aims to strengthen scientific research, technology transfer, and, more broadly, the country's capacity for innovation, promoting a positive impact on industry, services, and public administration. To this end, the Institute contributes to building a research and innovation infrastructure based on artificial intelligence methods, with particular attention to manufacturing processes, within the framework of Industry 4.0 and its entire value chain. The Institute maintains relationships with peer entities and organizations in Italy and abroad, including Competence Centers and European Digital Innovation Hubs (EDIHs).
Are you looking to support the future of industrial innovation?
The AI Security Lab is looking for a creative and highly motivated Research Engineer to join our founding team and help build the next generation of secure agentic AI systems through practical implementation of cutting‑edge security solutions.
Location
AI4I, OGR, Turin, Italy
Job Description
As a research engineer, you will be instrumental in designing and implementing our end‑to‑end security platform that enables secure AI deployment at scale. This position offers the unique opportunity to architect secure AI solutions from first principles, translating theoretical security concepts into production‑ready systems. Your work will focus on creating foundational infrastructure for AI red‑teaming, secure agent execution environments, verification protocols, and continuous monitoring frameworks that protect AI systems during runtime and safeguard data throughout processing, storage, and transfer. Working alongside security researchers and engineers, you'll bridge the gap between frontier research and practical deployment, ensuring that advanced AI agents can operate securely in real‑world environments.
Key Responsibilities
- Design and implement scalable security platforms for AI agents and large language model workloads, including secure execution environments and runtime protection mechanisms.
- Conduct proactive red team exercises simulating external adversaries and insider threats to identify and remediate vulnerabilities in agentic AI solutions.
- Develop and deploy defenses against LLM‑specific threats including prompt injection, task hijacking, model extraction, and data leakage attacks.
- Build security validation frameworks and compliance certification tools to support secure system deployments for internal teams and pilot partners.
- Collaborate with research teams to translate novel security findings into production‑ready implementations and open‑source security tools.
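As a flavor of the defensive work involved, the sketch below shows a minimal heuristic screen for prompt-injection attempts in user input. This is purely illustrative: the pattern list, function name, and approach are assumptions for this example, and a production defense would layer classifiers, policy enforcement, and output monitoring rather than rely on regular expressions alone.

```python
import re

# Hypothetical example patterns; real injection attempts are far more varied,
# so this list is illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"disregard (the|your) system prompt",
    r"you are now in developer mode",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection heuristic."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)
```

Such a check would typically sit in front of an agent's input pipeline as one of several defense layers, flagging inputs for further inspection rather than blocking them outright.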
About You
You hold a Master’s or PhD in Computer Science, Engineering, or a related field with a focus on security, systems, or machine learning, and you have hands‑on experience with modern ML frameworks such as PyTorch, Hugging Face, JAX, or TensorFlow for deploying and securing AI workloads. Strong programming skills in Python and proficiency in at least one additional language such as C++, Rust, or JavaScript/TypeScript are mandatory.
Preferred Qualifications
- Experience conducting penetration testing, vulnerability assessments, security architecture reviews, or threat modeling for complex systems.
- Expertise with trusted execution environments (TEEs), containerization technologies (Docker/Kubernetes), CI/CD pipelines, and cloud platforms (GCP/AWS/Azure).
- Background in optimizing AI model serving infrastructure, scaling inference workloads, or deploying models in production with security considerations.
- Deep knowledge of AI‑specific security threats including prompt injection attacks, LLM red‑teaming methodologies, jailbreaking techniques, and privacy‑preserving ML methods.
- Experience with GPU cluster management and orchestration for secure AI workload deployment.
Why Join AI4I?
- A pioneering research team: You will work alongside a highly talented and collaborative team of security researchers and engineers who share your passion for advancing AI safety and security. We foster an environment of innovation and mutual support, with clear pathways for career advancement and technical leadership.
- Research impact and visibility: We are committed to advancing both practical security solutions and fundamental research. You will have opportunities to publish at top‑tier venues, while also contributing to national and European industrial research initiatives that shape the future of secure AI.
- Prime location at OGR Torino: Our offices are situated at OGR Torino, the city’s leading technology and innovation hub. You’ll be immersed in Italy’s vibrant tech ecosystem with access to countless events, meetups, and a dynamic community of innovators and entrepreneurs.
- Comprehensive support and resources: We provide competitive compensation packages and full support for conference travel and professional development. You’ll have access to state‑of‑the‑art high‑performance computing infrastructure and GPU clusters essential for conducting cutting‑edge AI security research.
- Salary range: €30,000 – €50,000 gross per year, plus bonus, depending on experience. (Engineers relocating from abroad may be eligible for tax exemptions of up to 50%.)
Start Date
Flexible, as soon as possible.
Application Requirements
- Cover letter (max. 1 page) describing how your background aligns with this specific position and outlining your research interests and professional goals in AI security.
- CV including your publication record and links to open-source contributions, code repositories (e.g., GitHub), or research prototypes.
AI4I is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.