Available positions

This page lists a number of available temporary positions (internships, PhDs, postdocs, engineers) at the IRISA / Inria Rennes - Bretagne Atlantique laboratories.

General description

Positions, unless otherwise specified, are based in Rennes, the capital of Brittany and the tenth-largest city in France, with a metropolitan area of about 720,000 inhabitants. With more than 66,000 students, Rennes is also the eighth-largest university campus in France, and it has the second-highest concentration of digital and ICT firms in France after Paris. Rennes is known as one of the most festive and lively cities in France, home to several music and culture festivals. In 2018, the newspaper "L'Express" named Rennes the most liveable city in France.

The research center Inria Rennes - Bretagne Atlantique, established in 1980, is fully integrated into a rich and powerful regional ecosystem, boasting strong partnerships with the best actors of research and innovation in Brittany and Pays de la Loire. It comprises 3 sites (Rennes, Nantes, Lannion), 730 people, 34 research teams, 6 ERC grant winners, 31 European projects, and 9 startups.

The new recruits will join Inria's Rainbow team (https://team.inria.fr/rainbow), internationally recognized in the robotics, virtual reality, and haptics research fields. The team is currently composed of more than 30 members working on topics related to human-computer interaction, physical simulation, virtual reality, haptics, and visual servoing.

Topic #1: Robotics

Project Manager for the EU H2020 ICT 25 CROWDBOT project

Type of positions: Engineer

Starting: from September 2019

Supervisor: Julien Pettré

Related Project: This Project Manager position is framed under the EU H2020 ICT 25 CROWDBOT project. CROWDBOT aims to enable robots to navigate in dense crowds: to develop the on-board capabilities for perceiving and analyzing a moving crowd, as well as the reasoning and decision-making skills robots need to control their navigation in this very complex environment. Safety and ethical issues are at the heart of the CROWDBOT project; thus, the project also explores means of testing, validating, and evaluating robotic navigation techniques adapted to crowds.

The project consortium brings together 7 international partners, academic and industrial, from different countries, including ETHZ and EPFL in Switzerland, UCL in the United Kingdom, RWTH in Germany, and SoftBank Robotics in France. The project is coordinated and managed by Inria in Rennes, France. The project started on 1 January 2018 and runs until 30 June 2021. CROWDBOT is currently looking for its project manager! This position offers the opportunity to participate in an ambitious and unique research project, in an international context, in collaboration with the best partners in robotics in Europe.

Summary: The CROWDBOT project manager is in charge of the overall coordination of the project. More concretely, the position mixes project management activities with a technical contribution: depending on their exact expertise, we expect the candidate to use their technical skills to strengthen the existing teams.

Skills: Candidates should have excellent organization, management, and communication skills. Technical skills are required in at least one of the following fields: software architecture, robotics, robot navigation, localization and mapping, crowd simulation, or virtual reality.

How to apply? Please send an email to julien.pettre@inria.fr with all the elements required to support your application: contact information, resume, motivation for the position, reference letters or contacts for references, etc.

Topic #2: Virtual Humans

Interactive Virtual Humans for Virtual Reality applications

Type of positions: PhD or Postdoc

Starting: from September 2019

Advisors: Julien Pettré, Ludovic Hoyet, Anne-Hélène Olivier, Claudio Pacchierotti

Related Project: These positions are framed under the EU H2020 ICT 25 PRESENT project. PRESENT aims at creating virtual digital companions -- embodied agents -- that look entirely naturalistic, demonstrate emotional sensitivity, can establish meaningful dialogue, add sense to the experience, and act as trustworthy guardians and guides in the interfaces for AR, VR and more traditional forms of media. There is no higher quality interaction than the human experience when we use all our senses together with language and cognition to understand our surroundings and, above all, to interact with other people. We interact with today's "Intelligent Personal Assistants" primarily by voice. However, communication is episodic, based on a request-response model; the user does not see the assistant, which cannot take advantage of visual and emotional clues or evolve over time. Nonetheless, advances in the real-time creation of photorealistic computer-generated characters, coupled with emotion recognition and behaviour, and natural language technologies, allow us to envisage virtual agents that are realistic in both looks and behaviour; that can interact with users through vision, sound, touch and movement as they navigate rich and complex environments; converse in a natural manner; respond to moods and emotional states; and evolve in response to user behaviour.
This international partnership includes the Oscar-winning VFX company Framestore; technology developers Brainstorm, Cubic Motion and IKinema; Europe’s largest certification authority InfoCert; research groups from Universitat Pompeu Fabra, Universität Augsburg, Inria, and the pioneers of immersive virtual reality performance CREW.

Summary: This research aims at developing a new generation of virtual humans endowed with levels of behavioural sensitivity and responsiveness for Virtual Reality applications. By populating virtual environments with such characters, our goal is to achieve new levels of immersive experiences, by reinforcing the feeling of presence through non-verbal communication between a user and one or more agents.
We will investigate situations involving 1-to-n interactions between a user and multiple (virtual) characters. In such scenarios, the first step will be to define the full list of events or user actions that agents should be reacting to. This will be done by also detailing the virtual sensory channels by which agents perceive actions or events, e.g., if we want agents to react to a loud sound, a flashlight, or after being touched by the user.
We will also establish a list of expressive motions and emotions agents will be able to perform, defining their vocabulary of non-verbal communication (NVC) capabilities, such as expressing annoyance, impatience, anger, fear, etc. These motions and emotions will be parameterized by a set of variables to modulate their level and make them dependent on the triggering source, such as the position of the user, the location of an event in the environment, or the state of neighbouring agents, in order to simulate emotion propagation phenomena.
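As a minimal illustration of the kind of modulation described above, a reaction level could be attenuated with the distance between an agent and the triggering source. The function name, the exponential fall-off rule, and all parameters below are illustrative assumptions, not part of the project:

```python
import math

def reaction_level(agent_pos, source_pos, base_level, falloff=5.0):
    """Hypothetical modulation rule: a triggered emotion (e.g. fear after
    a loud sound) is strongest near the source and decays with distance."""
    d = math.dist(agent_pos, source_pos)
    return base_level * math.exp(-d / falloff)

# An agent 1 m from a loud sound reacts more strongly than one 10 m away.
near = reaction_level((1.0, 0.0), (0.0, 0.0), base_level=1.0)
far = reaction_level((10.0, 0.0), (0.0, 0.0), base_level=1.0)
```

In practice, each NVC motion would expose its own modulation variables (level, direction toward the source, propagation to neighbours), of which this distance term is only the simplest example.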
We will explore motion-capture data editing techniques that offer a good trade-off between naturalness of motion and performance. Pre-recorded motions alone are not sufficient, however, since reactive motions depend on the features (location in the environment, intensity, etc.) of the user actions or events that triggered them.
In addition to motion, which will be conveyed to the user through the visual sensory channel, we will also explore the tactile sensory channel to render the environment (and, more importantly, the contacts made between the user and the characters), with two objectives in mind: making the user aware of physical contacts (which will affect user behaviour, e.g., collision avoidance) and conveying voluntary NVC messages to the user (e.g., shoulder tapping). This raises several issues that this task will study, such as the type of haptic rendering devices to use, their number and location on the user's body (in the case of wearable haptic devices), and the contact rendering technique.

Skills for PhD application: The candidate must have a master's degree (or equivalent), preferably in computer science, virtual reality, or computer graphics. The candidate should also be comfortable with as many of the following items as possible: experience in computer graphics, physical simulation, haptics, and animation; experience in 3D/VR applications (e.g., Unity3D); experience in carrying out principled user studies; good knowledge of programming languages and tools (e.g., C#, git); good spoken and written English; good communication skills. This PhD is framed under a larger project; the candidate will therefore have to interact with other members of the project and attend the project meetings.

Skills for Postdoc application: The candidate must have a PhD degree in Computer Science, in the field of Computer Animation, Computer Graphics, or Virtual Reality. We will also consider candidates in the field of experimental psychology with expertise in non-verbal communication and bodily interactions. Beyond scientific excellence, we will also value excellent organization and communication skills.

How to apply? Please send an email to julien.pettre@inria.fr with all the elements required to support your application: contact information, resume, motivation for the position, reference letters or contacts for references, etc.

Topic #3: Crowd Analysis

Large-scale Crowd Motion Analysis

Type of positions: PhD, Postdoc

Starting: anytime from now

Advisors: Julien Pettré

Summary: Large crowd gatherings, such as cultural or sporting events, pose operational public-management challenges for the authorities in charge. This research aims to develop tools for analysing crowd behaviour in order to help those in charge of public places better assess potential risks. To this end, our objective is to implement techniques for estimating and monitoring the movement of the crowd, and to conduct a real-time analysis providing the authorities with relevant information and a quantitative description of the state of the crowd.
In this context, the objectives of this research are threefold: 1) to explore the use of crowd motion estimation techniques in challenging conditions (lighting conditions, scene complexity, video quality, etc.), with real-time performance requirements for the online analysis of crowd motion; 2) to explore modal analysis of the estimated crowd motion as a basis for analysing the mechanisms of crowd motion; and 3) to extend the modal analysis from the spatial domain to the temporal domain to further analyse crowd motion.
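For context on objective 2, one standard way to perform modal analysis of a set of velocity-field snapshots is proper orthogonal decomposition (POD), i.e., an SVD of the mean-subtracted snapshot matrix. The toy data and variable names below are illustrative assumptions, not project code:

```python
import numpy as np

# Toy stand-in for estimated crowd motion: T snapshots of a 2D velocity
# field sampled on an N-cell grid, each flattened to a vector of length 2N.
rng = np.random.default_rng(0)
T, N = 50, 100
base_mode = rng.standard_normal(2 * N)                # one dominant motion pattern
snapshots = np.outer(rng.standard_normal(T), base_mode)
snapshots += 0.01 * rng.standard_normal((T, 2 * N))   # small measurement noise

# POD: subtract the mean flow, then SVD; rows of Vt are spatial modes,
# and the squared singular values rank the modes by motion "energy".
mean_flow = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)
energy = s**2 / np.sum(s**2)   # fraction of total energy captured per mode
```

On this synthetic data the first mode recovers the planted pattern and captures almost all of the energy; on real crowd footage, the leading modes would instead summarize the dominant collective flow structures.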

Skills for PhD application: The candidate must have a master's degree (or equivalent) in the field of computer science or physics. The candidate should also be comfortable with as many of the following items as possible: computer vision; physical simulation; good knowledge of programming languages and tools (e.g., C++, git); good spoken and written English; good communication skills.

Skills for Postdoc application: The candidate must have a PhD degree in Computer Science, in the field of Computer Vision, Computer Graphics, or Simulation. Beyond scientific excellence, we will also value excellent organization and communication skills.

Microscopic analysis and modeling of crowds

Type of positions: PhD, Postdoc

Starting: anytime from now

Advisors: Julien Pettré

Summary: Microscopic crowd simulation techniques compute large crowd motions by modeling the behaviour of each individual member of the crowd, as well as the interactions each member has with their neighbors and the environment. With the aim of better understanding these local interactions and individual behaviours in a collective context, we have carried out a number of laboratory experiments in which we asked subjects to walk in groups, in various situations of interest, and recorded their individual motion.
The objective of this research is to analyse these experimental recordings in order to better understand local interactions, and to use them to design and calibrate new microscopic crowd simulation models.
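For context, one simulation step of the microscopic, per-agent kind described above can be sketched with a simple social-force-style rule (a classic textbook approach, used here purely as an illustration; all parameter values are arbitrary):

```python
import math

def step(agents, goal, dt=0.1, v0=1.3, tau=0.5, k=2.0):
    """One Euler step. Each agent is a (position, velocity) pair in 2D.
    A driving term steers each agent toward a shared goal at preferred
    speed v0; a pairwise repulsion term pushes nearby agents apart."""
    new_agents = []
    for i, (p, v) in enumerate(agents):
        # Driving force: relax toward the desired velocity within time tau.
        to_goal = (goal[0] - p[0], goal[1] - p[1])
        dist = math.hypot(*to_goal) or 1e-9
        desired = (v0 * to_goal[0] / dist, v0 * to_goal[1] / dist)
        fx = (desired[0] - v[0]) / tau
        fy = (desired[1] - v[1]) / tau
        # Repulsion from every other agent, decaying with distance.
        for j, (q, _) in enumerate(agents):
            if i == j:
                continue
            dx, dy = p[0] - q[0], p[1] - q[1]
            d = math.hypot(dx, dy) or 1e-9
            mag = k * math.exp(-d)
            fx += mag * dx / d
            fy += mag * dy / d
        v2 = (v[0] + fx * dt, v[1] + fy * dt)
        p2 = (p[0] + v2[0] * dt, p[1] + v2[1] * dt)
        new_agents.append((p2, v2))
    return new_agents

# Two agents starting close together, both heading toward the same goal:
# they advance toward the goal while the repulsion keeps them apart.
agents = [((0.0, 0.1), (0.0, 0.0)), ((0.0, -0.1), (0.0, 0.0))]
for _ in range(50):
    agents = step(agents, goal=(10.0, 0.0))
```

Fitting the parameters of such a model (here v0, tau, k) to the recorded trajectories is one way experimental data of this kind can be turned into calibrated simulation models.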

Skills for PhD application: The candidate must have a master's degree (or equivalent) in the field of computer science or physics. The candidate should also be comfortable with as many of the following items as possible: computer vision; physical simulation; good knowledge of programming languages and tools (e.g., C++, git); good spoken and written English; good communication skills.

Skills for Postdoc application: The candidate must have a PhD degree in Computer Science, in the field of Computer Vision, Computer Graphics, or Simulation. Beyond scientific excellence, we will also value excellent organization and communication skills.