The ONIVOA (Computational Photography and Augmented Vision) platform benefits from a transdisciplinary team and takes a global approach to digital imaging (space, time and spectrum). It brings together expertise in materials physics, optics and perception to build reliable, operational ground truths and boundary conditions for validating image processing and analysis methods and algorithms.
Its objective is to propose robust and efficient imaging methods to:
- Offer innovative, high-level and creative teaching by making students aware of the existing and future relationships between the digital world and the real, measurable and perceptible world.
- Present a complete range of services to meet the ever-increasing needs of economic actors as they adopt tools from a rapidly expanding digital world in which imaging takes many forms (temporal and distributed acquisition, medical imaging, hyperspectral imaging, virtual and augmented reality...).
Equipment
- High-speed and/or high-spatial resolution cameras
- Pulsed and continuous wave lasers
- Digital holography and speckle interferometry
- Visible-range spectroradiometer (Minolta CS2000)
- Motion capture (Mo'Cap)
- Virtual reality headsets (HTC Vive and Razer OSVR)
Activities and fields of application
- Collaborative research
- Technical and scientific expertise
- Transdisciplinary approach
In the fields of:
- Medicine, rehabilitation
- Safety & security
Scientific expertise & know-how
The platform has developed a wide range of skills and activities in the service of collaborative research and teaching:
- Temporal and distributed acquisition
- Medical imaging
- Multispectral imaging (formation, capture, recording, restitution)
- Virtual reality, augmented reality, construction of virtual spaces with perceptual rendering
- Computer vision (filtering, segmentation, monitoring and recognition of objects, 3D perception)
- Characterisation of effect materials
- Functional low vision rehabilitation tools
- Static or drone-mounted optical metrology
- Fast digital imaging (detection of objects, tracking)
- Perspective immersion
Development of enhanced vision glasses, visual prostheses, and technical assistance for visually impaired people.
Video cameras film the environment and send the frames to an embedded computer (smartphone). The video is processed in real time and displayed on the head-mounted device. Image brightness must be kept within each person's comfortable luminance range while highlighting the information needed for moving around (obstacle detection). The software parameters can be tuned to the visually impaired person's particular visual condition.
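The processing described above can be illustrated with a minimal sketch: remap a frame's brightness into a comfort band and boost edges so obstacles stand out. The function name, parameter names and default values are illustrative assumptions, not the platform's actual software.

```python
import numpy as np

def enhance_for_low_vision(gray, lum_min=0.2, lum_max=0.8, edge_gain=2.0):
    """Remap a grayscale frame (values in [0, 1]) into a comfortable
    luminance band [lum_min, lum_max] and overlay boosted edges.
    Illustrative sketch only; the real system would tune these
    parameters per user and run a proper edge/obstacle detector."""
    # Compress the full dynamic range into the user's comfort band.
    remapped = lum_min + gray * (lum_max - lum_min)
    # Gradient-magnitude edge map as a stand-in for obstacle cues.
    gy, gx = np.gradient(gray)
    edges = np.sqrt(gx**2 + gy**2)
    # Add the boosted edges, then clip back into the comfort band.
    return np.clip(remapped + edge_gain * edges, lum_min, lum_max)
```

In practice the comfort band (`lum_min`, `lum_max`) would be set per user during a clinical calibration session, which is what "tuned to the person's particular visual condition" amounts to here.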
Evaluation of the impact strength of plastic composites with synthetic reinforcement (glass, carbon fibres) and biocomposites with natural reinforcement (flax, hemp fibres).
High-speed imaging and image processing were used to measure surface strains in real time during a shock and to monitor the cracking and subsequent perforation of composite and biocomposite plates.
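One simple building block of such monitoring is flagging the frames of a high-speed sequence where the plate surface changes abruptly (crack initiation, perforation). The sketch below uses plain frame differencing with placeholder thresholds; it is an assumed illustration, not the platform's strain-measurement method (which would involve e.g. digital image correlation).

```python
import numpy as np

def detect_impact_events(frames, diff_threshold=0.1, area_fraction=0.01):
    """Return indices of frames where a large fraction of pixels
    change strongly relative to the previous frame.
    Thresholds are illustrative placeholders, not calibrated values."""
    events = []
    for i in range(1, len(frames)):
        # Per-pixel absolute change between consecutive frames.
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        # An event = many pixels changing by more than the threshold.
        changed_fraction = np.mean(diff > diff_threshold)
        if changed_fraction > area_fraction:
            events.append(i)
    return events
```

At the frame rates of high-speed cameras this kind of per-frame test is cheap enough to run over the whole sequence and localise the shock in time before finer strain analysis.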
The aim of the project is full control of the visual appearance of materials, reproduced at different scales of observation, through a set of physico-mathematical formulation models. The appearance of a manufacturable material is connected to its physical composition by means of a virtual workshop, so that materials can be designed virtually and environmental costs reduced.
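A classic example of such a physico-mathematical link between composition and appearance is the Kubelka–Munk relation, which predicts the reflectance of a thick coating from its absorption and scattering coefficients. Whether the project uses exactly this model is an assumption; it is shown here only as a minimal instance of connecting physical parameters to visual appearance.

```python
import math

def reflectance_km(k, s):
    """Kubelka-Munk infinite-thickness reflectance from the absorption
    coefficient k and scattering coefficient s (both > 0 unless k == 0).
    R_inf = 1 + k/s - sqrt((k/s)^2 + 2*k/s)."""
    ratio = k / s
    return 1.0 + ratio - math.sqrt(ratio**2 + 2.0 * ratio)
```

Evaluating this per wavelength for each pigment mixture gives a spectral reflectance curve, i.e. a predicted appearance, before anything is manufactured.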