SGN-34006 3D and Virtual Reality, 5 cr
Suitable for postgraduate studies.
Passing three lab exercises and one project assignment.
The course targets new and emerging technologies for 3D visual scene modelling, capture, processing and visualization, as well as their use in applications such as 3D video and virtual reality. The course provides in-depth knowledge about the creation, processing, delivery and visualization of 3D moving scenes. After the course, the students will know the basics of 3D computer graphics and binocular vision, and how those basics are taken into account while capturing and processing 3D video. The students will be able to design application-specific multi-camera and multi-sensor capture systems and 3D graphics visualization tools. They will know the principles and the state-of-the-art techniques for compression and transmission of the multi-modal signals used for conveying 3D visual information. The students will know state-of-the-art and emerging visualization techniques, such as auto-stereoscopic, light-field, holographic, head-mounted and haptic-augmented displays. The students will have knowledge about typical visual artifacts appearing during 3D content creation, compression and transmission, and about the advanced signal processing methods that can tackle those artifacts. An overview of 3D tracking sensors (mechanical, optical, magnetic, audio, etc.) will be given, and the basics of augmented and virtual reality will be presented.
| Content | Core content | Complementary knowledge | Specialist knowledge |
|---|---|---|---|
| 1. | Geometric data types. Linear and affine transformations. | Transform matrices. Rotations and translations. Frames. | Interpolation. |
| 2. | Cameras and rasterization. Depth. Vertices and pixels. Color. Raytracing. | Materials and texture mapping. Geometrical models. | Sampling and reconstruction. Animation. |
| 3. | Binocular vision; perception of depth. 3D visual cues. | Vergence-accommodation conflict. Focus cues. | Visual processing paths. |
| 4. | 3D scene capture: multi-camera, multi-sensor approaches. | Calibration and rectification of multi-camera settings with different topologies. Depth estimation. | Capture-specific 3D visual artifacts. |
| 5. | 3D scene representation formats: multi-view + multi-depth; layered depth; epipolar image; point clouds and meshes. Compression of 3D imagery. | Format conversion methods. Multi-modal 3D scene compression. | Representation-specific and compression-specific 3D visual artifacts. |
| 6. | Immersive displays: 3D, light-field, VR and AR displays. | Display-specific artifacts. | Quality measurements for 3D displays. |
| 7. | 3D tracking sensors: mechanical, optical, magnetic, audio. Motion capture. | Augmented reality. | Virtual reality. |
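The first content item (transform matrices, rotations, translations, frames) can be illustrated with a minimal sketch in homogeneous coordinates. This is an illustrative example only, assuming NumPy; the variable names and the chosen rotation are hypothetical, not course material:

```python
import numpy as np

# A rigid transform (rotation + translation) packed into one 4x4
# homogeneous matrix, as used throughout 3D graphics pipelines.
theta = np.pi / 2  # 90-degree rotation about the z-axis (arbitrary choice)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 0.0])  # translation of the frame origin

T = np.eye(4)
T[:3, :3] = R   # upper-left 3x3 block holds the rotation
T[:3, 3] = t    # last column holds the translation

# Transform a point given in homogeneous coordinates (w = 1):
p = np.array([1.0, 0.0, 0.0, 1.0])
q = T @ p  # rotates (1,0,0) to (0,1,0), then translates by (1,2,0)
```

A single matrix product applies both the rotation and the translation, which is why homogeneous coordinates are the standard representation for chaining frames of reference.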
Instructions for students on how to achieve the learning outcomes
Three lab works and one project assignment must be completed. The course grade is computed as follows: first lab work 20%; second lab work 20%; third lab work 20%; completed project 40%. Total grade = 0.2·L1 + 0.2·L2 + 0.2·L3 + 0.4·P. All four units must be passed; otherwise, no final grade is given. There is no exam.
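The weighting above can be sketched as a small calculation. This is a hypothetical helper, not part of the course materials; it assumes unit grades on the 0-5 scale and that "passed" means a unit grade of at least 1:

```python
def course_grade(l1, l2, l3, p):
    """Weighted course grade: three labs at 20% each, project at 40%.

    Returns None when any unit is failed (assumed here: grade < 1),
    since no final grade is given in that case.
    """
    if min(l1, l2, l3, p) < 1:
        return None
    return 0.2 * l1 + 0.2 * l2 + 0.2 * l3 + 0.4 * p

# Example: labs graded 3, 4, 3 and project graded 4 give
# 0.2*3 + 0.2*4 + 0.2*3 + 0.4*4, i.e. a final grade of about 3.6
```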
Numerical evaluation scale (0-5)
| Prerequisite course | Recommendation |
|---|---|
| SGN-12007 Introduction to Image and Video Processing | Advisable |
Additional information about prerequisites
Either SGN-12000, SGN-12001, SGN-12006 or SGN-12007 is a prerequisite.
Correspondence of content
| Course | Corresponds to |
|---|---|
| SGN-34006 3D and Virtual Reality, 5 cr | SGN-5456 3D Media Technology, 4 cr |