M.S. Thesis Defense: Max Nguyen
Time: 11:00 am – 12:00 pm
Location: Schorr Center, Room 211, 1100 T St, Lincoln NE 68588
Virtual Location: Zoom
“OpNet: Pixel Synthesis Neural Network for Interactive Volume Visualization”
Volume visualization systems responsive to the user's data-dependent operations, such as changing the color and opacity transfer functions, can significantly enhance the efficiency of uncovering critical intrinsic patterns within volumetric data. However, existing volume visualization systems often require re-execution of the entire visualization pipeline whenever the transfer functions are altered, resulting in substantial computational overhead and hindering real-time interactivity. In this work, we propose a pixel synthesis neural network, OpNet, that directly predicts pixel results in constant time by jointly considering the data, viewing parameters, and transfer functions. Our approach decouples the data and color/opacity mapping from the compositing process of ray casting by learning features from the data and the transfer functions separately. This design enables efficient rendering under modified transfer functions by inferring only a subset of the network, thereby significantly reducing input latency. Furthermore, OpNet extracts a latent representation from rays to accurately model pixel-level similarities, and leverages superpixel rendering through ray grouping to further optimize rendering performance. Experimental results demonstrate that OpNet achieves lower rendering latency with high rendering quality compared to traditional GPU-accelerated ray casting and state-of-the-art generative image synthesis methods, offering a promising solution for real-time, interactive volume visualization.
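To make the decoupling idea concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: a ray branch that encodes data and viewing parameters, a transfer-function branch, and a pixel head that composites the two latents into a color. All module names, layer sizes, and input shapes here are illustrative assumptions; the point is that when the user edits the transfer function, the cached ray latents are reused and only the TF branch and the head are re-inferred.

```python
import torch
import torch.nn as nn

class OpNetSketch(nn.Module):
    """Hypothetical sketch of a decoupled pixel-synthesis network."""

    def __init__(self, ray_feat_dim=256, tf_feat_dim=64, tf_bins=256):
        super().__init__()
        # Ray branch: encodes per-ray data/view parameters into a latent.
        # Re-run only when the data or the viewpoint changes.
        self.ray_encoder = nn.Sequential(
            nn.Linear(6, 256), nn.ReLU(),           # e.g., ray origin + direction
            nn.Linear(256, ray_feat_dim),
        )
        # TF branch: encodes the RGBA transfer-function table into a small
        # latent. The only branch (besides the head) re-run after a TF edit.
        self.tf_encoder = nn.Sequential(
            nn.Linear(tf_bins * 4, 128), nn.ReLU(),
            nn.Linear(128, tf_feat_dim),
        )
        # Pixel head: maps the concatenated latents directly to a pixel
        # color, standing in for the compositing step of ray casting.
        self.pixel_head = nn.Sequential(
            nn.Linear(ray_feat_dim + tf_feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),                      # RGB
        )

    def forward(self, rays, tf):
        # rays: (N, 6) per-ray parameters; tf: (tf_bins, 4) RGBA table.
        z_ray = self.ray_encoder(rays)              # (N, ray_feat_dim), cacheable
        z_tf = self.tf_encoder(tf.reshape(1, -1))   # (1, tf_feat_dim)
        z_tf = z_tf.expand(z_ray.shape[0], -1)      # one TF latent shared by all rays
        return torch.sigmoid(self.pixel_head(torch.cat([z_ray, z_tf], dim=-1)))

net = OpNetSketch()
rays = torch.randn(1024, 6)   # a batch of rays for one frame
tf = torch.rand(256, 4)       # a user-edited 256-bin RGBA transfer function
pixels = net(rays, tf)        # (1024, 3) predicted pixel colors
```

Under these assumptions, a transfer-function edit costs one pass through the small TF encoder plus the head per ray, rather than a full re-execution of the rendering pipeline, which is the source of the latency reduction the abstract describes.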
Committee:
Dr. Hongfeng Yu, Advisor
Dr. Lisong Xu
Dr. Huijing Du
This event originated in the School of Computing.