All events are in Central time unless specified.
Presentation

Ph.D. Dissertation Defense: Jianxin Sun

Date:
Time:
10:00 am – 12:00 pm
Zoom Room: https://us02web.zoom.us/j/85650025988
Meeting ID: 856 5002 5988
“Interactive Volume Visualization of Large-scale Scientific Data Modeled by Functional Approximation”

Abstract: 3D volume rendering is widely employed to reveal insightful intrinsic patterns in scientific volumetric datasets across many domains. However, scientific datasets are often large and multidimensional, with complex structures and diverse scales, which makes it challenging to produce high-quality volume renderings efficiently. My dissertation work aims to improve the quality and performance of volume rendering, and to study their interplay, by holistically considering multiple aspects of the volume rendering pipeline, such as data representations, visualization algorithms, and computing system support.
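At the core of every direct volume rendering pipeline is the compositing of sampled colors and opacities along each viewing ray. As background for the abstract, a minimal front-to-back compositing sketch in Python (illustrative only, not code from the dissertation):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing along one viewing ray.
    samples: (color, opacity) pairs ordered nearest-to-farthest.
    Illustrative sketch, not the dissertation's implementation."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c  # contribution attenuated by accumulated opacity
        alpha += (1.0 - alpha) * a      # accumulate opacity
        if alpha >= 0.99:               # early ray termination
            break
    return color, alpha
```

A fully opaque first sample hides everything behind it, which is what early ray termination exploits to skip the rest of the ray.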

My initial focus is on exploring how data representations using functional approximation can improve the quality and performance of volume visualization compared to traditional local filters. Multivariate functional approximation (MFA) is utilized to provide a more accurate evaluation of high-order values and derivatives throughout the spatial domain, mitigating artifacts associated with zero- or first-order interpolation. Subsequently, a direct volume rendering pipeline based on MFA (MFA-DVR) is developed to enhance rendering accuracy by decoding the continuous model encoded by functional approximation.
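MFA models are built on B-spline bases, which allow values (and derivatives) to be evaluated anywhere in the domain rather than only at grid points. As a hedged one-dimensional illustration (the knot layout and control points below are assumptions for the sketch, not the dissertation's encoder), the Cox-de Boor recursion evaluates such a model, contrasted with the piecewise-linear interpolation it replaces:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def eval_spline(ctrl, p, u, knots):
    """Evaluate a degree-p spline with the given control points at u."""
    return sum(c * bspline_basis(i, p, u, knots) for i, c in enumerate(ctrl))

def linear_interp(samples, x):
    """First-order alternative: piecewise-linear between unit-spaced samples."""
    i = min(int(x), len(samples) - 2)
    t = x - i
    return (1.0 - t) * samples[i] + t * samples[i + 1]
```

The linear interpolant has discontinuous first derivatives at sample boundaries; a cubic spline model is C2-continuous, which is what mitigates the gradient artifacts mentioned above.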

Second, although MFA demonstrates the capability to produce rendering outcomes of increased precision, the relatively slow query time of a large MFA model limits its scalability for interactive visualization of large datasets. My follow-up work exploits system support, particularly massively distributed computing power, to develop the first scalable interactive volume visualization pipeline for MFA models derived from extensive datasets.

Third, I enhance the efficiency of interactive visualization systems by addressing the high input latency that stems from I/O bottlenecks and from limited fast memory, which is prone to high cache miss rates. To ensure a seamless user experience, I have proposed a deep learning-based prefetching method that predicts both the location and the likelihood distribution of the next view, widening the prefetching range and thus optimizing the data flow across the memory hierarchy to reduce input latency during large-scale volume visualization.
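The prediction-driven prefetching can be caricatured as a likelihood-weighted, budget-bounded block fetch: rank candidate next views by predicted probability and load their data blocks until fast memory is full. All names in this sketch (`plan_prefetch`, `view_probs`, `blocks_for_view`) are hypothetical illustrations, not the dissertation's API, and the deep model itself is abstracted away as a given probability table:

```python
def plan_prefetch(view_probs, blocks_for_view, budget):
    """Greedy prefetch plan. Rank candidate views by predicted likelihood
    (here a precomputed dict standing in for the deep model's output) and
    fetch their blocks until the fast-memory budget is exhausted.
    Hypothetical sketch, not the dissertation's implementation."""
    plan, used = [], 0
    for view, _prob in sorted(view_probs.items(), key=lambda kv: -kv[1]):
        for blk in blocks_for_view[view]:
            if blk in plan:       # block already scheduled by a likelier view
                continue
            if used + 1 > budget:  # fast memory full: stop prefetching
                return plan
            plan.append(blk)
            used += 1
    return plan
```

Because likelier views are planned first, a tight budget is spent on the blocks most likely to be needed, which is the sense in which a likelihood distribution (rather than a single predicted view) widens the useful prefetching range.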

Lastly, I explore the fusion of functional approximation and multi-resolution techniques to further refine interactive visualization. Specifically, I design an adaptive encoding approach that effectively compresses the large number of micro-blocks derived from large-scale datasets into compact continuous micro-models using functional approximation. I then develop a GPU-accelerated out-of-core multi-resolution framework that generates visualization outcomes directly from these micro-models, achieving higher rendering fidelity while retaining interactive input responsiveness.

The frameworks I have developed in my research have practical implications for scientists working with large-scale volumetric datasets, helping them efficiently and effectively discover important features and potentially arrive at new scientific insights. In the future, these frameworks can be adapted to newer data representations and more advanced hardware architectures, further improving the quality and performance of visualization systems and enhancing the relevance and applicability of visualization techniques in the academic and scientific community.

Committee members:
Hongfeng Yu (Chair)
Stephen Scott
Lisong Xu
Yufeng Ge
Tom Peterka (External committee member)


This event originated in School of Computing.