Session: Research Posters
Paper Number: 173474
FinNET: A High-Fidelity, Multi-Scale, and Lightweight Deep Learning Neural Network for Predicting Phonon Energy and Thermal Distribution in FinFET Transistors
The size of single-transistor geometries in modern integrated circuits (ICs) continues to shrink, driven by Moore’s Law, which has precipitated a significant rise in on-chip power densities. In FinFET architectures, with fin widths and heights approaching tens of nanometers, localized Joule heating can generate extreme thermal gradients that degrade device performance, accelerate degradation mechanisms (e.g., negative-bias temperature instability, time-dependent dielectric breakdown, electromigration), and shorten operational lifetimes. While the phonon Boltzmann Transport Equation (BTE) provides a first-principles framework for capturing non-equilibrium and ballistic heat transport across micro- and nanoscale domains, its seven-dimensional integro-differential form (3 spatial dimensions + 3 phonon momentum dimensions + time) makes direct numerical solution computationally expensive for practical IC design workflows. Even state-of-the-art, GPU-accelerated solvers such as JAX-BTE require minutes to hours per single-transistor simulation mesh, making full-chip thermal analysis infeasible. Addressing this critical gap, we introduce FinNET, an AI-driven surrogate modeling framework that delivers high-fidelity, multi-scale predictions of both steady-state and transient phonon energy and temperature distributions in FinFET transistors, significantly decreasing the computational cost relative to direct BTE solvers.
FinNET’s core innovation is a modular “encoder–operator–decoder” paradigm, inspired by the flexibility of LEGO® building blocks. First, a spatial encoder φ (a spectral-normalized multilayer perceptron with skip connections) projects geometry information (x, y, z) into a compact d-dimensional latent representation, z_loc = φ(x, y, z). Second, a latent thermal operator L(h) conditions the spatial code on the heat-source intensity h, implementing the mapping z_cond = h · z_loc, which captures the dependence of thermal magnitude on input power. Third, two task-specific decoders g_T and g_E map the conditioned latent space to either the converged temperature or the 32-dimensional phonon energy vector. By decoupling geometry, thermal conditioning, and output modalities, FinNET provides an explainable, extensible architecture that can be adapted to future needs without retraining the entire model.
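The encoder–operator–decoder pipeline can be sketched in a few lines. This is a minimal NumPy illustration of the data flow z_loc = φ(x, y, z), z_cond = h · z_loc, followed by the two decoders g_T and g_E; the layer sizes, activations, and random initialization are illustrative assumptions, not the actual FinNET implementation (which uses spectral normalization and skip connections):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # Small random weights; real training would learn these.
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Plain MLP forward pass; tanh hidden activations are an assumption.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

d = 16                          # latent dimension (illustrative)
phi = init_mlp([3, 64, d])      # spatial encoder: (x, y, z) -> z_loc
g_T = init_mlp([d, 64, 1])      # temperature decoder
g_E = init_mlp([d, 64, 32])     # 32-dimensional phonon-energy decoder

def finnet_forward(xyz, h):
    z_loc = mlp(phi, xyz)       # geometry -> latent code
    z_cond = h * z_loc          # latent thermal operator L(h): scale by source intensity
    return mlp(g_T, z_cond), mlp(g_E, z_cond)

T, E = finnet_forward(np.array([[0.1, 0.2, 0.3]]), h=2.0)
```

Because the three stages only communicate through the latent code, any one of them (e.g., the decoder set) can be swapped or extended without touching the others, which is the modularity the "LEGO" analogy refers to.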
To extend FinNET to transient-state prediction, we introduce a learnable temporal aggregation operator A_θ, which takes as input the sequence of per-pulse impulse responses generated by applying the task decoders at each discrete time step. Rather than decoding the full temporal impulse responses, the aggregator is implemented as a lightweight convolutional neural network that implicitly learns the convolution kernel governing heat diffusion and distribution. This formulation avoids expensive high-dimensional matrix operations and dramatically reduces memory consumption during inference.
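The aggregation step amounts to a causal 1-D convolution over the per-pulse responses. The sketch below shows the operation A_θ would learn, using a random stand-in kernel and toy array sizes (all names and dimensions here are illustrative assumptions, not FinNET's trained operator):

```python
import numpy as np

rng = np.random.default_rng(1)

n_steps, n_cells = 10, 5                         # toy sizes (illustrative)
impulses = rng.normal(size=(n_steps, n_cells))   # per-pulse decoder outputs
kernel = rng.normal(scale=0.1, size=3)           # stand-in for the learned 1-D kernel

def aggregate(impulses, kernel):
    # Causal 1-D convolution along the time axis, applied cell-wise:
    # out[t] = sum_j kernel[j] * impulses[t - j], with zero padding for t - j < 0.
    k = len(kernel)
    padded = np.vstack([np.zeros((k - 1, impulses.shape[1])), impulses])
    out = np.zeros_like(impulses)
    for t in range(impulses.shape[0]):
        window = padded[t:t + k]                 # (k, n_cells) recent pulses
        out[t] = kernel[::-1] @ window           # weighted sum over the window
    return out

field = aggregate(impulses, kernel)
```

Because the kernel has fixed, small support, the cost and memory scale linearly in the number of time steps and cells, in contrast to materializing a full (n_steps × n_steps) response matrix.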
We train and validate FinNET on datasets generated by JAX-BTE for a single FinFET transistor with a silicon substrate, discretized into approximately 50,000 finite-volume cells and 32 angular slices. The dataset comprises 100 steady-state simulations under varying heat-source intensities and 100 transient sequences spanning 4 ns at 0.4 ns intervals. FinNET achieves predictive accuracies of 95.3%–99.9% relative to JAX-BTE ground truth, trains in under 30 minutes on an NVIDIA A100 GPU, and completes thermal/phonon field inference in just a few seconds per query. This performance enables real-time design exploration and makes deployment as a web-based, cloud-computing capability practical.
Our modular design provides straightforward pathways for future enhancements: a geometric operator S(α) can map the base spatial encoder to a continuous family of FinFET shapes; multi-material embeddings can be introduced by augmenting the encoder inputs with local thermal conductivities; and meta-learning or adaptive sampling strategies can further reduce data requirements by targeting high-error regions. By bridging physics-based fidelity with AI-driven efficiency, FinNET advances the frontier of surrogate modeling for nanoscale thermal analysis, providing a scalable and practical tool for proactive thermal management in next-generation semiconductor systems.
Presenting Author: Jasmine Liang, University of Notre Dame
Presenting Author Biography: Jasmine is a research associate at Notre Dame AI and SciML, with a PhD in Computer Science and Biomechanics from Iowa State University. Her research interests focus on machine learning applied to a variety of sequential data, e.g., time series, spatiotemporal signals, and natural language, with applications spanning healthcare, biomechanics, sensors, and engineering. She is known for contributions integrating AI with thermal-distribution prediction for transistors and with IMU-based motion understanding.
Authors:
Jasmine Liang, University of Notre Dame
Wenjie Shang, University of Notre Dame
Yi Liu, University of Notre Dame
Jyoti Panda, University of Notre Dame
Jiahang Zhou, University of Notre Dame
Ben-Chi Ma, University of Notre Dame
Jian-Xun Wang, University of Notre Dame
Tengfei Luo, University of Notre Dame
Paper Type: Poster Presentation
