Session: Government Agency Student Posters
Paper Number: 173057
Electrospinning Process Monitoring From Multi-Streaming Videos
Electrospinning is a versatile and powerful technique for producing nanofibers, with applications ranging from biomedical scaffolds to filtration systems and energy materials. Despite its promise, the process remains highly sensitive to variations in voltage, flow rate, and environmental conditions, which often result in inconsistent jet formation and fiber morphology. Current quality control methods typically rely on post-process analysis, which limits responsiveness to dynamic instabilities during operation. To overcome this limitation, we propose a real-time monitoring framework that combines multi-view video acquisition, geometric feature extraction, tensor decomposition, and statistical anomaly detection to assess jet axisymmetry and overall process stability during electrospinning. Our system uses two synchronized camera streams, capturing side and top views, to observe the electrospinning jet in real time. Each frame is segmented, and key visual features are extracted from the two camera streams: jet angle, cone width, and symmetry from both views, as well as centerline coordinates and area. These features are then structured into a time-series tensor that represents the evolving geometry of the jet and Taylor cone. Real-time monitoring of the electrospinning process is challenging because of the high dimensionality of the streaming data, which makes modeling computationally demanding. To address this, we developed a streamlined framework that performs feature extraction, dimensionality reduction, and anomaly detection on the fly, enabling efficient monitoring without sacrificing interpretability.
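The feature-tensor construction above can be sketched as follows. This is a minimal illustration, not the paper's measurement code: the extractor returns simple image statistics as stand-ins for the actual geometric measurements, and the feature names are assumptions based on the description above.

```python
# Illustrative sketch: assemble per-frame features from the side and top
# camera streams into a (frames x views x features) tensor.
# The extractor is a placeholder, NOT the real geometric measurement code.
import numpy as np

FEATURES = ["jet_angle", "cone_width", "symmetry", "centerline_x", "area"]

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Placeholder: a real extractor would segment the jet and Taylor cone
    and measure each geometric quantity from the segmented mask."""
    return np.array([
        frame.mean(),                         # stand-in for jet angle
        frame.std(),                          # stand-in for cone width
        np.abs(frame - frame.T).mean(),       # crude symmetry proxy (square frames)
        frame.argmax() % frame.shape[1],      # stand-in for centerline x
        float((frame > frame.mean()).sum()),  # stand-in for area
    ], dtype=float)

def build_tensor(side_frames, top_frames):
    """Stack per-frame feature vectors from both views into one tensor."""
    rows = [np.stack([extract_features(s), extract_features(t)])
            for s, t in zip(side_frames, top_frames)]
    return np.stack(rows)  # shape: (n_frames, 2 views, 5 features)

# 50-frame window, matching the batch size reported in the abstract.
rng = np.random.default_rng(0)
side = [rng.random((64, 64)) for _ in range(50)]
top = [rng.random((64, 64)) for _ in range(50)]
X = build_tensor(side, top)
print(X.shape)  # (50, 2, 5)
```

Each window of this tensor then feeds the decomposition and monitoring stages described below.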
We reveal the underlying structure of this high-dimensional feature space through Canonical Polyadic (CP) decomposition, which reduces the data to low-rank factors that approximate the original behavior. Each camera captures video at 22.4 frames per second (FPS), while our real-time framework processes data at an average of 29.0 FPS, including feature extraction, CP decomposition, and anomaly detection. Because the processing speed exceeds the camera input rate, the system achieves near real-time monitoring without frame loss, despite operating on high-dimensional features. The reconstructed tensor is compared to the original, and the reconstruction error is computed on a per-frame basis. We apply an Exponentially Weighted Moving Average (EWMA) control chart to track temporal shifts in the joint reconstruction error across both camera views. To complement this, we implement a bootstrap-based Hotelling's T² chart for rigorous joint analysis and enhanced sensitivity to multivariate anomalies across the feature space. Our visualization suite includes real-time plots of the EWMA and T² statistics, anomaly magnitudes by feature group (i.e., centerline, side geometry, top geometry), and comparisons of actual versus reconstructed feature trajectories. We also assess jet axisymmetry by plotting side-top differences and scatter correlations. A ring buffer stores recent frames so that CP decomposition and anomaly detection can run over moving windows. On test hardware (Intel Core i7-12700H CPU, 16 GB RAM), each inference window processed a fixed batch of 50 frames, spanning 2.23 seconds of real-time video feed at the 22.4 FPS camera input rate. Each inference step, including feature extraction, CP decomposition, EWMA calculation, T² analysis, and visualization updates, completed in under 1.75 seconds on average, keeping the system reliably ahead of the incoming data stream.
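The CP-plus-EWMA monitoring loop can be sketched in a few dozen lines. This is a hedged, self-contained stand-in: it fits a rank-R CP decomposition by alternating least squares (ALS) on a clean reference window, scores new frames by their residual against the fixed view/feature factors, and runs an EWMA chart over those residuals. The rank, smoothing weight lam, and limit width L are illustrative choices, not the settings used in our experiments.

```python
# Hedged sketch: CP decomposition via ALS, per-frame reconstruction error
# against fixed factors, and an EWMA control chart over that error.
import numpy as np

def khatri_rao(B, C):
    # Column-wise Khatri-Rao product: row (j*K + k) equals B[j] * C[k].
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_als(X, rank, n_iter=50, seed=0):
    """Fit CP factors A (frames), B (views), C (features) by ALS."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        A = X.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def frame_error(frame, B, C):
    """Residual norm of one (views x features) frame after projecting it
    onto the span of the fitted view/feature factors."""
    M = khatri_rao(B, C)
    v = frame.reshape(-1)
    a, *_ = np.linalg.lstsq(M, v, rcond=None)
    return float(np.linalg.norm(v - M @ a))

# Clean reference window: low-rank signal plus small noise.
rng = np.random.default_rng(1)
Bt, Ct = rng.random((2, 2)), rng.random((5, 2))
X_ref = np.einsum('ir,jr,kr->ijk', rng.random((40, 2)), Bt, Ct)
X_ref += 0.01 * rng.standard_normal(X_ref.shape)
_, B, C = cp_als(X_ref, rank=2)

# Reference errors set the EWMA center line and upper control limit.
ref_err = np.array([frame_error(f, B, C) for f in X_ref])
lam, L = 0.2, 3.0
mu, sigma = ref_err.mean(), ref_err.std()
ucl = mu + L * sigma * np.sqrt(lam / (2 - lam))

# Monitor new frames; the last five carry an injected geometric fault.
new = np.einsum('ir,jr,kr->ijk', rng.random((10, 2)), Bt, Ct)
new += 0.01 * rng.standard_normal(new.shape)
new[5:] += rng.random((2, 5))  # simulated jet-asymmetry fault
z, flags = mu, []
for f in new:
    z = lam * frame_error(f, B, C) + (1 - lam) * z
    flags.append(z > ucl)
print(flags)
```

In the real system, these errors would be computed per moving window from the ring buffer rather than in a single batch, with both camera views contributing to the joint error.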
For initial testing of the framework, we evaluated recorded videos from both camera angles. Runs with no known visual anomalies (e.g., 12 kV, 1.0 mL/hr, 15 cm needle-to-collector distance) show stable reconstruction with low error for most features, while fault cases yield statistically significant deviations in both jet-angle and symmetry features. The framework integrates domain-specific feature extraction with low-rank tensor decomposition to create a modular, interpretable, real-time monitoring system for electrospinning. Our contributions not only provide a powerful tool for monitoring jet stability and axisymmetry but also lay the groundwork for integrating closed-loop control mechanisms into next-generation electrospinning platforms. Future work will explore extensions to more complex fiber morphologies, 3D printing systems, and integration with deep learning models for hybrid physical-statistical modeling.
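The bootstrap-based Hotelling's T² chart mentioned above can be sketched as follows. This is a minimal illustration on synthetic data: the three-dimensional error vectors stand in for the per-feature-group errors (centerline, side geometry, top geometry), and the fixed-(mu, S) resampling scheme with a 99th-percentile limit is an assumed simplification, not the exact implementation.

```python
# Hedged sketch of a bootstrap-based Hotelling's T^2 chart over three
# feature-group error vectors. Synthetic reference data; illustrative only.
import numpy as np

def hotelling_t2(x, mu, S_inv):
    d = x - mu
    return float(d @ S_inv @ d)

rng = np.random.default_rng(2)
ref = rng.normal(0.0, 1.0, size=(200, 3))   # in-control reference errors
mu = ref.mean(axis=0)
S_inv = np.linalg.inv(np.cov(ref, rowvar=False))

# Bootstrap the control limit: resample reference rows with replacement,
# recompute T^2 for each draw, and take the 99th percentile as the UCL.
boot = []
for _ in range(200):
    idx = rng.integers(0, len(ref), size=len(ref))
    boot.extend(hotelling_t2(x, mu, S_inv) for x in ref[idx])
ucl = float(np.percentile(boot, 99))

fault_t2 = hotelling_t2(np.array([4.0, 4.0, 4.0]), mu, S_inv)  # faulty frame
print(fault_t2 > ucl)  # True: the fault exceeds the bootstrap limit
```

A joint chart of this form flags frames whose combined feature-group errors drift outside the in-control region even when no single group is individually extreme.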
Presenting Author: Raahil Pattan, University of Louisville
Presenting Author Biography: Raahil Pattan is a junior for the 2025-2026 academic year and an undergraduate student in electrical and computer engineering (ECE) at Purdue. He worked on this electrospinning project during the summer of 2025 through an REU program at the University of Louisville. He plans to pursue both a Master's degree and a PhD.
Authors:
Raahil Pattan, University of Louisville
Pablo Zuniga, University of Louisville
Luis Segura, University of Louisville
Paper Type
Government Agency Student Poster Presentation
