
Graph2Video: Leveraging Video Models to Model Dynamic Graph Evolution

Liu, Hua; Wei, Yanbin; Xing, Fei; Derr, Tyler; Han, Haoyu; Zhang, Yu (2026). Proceedings of the AAAI Conference on Artificial Intelligence, 40(18), 15315–15323.

Many real-world systems—like social networks, recommendation platforms, and traffic systems—can be represented as dynamic graphs, where connections between entities change over time. Predicting future connections (link prediction) in these systems is challenging because existing models often miss subtle changes in interaction order, struggle to capture long-term patterns, and do not fully account for how specific pairs of nodes relate to each other over time.

To address this, the authors propose Graph2Video, a new approach inspired by how videos are processed in computer vision. Instead of looking at a graph as a static snapshot, the method treats the evolving neighborhood around a potential connection as a sequence of “graph frames” (like frames in a video). These frames are stacked into a “graph video,” allowing the model to capture both short-term changes and long-term trends using techniques originally developed for video analysis. The method then creates a compact representation (an embedding, or numerical summary) for each potential link, which can be easily integrated into existing models.
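The "graph video" idea above can be sketched in a few lines: stack per-timestep snapshots of the neighborhood around a candidate link into a tensor, then pool over the time axis to get a compact link embedding. This is a minimal illustrative sketch, not the paper's actual architecture; the function name, the temporal kernel, and the mean-pooling step are all assumptions standing in for the video-style modules the authors use.

```python
import numpy as np

def graph_video_embedding(frames, kernel):
    """Illustrative sketch (not the paper's API): stack neighborhood
    'graph frames' into a 'graph video' and pool over time into a
    compact link embedding.
    frames: list of (n, n) adjacency snapshots around a candidate link
    kernel: (k,) temporal convolution weights (hypothetical choice)"""
    video = np.stack(frames, axis=0)          # (T, n, n) "graph video"
    flat = video.reshape(video.shape[0], -1)  # (T, n*n): one vector per frame
    T, k = flat.shape[0], kernel.shape[0]
    # A temporal convolution over consecutive frames captures short-term
    # changes; mean-pooling the outputs summarizes long-term trends.
    conv = np.stack([kernel @ flat[t:t + k] for t in range(T - k + 1)])
    return conv.mean(axis=0)                  # compact link embedding

rng = np.random.default_rng(0)
frames = [(rng.random((4, 4)) < 0.3).astype(float) for _ in range(6)]
emb = graph_video_embedding(frames, np.array([0.25, 0.5, 0.25]))
print(emb.shape)  # (16,)
```

The resulting vector plays the role of the paper's link embedding: a fixed-size summary that a downstream link-prediction model can consume alongside its usual features.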

Experiments show that Graph2Video improves prediction accuracy compared to current leading methods. Overall, the study demonstrates that adapting ideas from video processing is an effective way to better understand and predict how connections evolve in complex, time-changing networks.
