Can Multimodal LLMs Perform Time Series Anomaly Detection?

Illinois Institute of Technology, Emory University, University of Illinois Chicago, University of Southern California

Left: workflow of VisualTimeAnomaly. Right: performance comparison across various settings.

Abstract

Large language models (LLMs) have been increasingly used in time series analysis. However, the potential of multimodal LLMs (MLLMs), particularly vision-language models, for time series remains largely under-explored. One natural way for humans to detect time series anomalies is through visualization and textual description. Motivated by this, we raise a critical and practical research question: Can multimodal LLMs perform time series anomaly detection?

To answer this, we propose the VisualTimeAnomaly benchmark to evaluate MLLMs on time series anomaly detection (TSAD). Our approach transforms numerical time series data into images and feeds these images into various MLLMs, including proprietary models (GPT-4o and Gemini-1.5) and open-source models (LLaVA-NeXT and Qwen2-VL), each with one larger and one smaller variant. In total, VisualTimeAnomaly contains 12.4k time series images spanning 3 scenarios and 3 anomaly granularities with 9 anomaly types, evaluated across 8 MLLMs. Starting with the univariate case (point- and range-wise anomalies), we extend our evaluation to more practical settings, including multivariate and irregular time series, as well as variate-wise anomalies. Our study reveals several key insights (a minimal sketch of the detection pipeline follows the list):

    1) MLLMs detect range- and variate-wise anomalies more effectively than point-wise anomalies;

    2) MLLMs are highly robust to irregular time series, even with 25% of the data missing;

    3) open-source MLLMs perform comparably to proprietary models in TSAD: while open-source MLLMs excel on univariate time series, proprietary MLLMs are more effective on multivariate time series.
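
To make the workflow concrete, below is a minimal sketch of the image-based detection pipeline, assuming the OpenAI Python SDK with gpt-4o as the MLLM backend; the exact plotting choices, prompts, and answer parsing used in VisualTimeAnomaly may differ. The synthetic series, the injected anomaly, and the 25% point-dropping step are illustrative assumptions.

import base64, io
import numpy as np
import matplotlib.pyplot as plt
from openai import OpenAI

def series_to_png_b64(values, timestamps=None):
    """Render a (possibly irregular) time series as a base64-encoded PNG."""
    fig, ax = plt.subplots(figsize=(8, 3))
    x = np.arange(len(values)) if timestamps is None else timestamps
    ax.plot(x, values, marker="o", markersize=2, linewidth=1)
    ax.set_xlabel("time step")
    ax.set_ylabel("value")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=150, bbox_inches="tight")
    plt.close(fig)
    return base64.b64encode(buf.getvalue()).decode()

# Synthetic univariate series with one injected point-wise anomaly,
# then 25% of the points dropped to simulate an irregular series.
rng = np.random.default_rng(0)
t = np.arange(400)
y = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(400)
y[200] += 3.0
keep = np.sort(rng.choice(400, size=300, replace=False))
image_b64 = series_to_png_b64(y[keep], t[keep])

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "You are a time series anomaly detector. "
                     "List the time-step intervals in this plot that look anomalous."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)

Swapping the backend for an open-source MLLM such as LLaVA-NeXT or Qwen2-VL only changes the model call; the series-to-image step stays the same.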

Finally, we discuss the broader implications of our findings for time series analysis in the era of MLLMs. We release our dataset and code HERE to support future research.

BibTeX

@article{xu2025can,
  title={Can Multimodal LLMs Perform Time Series Anomaly Detection?},
  author={Xu, Xiongxiao and Wang, Haoran and Liang, Yueqing and Yu, Philip S and Zhao, Yue and Shu, Kai},
  journal={arXiv preprint arXiv:2502.17812},
  year={2025}
}

Beyond Numbers: Advancing Time Series Analysis in the Era of Multimodal LLMs

Illinois Institute of Technology, University of Southern California, University of Illinois Chicago, Emory University

Time series, traditionally represented as a temporally ordered sequence of numbers, can be flexibly expressed across diverse modalities, including text, images, graphs, audio, and tables.

Abstract

The rapid advancements in Multimodal Large Language Models (MLLMs) have garnered significant research attention, revolutionizing various domains, including time series analysis. Notably, time series data can be represented in diverse modalities, making it highly compatible with the progress of MLLMs. This survey provides a comprehensive overview of time series analysis in the era of multimodal LLMs. We systematically summarize existing work from two perspectives: data (taxonomy of time series modalities) and models (taxonomy of multimodal LLMs). From the data perspective, we emphasize that time series, traditionally represented as a sequence of numbers with temporal order, can also be expressed in modalities such as text, images, graphs, audio, and tables. From the model perspective, we explore MLLMs that are applicable to, or hold potential for, specific time series modalities. Finally, we identify future research directions and key challenges at the intersection of time series and MLLMs, including the video modality, reasoning, agents, interpretability, and hallucination. To support ongoing research, we maintain a GitHub repository HERE to track the latest developments in this rapidly evolving field.
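
As a concrete illustration of the data-side taxonomy, the sketch below re-expresses one numeric series in three of the modalities discussed (text, image, and table); the specific encodings are simple illustrative assumptions, not prescriptions from any particular surveyed method.

import base64, io
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# A toy numeric series: two periods of a sine wave, rounded to one decimal.
values = np.round(10 * np.sin(np.linspace(0, 4 * np.pi, 24)), 1)

# Text modality: a comma-separated string, as used in prompt-based approaches.
as_text = ", ".join(str(v) for v in values)

# Table modality: an explicit timestamp/value table.
as_table = pd.DataFrame({"t": np.arange(len(values)), "value": values})

# Image modality: a rendered line plot, base64-encoded for an MLLM request.
fig, ax = plt.subplots(figsize=(6, 2))
ax.plot(values)
buf = io.BytesIO()
fig.savefig(buf, format="png")
plt.close(fig)
as_image_b64 = base64.b64encode(buf.getvalue()).decode()

print(as_text[:48] + " ...")
print(as_table.head())
print(f"image payload: {len(as_image_b64)} base64 chars")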

BibTeX

The arXiv preprint is coming soon.