2024

A novel weighted approach for time series forecasting based on visibility graph

Tianxiang Zhan, Fuyuan Xiao

Pattern Recognition 2024, CAS Journal Ranking Tier 1 (upgraded version), CCF B

Time series analysis has attracted attention in many fields, and forecasting algorithms based on complex network analysis are an active research topic. A central question is how to exploit the information in a time series to obtain more accurate forecasts. To address it, this paper proposes a weighted network forecasting method that improves forecasting accuracy. First, the time series is transformed into a complex network and similarities between nodes are computed. These similarities are then used as weights to combine the forecasts produced by different nodes. Compared with the previous method, the proposed method is more accurate. To verify its effectiveness, experiments on the M1 and M3 datasets and a Construction Cost Index (CCI) dataset show that the proposed method achieves more accurate forecasting performance.
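The natural visibility graph construction this line of work builds on is standard; a minimal O(n²) sketch is below (the paper's node-similarity weighting and weighted forecasting step are not reproduced here):

```python
def visibility_graph(series):
    """Natural visibility graph: each time point is a node; an edge (a, b)
    exists when every intermediate point lies strictly below the line of
    sight between (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[a] + (series[b] - series[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges
```

For example, in `[3, 1, 2]` the endpoints can "see" each other over the dip, so `(0, 2)` is an edge, while in `[1, 3, 1]` the middle peak blocks it.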

Generalized information entropy and generalized information dimension

Tianxiang Zhan, Jiefeng Zhou, Zhen Li, Yong Deng

Chaos, Solitons & Fractals 2024, CAS Journal Ranking Tier 1 (upgraded version)

The concept of entropy plays a significant role in thermodynamics and information theory and remains an active research topic. Information entropy, as a measure of information, takes many forms, such as Shannon entropy and Deng entropy, but there is no unified interpretation of information from a measurement perspective. To address this, the article proposes Generalized Information Entropy (GIE), which unifies entropies based on the mass function. GIE also establishes the relationship between entropy, fractal dimension, and the number of events; on this basis, Generalized Information Dimension (GID) is proposed, extending the definition of information dimension from probability distributions to mass functions. GIE is useful in approximate computation and in coding systems. In coding applications, information viewed through GIE exhibits a certain particle-like nature: the same event can have different representational states, analogous to the number of microscopic states in Boltzmann entropy.
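As background for the entropies GIE unifies, Deng entropy over a mass function can be computed as below; it reduces to Shannon entropy when every focal element is a singleton (GIE itself is defined in the paper and not reproduced here):

```python
from math import log2

def deng_entropy(mass):
    """Deng entropy of a mass function. Keys are frozensets (focal
    elements), values are masses summing to 1. Each focal element A
    contributes -m(A) * log2(m(A) / (2^|A| - 1))."""
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in mass.items() if m > 0)

# Singleton focal elements: Deng entropy coincides with Shannon entropy.
shannon_case = deng_entropy({frozenset('a'): 0.5, frozenset('b'): 0.5})  # 1.0 bit
# A two-element focal set picks up the (2^|A| - 1) state count: log2(3) bits.
uncertain_case = deng_entropy({frozenset('ab'): 1.0})
```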

Time Evidence Fusion Network: Multi-source View in Long-Term Time Series Forecasting

Tianxiang Zhan, Yuanpeng He, Zhen Li, Yong Deng

arXiv 2024, preprint

In real-world scenarios, time series forecasting often demands timeliness, making research on model backbones a perennially hot topic. To meet these performance demands, we propose a novel backbone from the perspective of information fusion. Introducing the Basic Probability Assignment (BPA) Module and the Time Evidence Fusion Network (TEFN), grounded in evidence theory, allows us to achieve superior performance, and the multi-source information fusion perspective effectively improves forecasting accuracy. Because the BPA is generated via fuzzy theory, TEFN also has considerable interpretability. In experiments on real data, TEFN is partially state of the art, with low errors comparable to PatchTST and operating efficiency surpassing models such as DLinear. TEFN is also highly robust, with small error fluctuations under random hyperparameter selection. Rather than pushing a single aspect to its limit, TEFN balances performance, accuracy, stability, and interpretability.

Isopignistic Canonical Decomposition via Belief Evolution Network

Qianli Zhou, Tianxiang Zhan, Yong Deng

arXiv 2024, preprint

Generalized Uncertainty-Based Evidential Fusion with Hybrid Multi-Head Attention for Weak-Supervised Temporal Action Localization

Yuanpeng He, Lijian Li, Tianxiang Zhan, Wenpin Jiao, Chi-Man Pun

ICASSP 2024 (IEEE International Conference on Acoustics, Speech and Signal Processing), CCF B

Learnable WSN Deployment of Evidential Collaborative Sensing Model

Ruijie Liu, Tianxiang Zhan, Zhen Li, Yong Deng

arXiv 2024, preprint

In wireless sensor networks (WSNs), coverage and deployment are the two most crucial issues when conducting detection tasks. However, the detection information collected from sensors is often not fully utilized or efficiently integrated, so the sensing model and deployment strategy cannot reach the maximum coverage quality, particularly as the number of sensors in a WSN grows. In this article, we aim to achieve optimal coverage quality of WSN deployment. We develop a collaborative sensing model that enhances the detection capability of WSNs by leveraging the collaborative information derived from the combination rule of evidence theory; in this model, the performance evaluation of evidential fusion systems serves as the criterion for sensor selection. A learnable sensor deployment network (LSDNet), considering both sensor contribution and detection capability, is proposed to achieve optimal WSN deployment. Moreover, we investigate an algorithm for finding the minimum number of sensors required for full coverage of a WSN. A series of numerical examples, along with an application to forest area monitoring, demonstrate the effectiveness and robustness of the proposed algorithms.

Residual Feature-Reutilization Inception Network

Yuanpeng He, Wenjie Song, Lijian Li, Tianxiang Zhan, Wenpin Jiao

Pattern Recognition 2024, CAS Journal Ranking Tier 1 (upgraded version), CCF B

Capturing feature information effectively is of great importance in computer vision. With the development of convolutional neural networks, concepts such as residual connections and multiple scales have driven continual performance gains across deep learning vision tasks. This paper proposes a novel residual feature-reutilization inception and a split-residual feature-reutilization inception to improve performance on various vision tasks. The module consists of four parallel branches, each with convolutional kernels of different sizes; the branches are interconnected by hierarchically organized channels, similar to residual connections, facilitating information exchange and rich dimensional variation across levels. This structure captures features of varying granularity and effectively broadens the receptive field of each network layer. Moreover, the split-residual variant can adjust the split ratio of the input information, reducing the number of parameters while preserving model performance. In image classification experiments on popular datasets such as CIFAR-10 (97.94%), CIFAR-100 (85.91%), Tiny ImageNet (70.54%), and ImageNet (80.83%), we obtain state-of-the-art results compared with other modern models of approximately equal size and without additional data.

Random Graph Set and Evidence Pattern Reasoning Model

Tianxiang Zhan, Zhen Li, Yong Deng

arXiv 2024, preprint

Evidence theory is widely used in decision-making and reasoning systems. The Transferable Belief Model (TBM) is a commonly used evidential decision-making model, but it encodes no preferences. To better fit decision-making goals, the Evidence Pattern Reasoning Model (EPRM) is proposed: by defining pattern operators and decision-making operators, task-specific preferences can be set. Random Permutation Set (RPS) extends evidence theory with order information, but it is hard for RPS to characterize complex relationships between samples, such as cyclic or parallel relationships. Random Graph Set (RGS) is therefore proposed to model complex relationships and represent more event types. To illustrate the significance of RGS and EPRM, an aircraft velocity ranking experiment was designed and 10,000 cases were simulated. The EPRM implementation, called Conflict Resolution Decision, improved 18.17% of the cases compared to Mean Velocity Decision, effectively improving the aircraft velocity ranking. EPRM provides a unified solution for evidence-based decision making.

SIMULATED AIRCRAFT TRAJECTORY FOR THEORETICAL VELOCITY RANKING

Tianxiang Zhan

IEEE DataPort 2024, dataset

This dataset is used for ranking the theoretical velocities of aircraft. Four sensors are placed at random in a 1×1 square map, and three aircraft fly over the covered area simultaneously. Each aircraft's velocity is simulated by a random process; the theoretical velocities of the three aircraft are similar, and the actual velocity is perturbed during flight, causing large fluctuations, so the theoretical velocity order of the aircraft entering the map is difficult to distinguish. Each sensor's coverage area is a circle of fixed radius; all four sensors share a common detection interval and detect the position of aircraft within their coverage area with the same accuracy. The target task is to infer the theoretical velocity ranking of the three aircraft from the trajectory data collected by the sensors.
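A rough sketch of the setup described above, with hypothetical values for the coverage radius, speeds, noise level, and time step (the published dataset fixes its own parameters):

```python
import random

def simulate(num_sensors=4, radius=0.3, steps=100, dt=0.01, noise=0.05, seed=0):
    """Illustrative sketch: sensors placed uniformly at random in the unit
    square record a noisy aircraft position whenever the aircraft lies
    inside their circular coverage. All numeric parameters here are
    hypothetical, not the dataset's actual values."""
    rng = random.Random(seed)
    sensors = [(rng.random(), rng.random()) for _ in range(num_sensors)]
    speeds = [1.00, 1.02, 1.04]  # three similar theoretical velocities
    records = {s: [] for s in range(num_sensors)}
    for k, v in enumerate(speeds):
        x, y = 0.0, rng.random()  # fly left to right at a random height
        for t in range(steps):
            x += v * dt  # theoretical motion
            for s, (sx, sy) in enumerate(sensors):
                if (x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2:
                    # perturbed measurement of the true position
                    records[s].append((t, k, x + rng.gauss(0, noise),
                                       y + rng.gauss(0, noise)))
    return sensors, records
```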

2023

Differential Convolutional Fuzzy Time Series Forecasting

Tianxiang Zhan, Yuanpeng He, Zhen Li, Yong Deng

IEEE Transactions on Fuzzy Systems 2023, CAS Journal Ranking Tier 1 (upgraded version), CCF B

Fuzzy time series forecasting (FTSF) is a typical forecasting method with wide application. Traditional FTSF operates as an expert system, which prevents it from recognizing undefined features; this is the main reason for its poor forecasts. To solve the problem, the proposed differential fuzzy convolutional neural network (DFCNN) uses a convolutional neural network to reimplement FTSF with learnable parameters, enabling it to recognize latent information and improve forecasting accuracy. Thanks to the learnability of the neural network, the fuzzy rules established in FTSF can be extended to arbitrary length, beyond what an expert can handle in an expert system. At the same time, FTSF usually performs poorly on nonstationary time series, because their trend invalidates the fuzzy sets that FTSF establishes and causes forecasting to fail. DFCNN applies differencing to weaken nonstationarity, so it can forecast nonstationary series with low error where FTSF cannot. Across extensive experiments, DFCNN shows excellent forecasting performance, ahead of existing FTSF methods and common time series forecasting algorithms. DFCNN thus provides further directions for improving FTSF and holds continued research value.
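The differencing step DFCNN relies on can be illustrated on its own (a minimal sketch; the convolutional and fuzzy components are not shown):

```python
def difference(series):
    """First-order differencing removes a locally linear trend,
    weakening nonstationarity before forecasting."""
    return [b - a for a, b in zip(series, series[1:])]

def undifference(first_value, diffs):
    """Invert differencing: cumulative sum anchored at the first value,
    used to map forecasts on the differenced series back to the original scale."""
    out = [first_value]
    for d in diffs:
        out.append(out[-1] + d)
    return out

trend = [10, 12, 15, 19, 24]   # trending, nonstationary series
d = difference(trend)          # increments are far closer to stationary
assert undifference(trend[0], d) == trend
```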

2021

DVS: Deep Visibility Series and its Application in Construction Cost Index Forecasting

Tianxiang Zhan, Yuanpeng He, Hanwen Li, Fuyuan Xiao

arXiv 2021, preprint

Time series forecasting has been a hot topic in recent years. The Visibility Graph (VG) algorithm has been used for forecasting in previous research, but its accuracy falls short of deep learning methods based on Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM). The visibility graph generated from a time series contains abundant network information, but previous forecasting methods did not use it effectively, resulting in relatively large prediction errors. To improve VG-based forecasting, this article proposes the Deep Visibility Series (DVS) module through a bionic design of VG and an extension of past research. By applying the bionic design of biological vision to VG, DVS obtains superior forecasting accuracy. The DVS method is also applied to construction cost index forecasting, which has practical significance.

Construction Cost Index Forecasting: A Multi-feature Fusion Approach

Tianxiang Zhan, Yuanpeng He, Fuyuan Xiao

arXiv 2021, preprint

The Construction Cost Index (CCI) is an important indicator of the construction industry, and predicting it has practical significance. This paper combines information fusion with machine learning and proposes a multi-feature fusion (MFF) module for time series forecasting. The main contributions of MFF are improved CCI prediction accuracy and a feature fusion framework for time series. Unlike a convolution module, the MFF module extracts specific features. Experiments show that combining the MFF module with a multilayer perceptron yields relatively good predictions; the MFF neural network model has high prediction accuracy and efficiency, and the approach merits continued study.

A fast evidential approach for stock forecasting

Tianxiang Zhan, Fuyuan Xiao

International Journal of Intelligent Systems 2021 CAS Journal Ranking (Upgraded Edition) Tier 2 CCF C

Within the framework of evidence theory, the belief functions of different pieces of information can be combined into a joint belief function to handle uncertain problems, and the Dempster combination rule is the classic method for fusing them. This paper proposes a belief-function-like representation for each time point in a time series: the Dempster combination rule fuses the growth rates at the most recent time points, yielding relatively accurate forecast values. Stock price forecasting is a central concern of economics; stock price data are large in volume, yet accurate forecasts are still required. Classic time series methods such as ARIMA cannot balance forecasting efficiency and forecasting accuracy at the same time. In this paper, the fusion method of evidence theory is applied to stock price prediction: evidence theory handles the uncertainty of the prediction and improves its accuracy, while the fusion method has low time complexity and fast processing speed.
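Dempster's combination rule itself is standard and can be sketched over mass functions keyed by focal elements; this is a generic illustration with made-up masses, not the paper's stock-specific assignments:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalize by 1 - K, where K is the total conflict
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

up, down = frozenset({"up"}), frozenset({"down"})
m1 = {up: 0.6, frozenset({"up", "down"}): 0.4}  # partial ignorance
m2 = {up: 0.7, down: 0.3}
fused = dempster_combine(m1, m2)  # mass on "up" grows after fusion
```

Note how the vacuous mass on {"up", "down"} in m1 is redistributed by the evidence in m2, which is the mechanism the paper exploits when fusing growth-rate evidence across time points.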

Uncertainty Measurement of Basic Probability Assignment Integrity Based on Approximate Entropy in Evidence Theory

Tianxiang Zhan, Yuanpeng He, Hanwen Li, Fuyuan Xiao

arXiv 2021 Preprint

Evidence theory is an extension of probability theory that better handles unknown and imprecise information. Uncertainty measurement plays a vital role in both evidence theory and probability theory. Approximate Entropy (ApEn), proposed by Pincus, describes the irregularity of complex systems: the more irregular a time series, the greater its approximate entropy. The ApEn of a network represents its ability to generate new nodes, i.e., the possibility of undiscovered nodes. By associating network characteristics with the basic probability assignment (BPA), a measure of the uncertainty of a BPA's completeness can be obtained. The main contribution of this paper is to define the integrity of the basic probability assignment and to propose the approximate entropy of the BPA as a measure of the uncertainty of that integrity. The proposed method computes the uncertainty of a BPA in evidence theory from a logical network structure; the resulting uncertainty represents the uncertainty of the BPA's integrity and helps identify the credibility of the BPA.
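Pincus's Approximate Entropy for a plain time series can be sketched as follows; this illustrates ApEn itself, not the paper's network-based extension to BPAs, and the parameter choices are illustrative:

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """ApEn(m, r) of a 1-D series (Pincus, 1991).

    m is the template length, r the similarity tolerance; a regular
    series yields a value near 0, an irregular one a larger value.
    """
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        # Fraction of templates within Chebyshev distance r of each template
        counts = [sum(1 for u in templates
                      if max(abs(a - b) for a, b in zip(t, u)) <= r) / n
                  for t in templates]
        return sum(math.log(c) for c in counts) / n
    return phi(m) - phi(m + 1)

regular = approximate_entropy([1.0, 2.0] * 10, m=2, r=0.5)  # near 0
```

A strictly alternating series is perfectly predictable, so its ApEn is close to zero; the paper transfers this "irregularity" reading from time series to the logical network built from a BPA.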

Quantum soft likelihood function based on ordered weighted average operator

Tianxiang Zhan, Yuanpeng He, Fuyuan Xiao

arXiv 2021 Preprint

Quantum theory is a focus of current research, and likelihood functions are widely used in many fields. Because classic likelihood functions are too strict for extreme data in practical applications, Yager proposed the soft ordered weighted average (OWA) operator. In quantum methods, probability is represented via Euler's formula. How to establish a connection between quantum theory and OWA remains an open question. This article proposes an OWA operator under quantum theory and discusses the relationship between the quantum soft OWA operator and the classical soft OWA operator through examples. Like other quantum models, this research has broad potential applications in quantum information.
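Yager's classical OWA operator, the starting point of this work, is simple to state: a weight vector is applied to the input values after sorting them in descending order. A minimal sketch (classical OWA only, not the paper's quantum extension):

```python
def owa(weights, values):
    """Yager's OWA: weights applied to values sorted in descending order.

    Weights are assumed non-negative and to sum to 1.
    """
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# The same inputs span max, mean, and min depending on the weights:
maximum = owa([1.0, 0.0, 0.0], [2.0, 5.0, 3.0])  # -> 5.0
mean    = owa([1/3, 1/3, 1/3], [2.0, 5.0, 3.0])  # -> 10/3
minimum = owa([0.0, 0.0, 1.0], [2.0, 5.0, 3.0])  # -> 2.0
```

Because the weights attach to rank positions rather than to particular arguments, OWA interpolates between "and-like" (min) and "or-like" (max) aggregation, which is what makes a "soft" likelihood possible.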
