VULGARIZED NEIGHBOURING NETWORK OF MULTIVARIATE AUTOREGRESSIVE PROCESSES WITH GAUSSIAN AND STUDENT-T DISTRIBUTED RANDOM NOISES

Bibliographic Details
Main Authors: Rasaki Olawale Olanrewaju, Ravi Prakash Ranjan, Queensley C. Chukwudum, Sodiq Adejare Olanrewaju
Format: Article
Language: English
Published: UiTM Press 2023-10-01
Series: Malaysian Journal of Computing
Online Access:https://mjoc.uitm.edu.my/main/images/journal/vol8-2-2023/9_VULGARIZED_NEIGHBOURING_NETWORK_OF_MULTIVARIATE_AUTOREGRESSIVE_PROCESSES_OF_WITH_STUDENT-T_DISTRI.pdf
Description
Summary: This paper introduces the vulgarized network autoregressive process with Gaussian and Student-t random noises. The processes relate the time-varying series of a given variable to the immediate past of the same phenomenon, with the inclusion of its neighboring variables and networking structure. The generalized network autoregressive process is fully specified to contain the aforementioned random noises with their embedded parameters (the autoregressive coefficients, networking nodes, and neighboring nodes) and is applied to the monthly prices of ten (10) edible cereals. The global-α Generalized Network Autoregressive model of autoregressive order two, with neighborhood order two at the first time lag and zero at the second, that is GNAR(2, [2,0]), was the ideal generalization for both the Gaussian and Student-t random noises for the prices of cereals: a model with two autoregressive parameters and network regression parameters on the first two neighbor sets at time lag one. The GNAR model with Student-t random noise produced the smallest BIC of -39.2298, compared with a BIC of -18.1683 for the Gaussian GNAR. The residual error via the Gaussian was 0.9900, compared with 0.9000 via the Student-t. Additionally, the GNAR forecasting-error MSE via the Student-t was 15.105% less than that of the Gaussian. Similarly, the Student-t GNAR MSE for VAR was 1.59% less than the Gaussian-GNAR MSE for VAR. Comparing the fitted histogram plots of the Student-t and Gaussian processes, both histograms produced symmetric residual estimates for the fitted GNAR model, but the residuals via the Student-t were more evenly symmetric than those of the Gaussian. As a contribution to the network autoregressive process, the GNAR process with the Student-t random-noise generalization should always be favored over Gaussian random noise because of its ability to absorb contamination and spread, and its ability to contain time-varying network measurements.
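The network autoregressive idea described in the abstract, where each node's value depends on its own past and on its neighbors' past, can be sketched with a minimal first-order simulation. This is an illustrative toy, not the paper's GNAR(2, [2,0]) fit: the 5-node ring network, the coefficients alpha and beta, and the 4 degrees of freedom for the Student-t noise are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 5-node ring network: node i's neighbours are i-1 and i+1.
n_nodes, T = 5, 500
A = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    A[i, (i - 1) % n_nodes] = 1
    A[i, (i + 1) % n_nodes] = 1
W = A / A.sum(axis=1, keepdims=True)  # row-normalised neighbour weights

alpha, beta = 0.4, 0.3  # own-lag and neighbour-lag coefficients (assumed)

def simulate(noise):
    """Simulate X_t = alpha * X_{t-1} + beta * W @ X_{t-1} + eps_t."""
    X = np.zeros((T, n_nodes))
    for t in range(1, T):
        X[t] = alpha * X[t - 1] + beta * (W @ X[t - 1]) + noise()
    return X

# Gaussian innovations versus heavier-tailed Student-t innovations
# (df=4 is an illustrative choice; heavier tails absorb contamination).
gauss = simulate(lambda: rng.normal(0.0, 1.0, n_nodes))
student = simulate(lambda: rng.standard_t(4, n_nodes))

print(gauss.shape, student.shape)  # (500, 5) (500, 5)
```

Comparing the two simulated panels (for example, their extreme residuals or fitted-model information criteria) mirrors the kind of Gaussian-versus-Student-t comparison the abstract reports.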
ISSN:2600-8238