-
Analyzing the Impact of COVID-19 on Global Trends and Predicting Future Cases
Gyeongseung Han,
Jeewon Han,
Seungmin Han,
Hyunkyung Jeong
Issue:
Volume 10, Issue 1, January 2021
Pages:
1-8
Received:
21 November 2020
Accepted:
22 December 2020
Published:
12 January 2021
Abstract: COVID-19 is a highly contagious, potentially lethal respiratory disease caused by a strain of coronavirus. Utilizing data sets collected by Johns Hopkins University, this research paper analyzes the global trends of COVID-19 and the pandemic’s effect on the global economy. The aim of this paper is to provide accurate information about COVID-19 by comparing the state of the world before and after the pandemic. All visual representations have been created using Python, a programming language, and each figure is accompanied by a thorough breakdown of its cumulative data. The analysis shows the impact of COVID-19 globally and regionally. Machine learning, specifically polynomial regression, is utilized to predict future trends in the number of cases; the forecast indicates that the world will see a continuous increase in COVID-19 cases, with the exception of a few countries where cases have been declining consistently. The figures used in this paper, such as card usage, job postings, and confirmed cases, provide evidence that COVID-19 has negatively impacted the global economy. Statistics can show the relationship between COVID-19 and the global economy, thereby providing evidence for why certain events are occurring during this pandemic.
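A minimal sketch of the polynomial-regression forecasting the abstract describes, with synthetic cumulative case counts standing in for the Johns Hopkins data (the degree and numbers are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical cumulative case counts for 30 days (synthetic stand-ins for
# the Johns Hopkins University data used in the paper).
days = np.arange(30)
cases = 1000 + 50 * days + 2.0 * days**2  # quadratic-looking growth

# Polynomial regression: fit a degree-2 polynomial to the observed counts.
coeffs = np.polyfit(days, cases, deg=2)
model = np.poly1d(coeffs)

# Forecast the next 7 days by extrapolating the fitted polynomial.
future = np.arange(30, 37)
forecast = model(future)
print(np.round(forecast[:3]))
```

Extrapolating a low-degree polynomial is only reliable over short horizons, which is consistent with the paper forecasting near-term trends rather than long-run case counts.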
-
Theoretical Properties of New Error Innovation Distribution on GARCH Model
Olayemi Michael Sunday,
Olubiyi Adenike Oluwafunmilola
Issue:
Volume 10, Issue 1, January 2021
Pages:
9-13
Received:
21 December 2020
Accepted:
31 December 2020
Published:
12 January 2021
Abstract: In recent decades, many error innovations have been introduced based on different modification techniques. The error innovation distribution is one of the vital elements in estimating the true parameters of any volatility model, since volatility is affected by reactions from the stock market caused by political recession, insecurity, constant power failure, war, political disorder, and other economic crises. In modelling the volatility of a financial investment, the error innovation distribution has been found advantageous. In this paper, the researchers provide a new error innovation distribution that serves as a competitor to other existing error innovations. The theoretical properties of the standardized exponentiated Gumbel error innovation distribution are provided, and estimation of its parameters by maximum likelihood is proposed. The exponentiated Gumbel distribution was standardized and then converted to the new error innovation through the method of transformation. The newly established error innovation was applied to the Generalized Autoregressive Conditional Heteroskedasticity (GARCH(1,1)) model. The partial derivatives with respect to the shape and volatility parameters do not yield a closed-form solution; therefore, the BFGS numerical method was applied to obtain the estimated values of the parameters.
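A sketch of GARCH(1,1) maximum likelihood estimation with a quasi-Newton optimizer, as the abstract describes. Gaussian innovations are used here purely as a stand-in (the paper's contribution is the standardized exponentiated Gumbel innovation, which would replace the Gaussian density below), and L-BFGS-B is the bounded variant of BFGS:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a GARCH(1,1) series.  Gaussian innovations stand in for the
# paper's standardized exponentiated Gumbel innovation.
omega_t, alpha_t, beta_t = 0.1, 0.1, 0.8
n = 2000
eps = np.empty(n)
s2 = omega_t / (1 - alpha_t - beta_t)          # unconditional variance
for t in range(n):
    eps[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega_t + alpha_t * eps[t] ** 2 + beta_t * s2

def neg_loglik(params):
    """Gaussian (quasi-)likelihood of GARCH(1,1); penalizes parameter
    values outside the stationarity region alpha + beta < 1."""
    omega, alpha, beta = params
    if alpha + beta >= 1.0:
        return 1e10
    s2 = np.var(eps)                           # starting conditional variance
    nll = 0.0
    for e in eps:
        nll += 0.5 * (np.log(2 * np.pi * s2) + e ** 2 / s2)
        s2 = omega + alpha * e ** 2 + beta * s2
    return nll

res = minimize(neg_loglik, x0=[0.05, 0.05, 0.8], method="L-BFGS-B",
               bounds=[(1e-6, 1.0), (0.0, 1.0), (0.0, 0.999)])
print(res.x)  # estimates of (omega, alpha, beta)
```

The score equations have no closed form in the parameters, which is exactly why a numerical method such as BFGS is needed here, as the abstract states.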
-
Maximum Likelihood Estimation for a Progressively Type II Censored Generalized Inverted Exponential Distribution via EM Algorithm
Karuoya Grace Njeri,
Edward Gachangi Njenga
Issue:
Volume 10, Issue 1, January 2021
Pages:
14-21
Received:
20 December 2020
Accepted:
11 January 2021
Published:
22 January 2021
Abstract: The Generalized Inverted Exponential (GIE) distribution is a mixed lifetime model used in a number of fields, such as queuing theory, the testing of products or components, and the modelling of wind speeds. The study focuses on the determination of maximum likelihood estimates of the GIE distribution when the test units are progressively (type II) censored. This scheme permits the withdrawal of units from the life test in stages during the failure process, which may be necessary due to cost and time constraints. Both the Expectation-Maximization (EM) and Newton-Raphson (NR) methods have been used to obtain the maximum likelihood estimates of the GIE parameters, and the variance-covariance matrix of the obtained estimators has been derived. The performance of the MLEs obtained via the EM method is compared with those obtained using the NR method in terms of bias, root mean squared error, and confidence interval width for different progressive type II censoring schemes at fixed parameter values of λ and θ. Simulation results reveal that estimates obtained via the EM approach are more robust than those obtained via the NR algorithm. It is also noted that the bias, root mean squared errors, and confidence interval widths decrease as the sample size increases for a fixed number of failures; a similar trend is observed as the number of failures increases for a fixed sample size. The obtained estimators are finally illustrated on two real data sets.
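A minimal complete-sample sketch of GIE maximum likelihood estimation, assuming the usual parameterization F(x) = 1 − (1 − exp(−λ/x))^θ. The paper's progressive type II censoring and the EM/NR iterations are omitted; a general-purpose numerical maximizer stands in for them, and the true parameter values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
lam_t, theta_t = 2.0, 1.5      # illustrative "true" values of λ and θ
n = 1000

# Inverse-CDF sampling from the GIE distribution,
# F(x) = 1 - (1 - exp(-λ/x))**θ, x > 0.
u = rng.uniform(size=n)
x = -lam_t / np.log1p(-(1.0 - u) ** (1.0 / theta_t))

def neg_loglik(p):
    """Negative log-likelihood of the complete (uncensored) GIE sample."""
    lam, theta = p
    if lam <= 0 or theta <= 0:
        return 1e10
    z = np.exp(-lam / x)
    return -np.sum(np.log(theta * lam) - 2 * np.log(x) - lam / x
                   + (theta - 1) * np.log1p(-z))

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)  # estimates of (λ, θ)
```

Under progressive type II censoring, the likelihood gains a survival-function term for each withdrawn unit, which is where the EM algorithm's missing-data formulation becomes useful.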
-
Estimating Population Total Using Machine Learning Logistic Regression: COVID-19 Pandemic Challenges Perspective
Issue:
Volume 10, Issue 1, January 2021
Pages:
22-31
Received:
8 January 2021
Accepted:
15 January 2021
Published:
22 January 2021
Abstract: The estimation of the population total in undeveloped and developing countries has attracted considerable research interest in the recent past, for the purpose of planning resource allocation, personnel training, and infrastructure in the social, health, transport, communication, and education sectors. Comprehensive census surveys in many countries are conducted every ten years, yet government administrations change every four to five years due to constitutional term limits, and the two cycles therefore do not coincide. Further, the emerging COVID-19 pandemic challenges, which require Ministry of Health social-distancing protocols, mean that census surveys relying on questionnaires and personal interviews need to be avoided; there is therefore a need to search for better and more reliable models for estimating the population total, which is the main focus of this study. Existing and newly developed exponential and logistic population-total estimation models have been considered and compared. The main problem with logistic models for estimating the population total is estimating the highest possible population that can be attained in each administrative unit. In this study, machine learning logistic regression has been proposed and incorporated to estimate this constant using a supervised learning process. The performance of the methods has been compared using the Root Mean Square Error (RMSE), whose values were recorded as 1.062, 1.524, 0.477, 0.819, and 0.286 for the exponential, logistic I, logistic II, logistic III, and machine learning logistic (logistic IV) models respectively; the proposed model performed best, with the least error value of 0.286. The proposed model was then used to project the population total for all regions as 51.00, 55.02, 62.50, 69.10, 74.65, and 79.14 million in the years 2024, 2029, 2039, 2049, 2059, and 2069 respectively.
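A sketch of the core comparison the abstract describes: fitting logistic and exponential growth models to census totals and comparing them by RMSE. The data points and starting values below are synthetic stand-ins, not the study's figures, and the carrying-capacity parameter K plays the role of the "highest possible population" the study estimates via supervised learning:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical census totals (millions) at 10-year intervals -- synthetic
# stand-ins for the study's data.
t = np.array([0, 10, 20, 30, 40, 50], dtype=float)
pop = np.array([10.9, 17.0, 25.6, 37.1, 51.0, 65.9])

def logistic(t, K, a, r):
    # K is the carrying capacity: the highest attainable population.
    return K / (1 + a * np.exp(-r * t))

def exponential(t, P0, r):
    return P0 * np.exp(r * t)

(K, a, r), _ = curve_fit(logistic, t, pop, p0=[100, 10, 0.05], maxfev=10000)
(P0, r_e), _ = curve_fit(exponential, t, pop, p0=[10, 0.03], maxfev=10000)

rmse_log = np.sqrt(np.mean((logistic(t, K, a, r) - pop) ** 2))
rmse_exp = np.sqrt(np.mean((exponential(t, P0, r_e) - pop) ** 2))
print(rmse_log, rmse_exp)  # logistic fits the decelerating series better
```

On decelerating growth of this kind the logistic family outperforms the exponential, mirroring the RMSE ranking reported in the abstract.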
-
I-Optimal Axial Designs for Four Ingredient Concrete Experiment
Njoroge Elizabeth Wambui,
Koske Joseph,
Mutiso John
Issue:
Volume 10, Issue 1, January 2021
Pages:
32-37
Received:
13 November 2020
Accepted:
1 December 2020
Published:
28 January 2021
Abstract: Stakeholders in the construction industry work towards obtaining optimal concrete mixes with the aim of producing structures with the best compressive strength. Kenya has in many instances witnessed the collapse of buildings, leading to deaths and huge financial losses, which has been attributed largely to poor concrete mixes. This paper evaluates the I-optimal designs of a concrete mixture experiment for both the Equally Weighted Simplex Centroid Axial Design and the Unequally Weighted Simplex Centroid Axial Design, based on the second-degree Kronecker model. Optimality tests are performed to locate the optimum values of a design; in various studies, I-optimality has been shown to be among the best criteria for obtaining optimal outcomes. In this study, Response Surface Methodology is applied in evaluating I-optimal designs, which are known to minimize the average, or integrated, prediction variance over the experimental region. The I-optimality equivalence conditions for the inscribed tetrahedral design and for the concrete experiment model are identical at the boundary points, mid-face points, and the centroid, denoted by η2, η3 and η4 respectively. The Equally Weighted Simplex Centroid Axial Design proved to be a more I-efficient design than the Unequally Weighted Simplex Centroid Axial Design for both the tetrahedral design and the concrete model, with efficiencies of 87.85% and 79.54% respectively. The optimal response surface occurred in the region of the I-optimal designs. The Kronecker model derived from the concrete mixture experiment proved effective and efficient in describing the observed results.
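A sketch of what the I-criterion measures: the prediction variance of a mixture design averaged over the simplex. The design points below (vertices, edge midpoints, centroid for four ingredients) and the Scheffé second-degree model are illustrative simplifications, not the paper's Kronecker-model designs:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
q = 4  # four concrete ingredients

def f(x):
    # Scheffé second-degree mixture model: terms x_i and x_i * x_j.
    cross = [x[i] * x[j] for i, j in combinations(range(q), 2)]
    return np.array(list(x) + cross)

# A simple equally weighted design: vertices, edge midpoints, centroid.
pts = [np.eye(q)[i] for i in range(q)]
pts += [(np.eye(q)[i] + np.eye(q)[j]) / 2 for i, j in combinations(range(q), 2)]
pts += [np.full(q, 0.25)]
X = np.array([f(p) for p in pts])

M = X.T @ X / len(pts)  # moment matrix of the equally weighted design

# I-criterion: average prediction variance f(x)' M^{-1} f(x) over the
# simplex, estimated by Monte Carlo with uniform (Dirichlet) samples.
samples = rng.dirichlet(np.ones(q), size=20000)
F = np.array([f(s) for s in samples])
Minv = np.linalg.inv(M)
i_value = np.mean(np.einsum("ij,jk,ik->i", F, Minv, F))
print(round(i_value, 2))
```

Comparing this average prediction variance across candidate designs (equally vs. unequally weighted) is what the I-efficiency percentages in the abstract summarize.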
-
The Two-Dimensional Infinite Heisenberg Classical Square Lattice: Zero-Field Partition Function and Correlation Length
Issue:
Volume 10, Issue 1, January 2021
Pages:
38-62
Received:
13 January 2021
Accepted:
6 February 2021
Published:
10 February 2021
Abstract: The nonlinear σ-model has attracted renewed interest because it allows one to describe the properties of two-dimensional quantum antiferromagnets which, when properly doped, become superconductors up to a critical temperature notably high compared to other types of superconducting materials. This model has been conjectured to be equivalent at low temperatures to the two-dimensional Heisenberg model. In this article we rigorously examine 2d square lattices composed of classical spins isotropically coupled between first-nearest neighbors (i.e., showing Heisenberg couplings). A general expression for the characteristic polynomial associated with the zero-field partition function is established for any lattice size. In the infinite-lattice limit a numerical study allows us to select the dominant term: it is written as an l-series of eigenvalues, each one characterized by a unique index l whose origin is explained. Surprisingly, the zero-field partition function has a very simple exact closed-form expression valid at any temperature. The thermal study of the basic l-term allows us to point out crossovers between the l- and (l+1)-terms. Coming from high temperatures, where the l=0 term is dominant, and going towards zero Kelvin, l-eigenvalues of increasing l are successively selected. At absolute zero l becomes infinite and all the successive dominant l-eigenvalues become equivalent. As the z-spin correlation is null at positive temperatures but equal to unity (in absolute value) at absolute zero, the critical temperature is absolute zero. Using an analytical method similar to the one employed for the zero-field partition function, we also give exact expressions, valid at any temperature, for the spin-spin correlations as well as for the correlation length. In the zero-temperature limit we obtain a diagram of magnetic phases similar to the one derived through a renormalization approach. By taking the low-temperature limit of the correlation length we obtain, for each zone of the magnetic phase diagram, the same expressions as the corresponding ones derived through a renormalization process, thus bringing for the first time a strong validation of the full exact solution of the model, valid at any temperature.
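For orientation, the single-bond building block of the classical Heisenberg model has well-known closed forms: for two unit spins coupled by −J S1·S2 and K = J/kT, the bond partition function is z(K) = sinh(K)/K and the correlation is ⟨S1·S2⟩ = coth(K) − 1/K (the Langevin function). This is standard background, not the paper's full-lattice derivation; the sketch below just verifies both closed forms numerically:

```python
import numpy as np

# Single classical Heisenberg bond: integrate over the relative angle theta
# between two unit spins with Boltzmann weight exp(K * cos(theta)).
K = 1.5
theta = np.linspace(0.0, np.pi, 200001)
w = np.exp(K * np.cos(theta)) * np.sin(theta) / 2.0  # normalized measure

def trapezoid(y, x):
    # Composite trapezoidal rule.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

z_num = trapezoid(w, theta)
corr_num = trapezoid(np.cos(theta) * w, theta) / z_num

z_exact = np.sinh(K) / K                 # closed-form bond partition function
corr_exact = 1.0 / np.tanh(K) - 1.0 / K  # Langevin function L(K)
print(z_num, corr_num)
```

The paper's achievement is extending this kind of exact closed form from a single bond to the full infinite square lattice, at any temperature.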
-
Principal Component Analysis of Birth Weight of Child, Maternal Pregnancy Weight and Maternal Pregnancy Body Mass Index: A Multivariate Analysis
Issue:
Volume 10, Issue 1, January 2021
Pages:
63-71
Received:
19 October 2020
Accepted:
28 October 2020
Published:
23 February 2021
Abstract: Background: Birth weight, maternal body mass index, and maternal weight are perhaps the most important and reliable indicators of neonatal and infant survival, as well as of physical growth and mental development. The main objective of this study was to identify the determinants of birth weight, maternal body mass index, and maternal weight simultaneously, based on the Ethiopia Demographic and Health Survey 2016, with the analysis implemented in the statistical package R. Methods: A cross-sectional study design was used with data from the Ethiopia Demographic and Health Survey 2016. The principal component model showed that the first two components accounted for 97% of the total population variance, so these two components can replace the original three response variables (birth weight, maternal body mass index, and maternal weight) without much loss of information. Therefore, a bivariate linear regression model was used to identify the factors that affect the first two principal components simultaneously. Results: This study shows that family size, region, frequency of reading newspapers, frequency of watching television, and preferred waiting time for birth were statistically significant at the 5% level of significance for the first principal component. In addition, size of child, region, and maternal age group were statistically significant for the second principal component of birth weight of child, maternal pregnancy weight, and maternal pregnancy body mass index in Ethiopia. Conclusion: Family size, region, frequency of reading newspapers, frequency of watching television, size of child, maternal age group, and preferred waiting time were significant predictors of the first two principal components simultaneously. Hence, intervention should be given to pregnant women during antenatal care to minimize risk.
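A sketch of the dimension-reduction step the abstract relies on: PCA of three correlated responses via the eigendecomposition of their correlation matrix. The data below are synthetic stand-ins for the EDHS 2016 variables, and the actual 97% figure is the paper's, not this simulation's:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Synthetic stand-ins for the three correlated responses: maternal weight,
# maternal BMI, and child birth weight (not the EDHS 2016 data).
weight = rng.normal(60, 8, n)
bmi = weight / 2.6 + rng.normal(0, 0.8, n)
birth_wt = 0.02 * weight + rng.normal(2.0, 0.3, n)
X = np.column_stack([weight, bmi, birth_wt])

# PCA via the eigendecomposition of the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = Z.T @ Z / n
eigvals, eigvecs = np.linalg.eigh(corr)
eigvals = eigvals[::-1]                 # descending order
explained = eigvals / eigvals.sum()

# Share of variance captured by the first two components; when this is
# high, two components can replace the three responses, as in the paper.
print(round(explained[:2].sum(), 3))
```

The two leading component scores can then serve as the bivariate response in a regression on the candidate determinants, which is the paper's second step.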
-
Studying Changes on Stock Market Transactions Using Different Techniques for Multivariate Time Series
Ahmed Mohamed Mohamed Elsayed
Issue:
Volume 10, Issue 1, January 2021
Pages:
72-88
Received:
2 February 2021
Accepted:
14 February 2021
Published:
26 February 2021
Abstract: Many studies have dealt with univariate time series data, but the analysis of multivariate time series is rarely discussed. This article discusses the theoretical and numerical aspects of different techniques for analyzing multivariate time series data: ANN, ARIMA, GLM, and VARS models. All techniques are applied to data obtained from the Egypt Stock Exchange. The R program is used with several packages: "neuralnet", "nnet", "forecast", "MTS" and "vars". Forecast accuracy is investigated using the measures ME, ACF, MAE, MPE, RMSE, MASE, and MAPE, for both seasonal and non-seasonal time series data. The best ARIMA model, with minimum error, is constructed and tested, and the lag orders of the model are identified. The Granger causality test indicated that the exchange rate is useful for forecasting the other time series, and the instantaneous-causality test indicated that there is instantaneous causality between the exchange rate and the other time series. For non-seasonal data, the NNAR() model is equivalent to the ARIMA() model; for seasonal data, the NNAR(p,P,0)[m] model is equivalent to an ARIMA(p,0,0)(P,0,0)[m] model. For these data, we conclude that ANN and GLM fits of multivariate seasonal time series are better than those of multivariate non-seasonal time series. The transactions of the Finance, Household, and Chemicals sectors are significant for the exchange rate in the non-seasonal case. Forecasts based on stationary time series data are smoother and more accurate. The VARS model is more accurate than the VAR model for ARIMA(0,0,1). VAR forecasts are made over a short horizon, because prediction over a long horizon becomes unreliable or uniform.
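A minimal sketch of the multivariate modelling idea behind VAR: each series is regressed on the lagged values of all series. This numpy-only VAR(1) fit (with a zero-mean series, so the intercept is omitted) uses simulated data as a stand-in for the paper's Egypt Stock Exchange series, rather than the R "vars"/"MTS" packages the paper uses:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 300, 2

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t  (the two series stand
# in for, e.g., the exchange rate and one sector's transactions).
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
y = np.zeros((n, k))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.normal(0, 0.1, k)

# Least-squares VAR(1) fit: regress y_t on y_{t-1}.
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# One-step-ahead forecast from the last observation.
forecast = A_hat @ y[-1]
print(np.round(A_hat, 2))
```

Iterating `A_hat @ y` produces multi-step forecasts, but forecast uncertainty compounds at each step, which is why the abstract restricts VAR forecasts to a short horizon.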