SysWatch Prospect

The SysWatch Prospect module focuses on time series forecasts of relevant parameters such as power consumption. In the following sections, FCE explains the Multivariate Forecast Model in all its versatility.



Similarity Sampling

Incremental Sampling

The similarity sampling predictor is a data-driven forecasting algorithm. For a given multivariate stochastic process $X=(\vec{x}_1,\dots,\vec{x}_M)$ and an embedding dimension $L$, the state space is constructed as $x_n=(\vec{x}_n,\vec{x}_{n-1},\dots,\vec{x}_{n-L+1})$. To predict a time step $\Delta t$ ahead starting at time $t$, we choose an $\varepsilon$-neighborhood $U_t^\varepsilon$ of $x_t$, defined by $U_t^\varepsilon=\{x_n : \|x_n-x_t\|_L<\varepsilon\}$, where $\|x_n-x_t\|_L=\sum_{i=0}^{L-1}\|\vec{x}_{n-i}-\vec{x}_{t-i}\|$. The multivariate estimate for the future time $t+\Delta t$ is then

$$\hat{x}_{t+\Delta t} = \frac{1}{|U_t^\varepsilon|} \sum_{x_n \in U_t^\varepsilon} \vec{x}_{n+\Delta t}$$
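The procedure above can be sketched in a few lines of numpy. This is a minimal illustration, not the SysWatch implementation: the function names are invented, and a plain Euclidean distance on the concatenated state vector stands in for the summed per-lag norm.

```python
import numpy as np

def embed(X, L):
    """Delay embedding: state n = (x_n, x_{n-1}, ..., x_{n-L+1}) flattened.

    X has shape (M, d); the result has one row per embeddable time index.
    """
    M = X.shape[0]
    return np.stack([X[n - L + 1:n + 1][::-1].ravel() for n in range(L - 1, M)])

def incremental_sample(X, L, eps, dt=1):
    """One-step similarity-sampling forecast from the last observed state.

    Averages the dt-step successors of all embedded states that fall
    inside the eps-neighborhood of the current state.
    """
    states = embed(X, L)                  # states[k] corresponds to time k + L - 1
    current = states[-1]
    cand = states[:-dt]                   # only states that still have a successor
    dist = np.linalg.norm(cand - current, axis=1)
    idx = np.where(dist < eps)[0]
    if idx.size == 0:                     # empty neighborhood: fall back to nearest
        idx = np.array([dist.argmin()])
    successors = X[idx + L - 1 + dt]      # map embedding index back to raw time
    return successors.mean(axis=0)
```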

Trajectory Sampling

Trajectory Sampling is a modification of the Incremental Sampling described above. Note that $\Delta t$ can be the next time step or a whole time sequence $(1,\dots,H)$, defining a trajectory with horizon $H$.
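A sketch of this trajectory variant, under the same assumptions as above (numpy, Euclidean distance on the concatenated state, invented names): instead of averaging single successors, we average the whole $H$-step trajectory that follows each neighbor.

```python
import numpy as np

def trajectory_sample(X, L, eps, H):
    """Trajectory variant of similarity sampling (illustrative sketch).

    Averages the H-step trajectories following all embedded states in the
    eps-neighborhood of the current state; returns an array of shape (H, d).
    """
    M = X.shape[0]
    # delay embedding: state n = (x_n, x_{n-1}, ..., x_{n-L+1})
    states = np.stack([X[n - L + 1:n + 1][::-1].ravel()
                       for n in range(L - 1, M)])
    current = states[-1]
    cand = states[:-H]                    # need H future values after each neighbor
    dist = np.linalg.norm(cand - current, axis=1)
    idx = np.where(dist < eps)[0]
    if idx.size == 0:
        idx = np.array([dist.argmin()])
    # collect the H-step trajectory following each neighbor and average
    trajs = np.stack([X[k + L:k + L + H] for k in idx])
    return trajs.mean(axis=0)
```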

Response Surface Regression

Kernel Regression

Kernel Regression is a nonlinear, data-driven regression algorithm. In contrast to the similarity sampling predictor, Auto-Associative Kernel Regression (AAKR) is not an average of representative points. Instead, it is a weighted sum over the whole data set, where similar points receive higher weights:

$$\hat{\mu}_{t+\Delta t}=\frac{\sum_n w_n\,\vec{x}_{n+\Delta t}}{\sum_n w_n},\qquad \hat{\sigma}^2_{t+\Delta t}=\frac{\sum_n w_n\left(\vec{x}_{n+\Delta t}-\hat{\mu}_{t+\Delta t}\right)^2}{\sum_n w_n},\qquad w_n=K\!\left(\|x_n-x_t\|_L\right)$$

where $K$ is a kernel function, e.g. a Gaussian, that assigns higher weights to smaller distances.

The trajectory is then built from the distribution function, scaled by the first two moments defined above.
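As a minimal sketch of the kernel-weighted point forecast (Gaussian kernel, bandwidth `h`, and all names are assumptions, not the vendor API):

```python
import numpy as np

def kernel_forecast(X, L, h, dt=1):
    """Kernel-regression forecast: a weighted sum over the whole data set.

    Every embedded state contributes its dt-step successor, weighted by a
    Gaussian kernel of its distance to the current state.
    """
    M = X.shape[0]
    # delay embedding: state n = (x_n, x_{n-1}, ..., x_{n-L+1})
    states = np.stack([X[n - L + 1:n + 1][::-1].ravel()
                       for n in range(L - 1, M)])
    current = states[-1]
    cand = states[:-dt]
    d2 = np.sum((cand - current) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))      # similar points get larger weights
    w /= w.sum()
    successors = X[np.arange(len(cand)) + L - 1 + dt]
    return w @ successors                 # weighted sum, shape (d,)
```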

Other Regressions

It is straightforward to use any other kind of nonlinear regression, for instance:

  • Neural Networks
  • Generalized Linear Models
  • Support Vector Machines
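Any of these regressions can be plugged into the same pipeline: fit a map from the embedded state to the next value. As a hedged illustration of the plug-in point (not the vendor's method), here is the simplest possible choice, an ordinary least-squares linear model; a neural network, GLM, or SVM would replace the `lstsq` fit.

```python
import numpy as np

def linear_forecast(X, L, dt=1):
    """Drop-in regression example: fit x_{n+dt} ~ embedded state by OLS.

    The lstsq call is the interchangeable part; any nonlinear regressor
    with a fit/predict interface could take its place.
    """
    M, d = X.shape
    # delay embedding: state n = (x_n, x_{n-1}, ..., x_{n-L+1})
    states = np.stack([X[n - L + 1:n + 1][::-1].ravel()
                       for n in range(L - 1, M)])
    A = np.hstack([states[:-dt], np.ones((len(states) - dt, 1))])  # intercept
    y = X[np.arange(len(states) - dt) + L - 1 + dt]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    current = np.append(states[-1], 1.0)
    return current @ coef
```

Since a sampled sinusoid satisfies an exact linear recurrence, this linear model already forecasts it almost perfectly; the payoff of the nonlinear regressions listed above comes with genuinely nonlinear dynamics.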


Stochastic Differential Equations

For an observed stochastic process $X=(\vec{x}_1,\dots,\vec{x}_M)$ we assume Itô diffusion behavior

$$dX_t=\mu(t,X_t)\,dt+\sigma(t,X_t)\,dW_t$$

The coefficient functions $\mu$ and $\sigma$ can be expressed and estimated by a wide variety of response surface algorithms.

In the second step, the approach uses time-discretized schemes such as the Euler-Maruyama approximation, defined by

$$X_{t+\Delta t}=X_t+\mu(t,X_t)\,\Delta t+\sigma(t,X_t)\,\sqrt{\Delta t}\,\varepsilon_t$$

where $\varepsilon_t$ is a sequence of independent, standard normally distributed random variables.
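The scheme is easy to state in code. In this sketch, $\mu$ and $\sigma$ are user-supplied callables standing in for the estimated coefficient functions; the Ornstein-Uhlenbeck example and all parameter values are illustrative, not from the source.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, dt, steps, rng):
    """Time-discretized Euler-Maruyama scheme for dX = mu dt + sigma dW.

    mu and sigma are callables (t, x) -> float, playing the role of the
    estimated coefficient functions in the text.
    """
    x = np.empty(steps + 1)
    x[0] = x0
    t = 0.0
    for k in range(steps):
        eps = rng.standard_normal()        # independent N(0, 1) draw
        x[k + 1] = x[k] + mu(t, x[k]) * dt + sigma(t, x[k]) * np.sqrt(dt) * eps
        t += dt
    return x

# Illustrative example: Ornstein-Uhlenbeck process dX = theta*(m - X) dt + s dW
rng = np.random.default_rng(0)
path = euler_maruyama(lambda t, x: 2.0 * (1.0 - x),   # drift pulls toward m = 1
                      lambda t, x: 0.3,               # constant diffusion
                      x0=0.0, dt=0.01, steps=1000, rng=rng)
```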

Sparse Grid Techniques

In this approach, the problem of analyzing a time series is first transformed into a higher-dimensional regression problem based on a delay embedding of the empirical data. Then, a grid-based approach is used to discretize the resulting high-dimensional feature space. In order to cope with the curse of dimensionality, we employ sparse grids in the form of the combination technique. Here, the regression problem is discretized and solved for a sequence of conventional grids with varying mesh widths. The sparse grid solution is then obtained by a linear combination of the solutions on these grids.
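The bookkeeping of the combination technique can be made concrete. The sketch below enumerates the anisotropic full grids and their combination coefficients for the standard formula (levels $l_i \ge 1$, mesh width $2^{-l_i}$ in dimension $i$); solving the regression problem on each grid is left out, and the function name is our own.

```python
from itertools import product
from math import comb

def combination_grids(d, n):
    """Grids and coefficients of the standard combination technique.

    Returns (level_vector, coefficient) pairs: the sparse grid solution is
    the coefficient-weighted sum of the solutions computed on the full
    grids with these level vectors.
    """
    grids = []
    for q in range(d):
        c = (-1) ** q * comb(d - 1, q)     # alternating binomial coefficients
        for l in product(range(1, n + 1), repeat=d):
            if sum(l) == n + d - 1 - q:    # diagonal |l|_1 = n + (d-1) - q
                grids.append((l, c))
    return grids
```

For $d=2$, $n=3$ this yields the grids $(1,3),(2,2),(3,1)$ with coefficient $+1$ and $(1,2),(2,1)$ with coefficient $-1$; the coefficients sum to one, so constants are reproduced exactly.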