Data-Driven Learning for Approximating Dynamical Systems Using Deep Neural Networks


Bibliographic Details
Main Authors: Dernsjö, Axel, Berg Wahlström, Max
Format: Others
Language: English
Published: KTH, Skolan för teknikvetenskap (SCI) 2021
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297685
Description
Summary: In this thesis, a one-step approximation method is used to produce approximations of two dynamical systems: a pendulum and a damped dual-mass-spring system. Using the one-step approximation method proposed in [15], it is first shown that the state variables of a general dynamical system one time-step ahead can be expressed through a quantity called the effective increment. The state of the system one time-step ahead then depends only on the previous state and the effective increment, and the effective increment in turn depends only on the previous state and the governing equation of the dynamical system.

After introducing neural networks and the surrounding concepts, it is shown that a neural network can be trained to approximate this effective increment, removing the need for a known governing equation when determining the system state. The solution to a general dynamical system can then be approximated using only the trained neural network operator, which maps a state variable to the state variable one discrete time-step ahead. To train the network, the analytical solutions of the two dynamical systems are used to generate large amounts of training data. Using the Adam optimizer [8] on this data, the network parameters are adjusted to minimize the difference between the network output and a target value, here the correct state variable one time-step ahead.

The results show that a neural network can indeed be trained to produce approximations of a dynamical system, but obtaining accurate approximations of systems more complex than those considered in this thesis requires greater care in choosing the network parameters and in tuning the hyper-parameters of Adam. The network architecture could also be adjusted by varying the number of hidden layers and the number of nodes in each.
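A brief sketch of the one-step formulation summarized above may help; the notation here is an assumption in the general style of [15], not necessarily that of the thesis. For an autonomous system $\dot{x} = f(x)$ with state $x_n$ at time $t_n$ and time step $\Delta$, the exact solution satisfies

\[
x_{n+1} \;=\; x_n + \int_{t_n}^{t_n + \Delta} f\big(x(s)\big)\,ds \;=\; x_n + \Delta\,\phi_{\Delta}(x_n),
\]

where $\phi_{\Delta}$ is the effective increment, which depends only on the current state and on the governing equation $f$. A neural network $N_{\theta}$ trained so that $x_n + \Delta\,N_{\theta}(x_n) \approx x_{n+1}$ can therefore advance the state one discrete time step at a time without explicit knowledge of $f$.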
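The training procedure described in the abstract can be illustrated with a minimal sketch in PyTorch. Everything below is assumed for illustration: the small-angle pendulum solution used to generate data, the network width and depth, the learning rate, and names such as OneStepNet and make_pairs are not taken from the thesis.

import numpy as np
import torch
import torch.nn as nn

# Assumed analytical solution: small-angle pendulum, theta(t) = theta0 * cos(omega * t),
# so exact (x_n, x_{n+1}) training pairs can be generated cheaply.
g, L = 9.81, 1.0
omega = np.sqrt(g / L)
dt = 0.01  # discrete time step Delta

def make_pairs(n_samples, rng):
    """Sample states (theta, theta_dot) and the exact state one time step later."""
    theta0 = rng.uniform(-0.5, 0.5, n_samples)
    t = rng.uniform(0.0, 10.0, n_samples)
    def state(tt):
        return np.stack([theta0 * np.cos(omega * tt),
                         -theta0 * omega * np.sin(omega * tt)], axis=1)
    return state(t), state(t + dt)

class OneStepNet(nn.Module):
    """x_{n+1} ~ x_n + dt * N(x_n): the network models the effective increment."""
    def __init__(self, dim=2, width=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, dim),
        )
    def forward(self, x):
        return x + dt * self.net(x)  # residual one-step form

rng = np.random.default_rng(0)
x_now, x_next = map(lambda a: torch.tensor(a, dtype=torch.float32),
                    make_pairs(20000, rng))

model = OneStepNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam optimizer [8]
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    # Minimize the difference between the network output and the target,
    # i.e. the correct state variable one time-step ahead.
    loss = loss_fn(model(x_now), x_next)
    loss.backward()
    opt.step()

# After training, the network operator alone advances the state step by step:
x = torch.tensor([[0.3, 0.0]])
with torch.no_grad():
    for _ in range(1000):  # roll out 10 seconds of trajectory
        x = model(x)

The residual form x + dt * net(x) mirrors the effective-increment formulation: the network only has to learn the increment, not the full next state, which is what lets the trained operator replace the governing equation during rollout.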