Lukas Winkler 2019-08-29 10:56:46 +02:00
parent 74137645cf
commit e17c4be3ad
Signed by: lukas
GPG key ID: 54DE4D798D244853
10 changed files with 28 additions and 17 deletions


@@ -3,7 +3,7 @@
\label{chapter:simulations}
\section{Model}
For a realistic model of two gravitationally colliding bodies the smooth particle hydrodynamics (\texttt{SPH}) code \texttt{miluphCUDA} as explained in \cite{Schaefer2016} is used. It is able to simulate brittle failure and the interaction between multiple materials.
For a realistic model of two gravitationally colliding bodies the smooth particle hydrodynamics (\texttt{SPH}) code \texttt{miluphCUDA} as explained in \cite{Schaefer2016} and \cite{miluphaCode} is used. It is able to simulate brittle failure and the interaction between multiple materials.
In the simulation, two celestial bodies are placed far enough apart (five times the sum of their radii) that tidal forces can affect the collision. Both objects consist of a core with the physical properties of basalt rock and an outer mantle made of water ice. These two-body collisions are similar to those that occur between protoplanets or to the collision that created the Earth's Moon.\footcite{dvorakMoon}
@@ -12,24 +12,24 @@ To keep the simulation time short and make it possible to do many simulations wi
\section{Parameters}
\label{sec:parameters}
Six parameters have been identified that have a major influence on the outcome of a two-body collision. All selected parameter ranges are inspired by the properties of collisions that occurred in a simulation of an early solar system.\footcite{CollisionParameters}
Six parameters have been identified that have a major influence on the outcome of a two-body collision. All selected parameter ranges are inspired by the properties of collisions that occurred in a simulation of an early solar system.\footcite{CollisionParameters} (Table \ref{tab:first_simulation_parameters})
\subsection{impact velocity}
\subsection{Impact velocity}
The collision velocity $v_0$ is defined in units of the mutual escape velocity $v_{esc}$ of the projectile and the target.\footcite{MaindlSummary} Simulations have been run from $v_0=1$ to $v_0=5$. As one would expect, a higher velocity results in a stronger collision and in more and smaller fragments.
\begin{equation}
v_{esc}=\sqrt{\frac{2G(M_p+M_t)}{r_p+r_t}}
\end{equation}
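Purely as an illustration of this formula, the mutual escape velocity can be evaluated directly from the masses and radii; the following Python sketch uses placeholder values for two roughly Ceres-mass bodies, not parameters taken from the actual simulation set.
\begin{verbatim}
import math

G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2

def mutual_escape_velocity(m_p, m_t, r_p, r_t):
    # masses in kg, radii in m, result in m/s
    return math.sqrt(2 * G * (m_p + m_t) / (r_p + r_t))

# placeholder values: two roughly Ceres-sized bodies
print(mutual_escape_velocity(9.4e20, 9.4e20, 4.7e5, 4.7e5))
\end{verbatim}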
\subsection{impact angle}
\subsection{Impact angle}
The impact angle is defined such that $\alpha=\ang{0}$ corresponds to a head-on collision and higher angles increase the chance of a hit-and-run encounter. The simulated values range from $\alpha=\ang{0}$ to $\alpha=\ang{60}$.
\subsection{target and projectile mass}
\subsection{Target and projectile mass}
The total masses in these simulations range from about two Ceres masses (\SI{1.88e+21}{\kilogram}) to about two Earth masses (\SI{1.19e+25}{\kilogram}). In addition to the total mass $m$, the projectile-to-target mass ratio $\gamma$ is defined. As the whole setup is symmetric between the two bodies, only mass ratios less than or equal to one have been considered.
\subsection{water fraction of target and projectile}
\subsection{Water fraction of target and projectile}
The last two parameters are the mass fractions of the water ice relative to the total mass of each of the two bodies. To keep the number of parameter combinations, and therefore the number of required simulations, low, only \SI{10}{\percent} and \SI{20}{\percent} are simulated in the first simulation set.
@@ -59,7 +59,7 @@ miluphcuda -N 20000 -I rk2_adaptive -Q 1e-4 -n 300 -a 0.5 -H -t 144.0 -f impact
These simulations ran on the \texttt{amanki} server using an \texttt{Nvidia GTX 1080}, taking about \SI{30}{\minute} per simulation, as this GPU was the fastest consumer GPU for this simulation set in a comparison of 13 tested GPUs.\footcite{Dorninger} Of these 960 simulations, 822 succeeded and were used in the analysis.
\section{Post-Processing}
\section{Post-processing}
\label{sec:postprocessing}
After the simulation, the properties of the SPH particles need to be analyzed. To do this, the \texttt{identify\_fragments} C program by Christoph Burger (part of the post-processing tools of \texttt{miluphCUDA}) uses a friends-of-friends algorithm to group the final particles into fragments. Afterwards, \texttt{calc\_aggregates} calculates the mass of the two largest fragments together with their gravitationally bound fragments and writes its output into a simple text file (\texttt{aggregates.txt}).
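The actual grouping is done by the compiled \texttt{identify\_fragments} tool; purely to illustrate the idea of a friends-of-friends search, a minimal Python sketch could look like the following. The linking length and the brute-force $O(N^2)$ neighbour search are simplifications for illustration and not how the real tool works.
\begin{verbatim}
import numpy as np

def friends_of_friends(positions, linking_length):
    # positions: (N, 3) array of particle coordinates
    # two particles are "friends" if they are closer than the
    # linking length; fragments are the connected components
    n = len(positions)
    labels = np.full(n, -1)  # fragment id per particle, -1 = unvisited
    fragment = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = fragment
        while stack:  # flood fill over all transitive friends
            j = stack.pop()
            dist = np.linalg.norm(positions - positions[j], axis=1)
            new = np.where((dist < linking_length) & (labels == -1))[0]
            labels[new] = fragment
            stack.extend(new.tolist())
        fragment += 1
    return labels
\end{verbatim}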


@@ -1,5 +1,5 @@
% !TeX spellcheck = en_US
\chapter{Results}
\chapter{Simulation outcome}
\label{chapter:results}
For the large set of simulations, we can now extract the values needed. The output of the relaxation program (\texttt{spheres\_ini\_log}) gives us the precise values for impact angle and velocity and the exact masses of all bodies. As these values differ slightly from the parameters explained in Section \ref{sec:parameters} due to the setup of the simulation, only the precise values from \texttt{spheres\_ini\_log} are considered in the following steps. From the \texttt{aggregates.txt} file explained in Section \ref{sec:postprocessing}, the final masses and water fractions of the two largest fragments are extracted. From these, the main output considered in this analysis, the water retention of the two fragments, can be calculated.
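The water retention itself is then a simple ratio; the following sketch assumes that the masses and water fractions of the two largest aggregates have already been read from \texttt{aggregates.txt} (the variable names and numbers are only placeholders).
\begin{verbatim}
def water_retention(aggregates, initial_water_mass):
    # aggregates: list of (mass, water_fraction) tuples for the two
    # largest gravitationally bound aggregates
    # initial_water_mass: total water mass of projectile and target
    retained = sum(mass * water_fraction
                   for mass, water_fraction in aggregates)
    return retained / initial_water_mass

# example with placeholder numbers (kg)
print(water_retention([(5.2e21, 0.11), (1.3e21, 0.09)], 7.0e20))
\end{verbatim}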


@@ -1,5 +1,5 @@
% !TeX spellcheck = en_US
\section{Multidimensional Linear Interpolation}
\section{Multidimensional linear interpolation}
\subsection{Theory}
@@ -50,7 +50,7 @@ In two dimensions things get more complicated as we now have a set of points wit
This approach has the advantage that it can be extended to more than two dimensions by replacing the triangle in the Delaunay triangulation with an n-simplex in n dimensions. The \texttt{scipy.spatial.Delaunay} Python function allows this triangulation to be calculated quickly thanks to the \texttt{Qhull} library\footnote{\url{http://www.qhull.org/}}. One noticeable limitation of this method is that data can't be extrapolated. Therefore, the possible output is limited to the convex hull of the input parameter space (as seen in Figure \ref{fig:3dinterpolate-2}).
This approach has the advantage that it can be extended to more than two dimensions by replacing the triangle in the Delaunay triangulation\todo{subordinate clause with citation} with an n-simplex in n dimensions. The \texttt{scipy.spatial.Delaunay} Python function allows this triangulation to be calculated quickly thanks to the \texttt{Qhull} library\footnote{\url{http://www.qhull.org/}}. One noticeable limitation of this method is that data can't be extrapolated. Therefore, the possible output is limited to the convex hull of the input parameter space (as seen in Figure \ref{fig:3dinterpolate-2}).
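A minimal two-dimensional sketch of this behaviour, using made-up sample points: \texttt{scipy.interpolate.griddata} with \texttt{method='linear'} performs exactly this simplex-based interpolation and returns \texttt{NaN} for query points outside the convex hull instead of extrapolating.
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)
points = rng.random((50, 2))              # 50 random points in [0, 1]^2
values = np.sin(points[:, 0]) * points[:, 1]

inside = griddata(points, values, [(0.5, 0.5)], method="linear")
outside = griddata(points, values, [(2.0, 2.0)], method="linear")
print(inside)   # interpolated value
print(outside)  # [nan]: outside the convex hull, no extrapolation
\end{verbatim}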
\subsection{Implementation}
\label{sec:griddata-implementation}


@@ -3,7 +3,7 @@
\subsection{Theory}
Another approach to interpolating data is to use \textit{Radial Basis Functions}. A very good explanation of how they work is given in \cite{RBF}, which is briefly summarized below:
Another approach to interpolating data is to use \textit{Radial Basis Functions} (\texttt{RBF}). A very good explanation of how they work is given in \cite{RBF}, which is briefly summarized below:
A function $\phi$ for which $\phi(x)=\phi(\left\|x\right\|)$ holds is called \textit{radial}. To be able to interpolate, we need to find an interpolation function $s(x)$ that matches the given values $p_i$ at all given points.
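As a small illustration with made-up one-dimensional data, \texttt{scipy.interpolate.Rbf} constructs such an interpolation function $s(x)$ from the given points and, unlike the griddata approach above, also returns values outside the range of the inputs:
\begin{verbatim}
import numpy as np
from scipy.interpolate import Rbf

x = np.linspace(0, 10, 9)   # sample positions
y = np.sin(x)               # values p_i at those positions

rbf = Rbf(x, y)             # default basis function: multiquadric
print(rbf(x))               # reproduces the given values (interpolation)
print(rbf(12.0))            # also returns a value outside [0, 10]
\end{verbatim}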


@@ -26,7 +26,7 @@ Another method that is good at taking pairs of input and output values and then
The idea behind artificial neural networks is to emulate the functionality of neurons by having nodes that are connected to each other. The weights $w$ of these connections are modified during the training to represent the training data and can then be used to predict new results for input values not seen in the training data.
Every neural network needs an input layer with as many nodes as input parameters and an output layer with a node for every output value. In between, there can be multiple hidden layers with an arbitrary amount of nodes. (Figure \ref{fig:neuralnetwork-general})
Every neural network needs an input layer with as many nodes as input parameters and an output layer with a node for every output value. In between, there can be multiple hidden layers with an arbitrary number of nodes. (Figure \ref{fig:neuralnetwork-general})
If we first consider only a single neuron, then on every iteration it calculates the sum over all input values multiplied by their weights $w$. Afterwards, an activation function $g$ is applied to the sum $z$ to get the prediction $\hat{y}$.
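For a single neuron this feedforward step takes only a few lines; the following sketch uses a sigmoid as the activation function $g$ and arbitrary example values for the inputs and weights.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.2, 0.5, 0.1])   # input values
w = np.array([0.4, -0.6, 0.9])  # weights of the connections

z = np.dot(w, x)                # weighted sum of the inputs
y_hat = sigmoid(z)              # prediction of the single neuron
print(y_hat)
\end{verbatim}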
@@ -42,7 +42,7 @@ After this first step (the \textit{feedforward}) is done, the weights can be mod
L(\hat{y},y)=\sum_{i}(\hat{y}_i-y_i)^2
\end{equation}
To update the weights, the derivative of the loss function with respect to the weights is calculated and added to the existing weights.\todo{more details?}\footcite{NN-python}
To update the weights, the derivative of the loss function with respect to the weights is calculated and added to the existing weights.\footcite{NN-python}
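Building on the single-neuron sketch above, one such weight update for the squared-error loss could look like the following. The learning rate is an additional ingredient not mentioned in the text, and whether the gradient term is added or subtracted depends on the sign convention used for the derivative.
\begin{verbatim}
# continues the single-neuron sketch: x, w and sigmoid() as above
y = 0.7                         # known target value (placeholder)
learning_rate = 0.1

z = np.dot(w, x)
y_hat = sigmoid(z)

# chain rule for L = (y_hat - y)^2 with g = sigmoid:
# dL/dw = 2 (y_hat - y) * g'(z) * x, where g'(z) = y_hat (1 - y_hat)
grad = 2 * (y_hat - y) * y_hat * (1 - y_hat) * x

w = w - learning_rate * grad    # gradient descent step on the weights
\end{verbatim}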
\subsection{Implementation}


@@ -1,7 +1,7 @@
% !TeX spellcheck = en_US
\section{Comparison}
\label{sec:comparison}
\todo{To comparison}
To compare the three methods explained above and measure their accuracy, an additional set of 100 simulations (with the same properties as the ones listed in Section \ref{sec:resimulation}) was created. These results are neither used to train or select the neural network nor included in the dataset for the griddata and RBF interpolations. Therefore, we can use them to generate predictions for their parameters and compare those with the real fraction of water that remained in those simulations. By taking the mean absolute difference and the mean squared error between the predictions and the real results, the accuracy of the different methods can be estimated (Table \ref{tab:comparison}). As one of these parameter sets is outside the convex hull of the training data and griddata can't extrapolate, this simulation is skipped and only the remaining 99 simulations are considered for the griddata accuracy calculation.
Of the three methods, the trained neural network has the highest mean squared error. This seems to be at least partly caused by the fact that the neural network generalizes the data during training, causing the final network to output the \enquote{smoothest} interpolations. While this causes higher errors, it is possible that the fine-structured details in the simulation output are just an artifact of the simulation setup and don't represent real-world collisions.
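Both error measures are plain averages over the prediction errors; given the predicted and the simulated water retention values as arrays, a sketch (with placeholder numbers) looks like this:
\begin{verbatim}
import numpy as np

def mean_absolute_error(predictions, truth):
    return np.mean(np.abs(predictions - truth))

def mean_squared_error(predictions, truth):
    return np.mean((predictions - truth) ** 2)

predictions = np.array([0.82, 0.75, 0.91])  # placeholder values
truth = np.array([0.80, 0.70, 0.95])
print(mean_absolute_error(predictions, truth))
print(mean_squared_error(predictions, truth))
\end{verbatim}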


@@ -6,7 +6,7 @@ While this thesis focuses on the water retention after the collisions, the same
When applying the same comparison as described in Section \ref{sec:comparison}, the interpolations seem to have a lower accuracy, but RBF interpolation still gives the best results when the slow speed of griddata is taken into account.
\begin{table}
\begin{table}[h]
\centering
\begin{tabular}{rcc}
& {mean squared error} & {mean absolute error} \\


@@ -177,4 +177,15 @@
owner = {lukas},
}
@Article{miluphaCode,
author = {Schäfer, C. M. and Wandel, O. and Burger, C. and Maindl, T. I. and Malamud, U. and Buruchenko, S. K. and Sfair, R. and Audiffren, H. and Winter, P. M.},
title = {{A versatile smooth particle hydrodynamics code for graphics cards}},
journaltitle = {Computers and Mathematics with Applications},
date = {2019-08},
pages = {submitted},
eprint = {0705.2672},
eprinttype = {arXiv},
keywords = {smooth particle hydrodynamics, GPU-computing, hydrodynamics, continuum mechanics},
}
@Comment{jabref-meta: databaseType:biblatex;}


@@ -15,7 +15,7 @@
\begin{titlepage}
\vspace{4cm}
\vspace*{4cm}
\centering
{\huge\sffamily\bfseries Interpolated water retention after two-body collisions using Neural Networks and linear interpolation methods\par}


@@ -89,7 +89,7 @@ american, % language of the document
backend=biber,
style=authoryear-comp, % choose a style from https://de.overleaf.com/learn/latex/Biblatex_citation_styles
%sortlocale=de_AT,
sortlocale=en_GB,
sortlocale=en_US,
backref=true % use if you like it -- puts a link to the page where it is cited into the bibliography
]{biblatex}