Mirror of https://github.com/Findus23/BachelorsThesis.git (synced 2024-08-27 19:52:12 +02:00)

Commit 9181bad44e (parent 7c06a37f53): split up in multiple tex files
7 changed files with 339 additions and 331 deletions

20_simulations.tex (new file)
@@ -0,0 +1,85 @@


\chapter{Simulations}




\section{Model}




For a realistic model of two gravitationally colliding bodies, the \textit{smoothed particle hydrodynamics} (SPH) code \texttt{miluphCUDA}, as explained in \cite{Schaefer2016}, is used. It is able to simulate brittle failure and the interaction between multiple materials.




In the simulation, two celestial bodies are placed far enough apart that tidal forces can affect the collision. Both objects consist of a core with the physical properties of basalt rock and an outer mantle made of water ice.




To keep the simulation time short and make it possible to run many simulations with varying parameters, 20k SPH particles are used and each simulation is run for 300 timesteps of \SI{144}{\second} each, so that a whole day of the collision is simulated.






\section{Parameters}


\label{sec:parameters}




Six parameters have been identified that have an influence on the result of the simulation:




\subsection{impact velocity}




The collision velocity $v_0$ is defined in units of the mutual escape velocity of the projectile and the target. Simulations have been run from $v_0=1$ to $v_0=5$. As one expects, a higher velocity results in a stronger collision and in more and smaller fragments.




\subsection{impact angle}




The impact angle is defined so that $\alpha=\ang{0}$ corresponds to a head-on collision and higher angles increase the chance of a hit-and-run encounter. The simulations range from $\alpha=\ang{0}$ to $\alpha=\ang{60}$.




\subsection{target and projectile mass}




The masses in this simulation range from about two Ceres masses (\SI{1.88e+21}{\kilogram}) to about two Earth masses (\SI{1.19e+25}{\kilogram}). In addition to the total mass $m$, the mass fraction between projectile and target $\gamma$ is defined. As the whole simulation setup is symmetric between the two bodies, only mass fractions up to and including one have been considered.




\subsection{water fraction of target and projectile}




The last two parameters are the mass fractions of the ice relative to the total mass of each of the bodies. To keep the number of parameter combinations and therefore required simulations low, only \SI{10}{\percent} and \SI{20}{\percent} are simulated in the first simulation set.






\begin{table}


\centering


\begin{tabular}{rSSSSS}


$v_0$ & 1 & 1.5 & 2&3 & 5 \\


$\alpha$ & \ang{0} & \ang{20} & \ang{40} & \ang{60} &\\


$m$ &\SI{e21}{\kilogram} & \SI{e23}{\kilogram} & \SI{e24}{\kilogram} & \SI{e25}{\kilogram} &\\


$\gamma$ & 0.1 & 0.5 & 1 &&\\


water fraction target & \SI{10}{\percent} & \SI{20}{\percent} &&&\\


water fraction projectile & \SI{10}{\percent} & \SI{20}{\percent} &&&\\


\end{tabular}


\caption{parameter set of the first simulation run}


\label{tab:first_simulation_parameters}


\end{table}




\section{Execution}\todo{think of a better title}




In the first simulation run, a separate simulation has been started for every parameter combination from Table \ref{tab:first_simulation_parameters}. First, the parameters and other configuration options are written to a \mbox{\texttt{simulation.input}} text file. Afterwards the relaxation program described in \cite[24\psqq]{Burger2018} generates relaxed initial conditions for all 20k particles and saves their state to \texttt{impact.0000}. Finally, \texttt{miluphcuda} can be executed with the following arguments to simulate, starting from this initial condition, for 300 timesteps, each of which is saved in an \texttt{impact.XXXX} file.




\todo{spacing is ugly}




\begin{lstlisting}[language=bash,flexiblecolumns=false]


miluphcuda -N 20000 -I rk2_adaptive -Q 1e-4 -n 300 -a 0.5 -H -t 144.0 -f impact.0000 -m material.cfg -s -g


\end{lstlisting}




This simulation run was executed on the \texttt{amanki} server using an \texttt{Nvidia GTX 1080}, taking about \SI{30}{\minute} per simulation.






\section{Post-Processing}


\label{sec:postprocessing}




After the simulation, the properties of the SPH particles need to be analyzed. To do this, the \texttt{identify\_fragments} C program by Christoph Burger\todo{better citation} uses a friends-of-friends algorithm to group the final particles into fragments. Afterwards \texttt{calc\_aggregates} calculates the mass of the two largest fragments together with their gravitationally bound fragments and writes its output into a simple text file (\texttt{aggregates.txt}). This way, the mass retention (total mass of the two largest fragments compared to the total mass of projectile and target) and the water retention can be determined for every simulation result.
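The grouping step can be sketched in a few lines of Python; this is only a toy stand-in for the actual \texttt{identify\_fragments} C program, and the linking length is an assumed free parameter:

```python
# Minimal friends-of-friends sketch: particles closer than the linking
# length (directly or via a chain of neighbors) share one fragment.
import numpy as np
from scipy.spatial import cKDTree


def friends_of_friends(positions: np.ndarray, linking_length: float) -> np.ndarray:
    """Return a fragment id for every particle."""
    tree = cKDTree(positions)
    n = len(positions)
    labels = np.full(n, -1, dtype=int)
    current = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        # breadth-first search over all "friends" of this seed particle
        stack = [start]
        labels[start] = current
        while stack:
            p = stack.pop()
            for q in tree.query_ball_point(positions[p], linking_length):
                if labels[q] == -1:
                    labels[q] = current
                    stack.append(q)
        current += 1
    return labels


# two well separated clumps -> two fragments
pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(friends_of_friends(pos, linking_length=0.5))  # [0 0 1 1]
```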




\section{Resimulation}




To increase the amount of available data and especially to reduce the errors caused by the grid-based parameter choices (Table \ref{tab:first_simulation_parameters}), a second simulation run has been started. All source code and initial parameters have been left the same apart from the six main input parameters described above. These are set to random values in the ranges listed in Table \ref{tab:resimulationparameters}, apart from the initial water fractions. As they seem to have little impact on the outcome (see Section \ref{sec:cov}), they are fixed at \SI{15}{\percent} to simplify the parameter space.




\begin{table}


\centering


\begin{tabular}{rSS}


& min&max\\\hline


$v_0$ & 1 & 5 \\


$\alpha$ & \ang{0} & \ang{60} \\


$m$ &\SI{1.88e+21}{\kilogram} & \SI{1.19e+25}{\kilogram}\\


$\gamma$ & 0.1& 1 \\


water fraction target & \SI{15}{\percent} & \SI{15}{\percent} \\


water fraction projectile & \SI{15}{\percent} & \SI{15}{\percent} \\


\end{tabular}


\caption{parameter ranges for the resimulation run}


\label{tab:resimulationparameters}


\end{table}




This way, an additional \num{553} simulations have been calculated on \texttt{Nvidia Tesla P100} graphics cards on Google Cloud (of which 100 simulations are only used for comparison in Section \ref{comparison}).
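The random parameter choice can be sketched as follows; drawing the total mass log-uniformly within its range is an assumption made here for illustration, not necessarily the distribution actually used:

```python
import numpy as np

rng = np.random.default_rng(42)


def random_parameters() -> dict:
    """Draw one random parameter set within the resimulation ranges."""
    return {
        "v0": rng.uniform(1, 5),        # impact velocity [mutual v_esc]
        "alpha": rng.uniform(0, 60),    # impact angle [deg]
        # log-uniform draw of the total mass (an assumption for illustration)
        "m": 10 ** rng.uniform(np.log10(1.88e21), np.log10(1.19e25)),
        "gamma": rng.uniform(0.1, 1),   # projectile-to-target mass fraction
        "water_target": 0.15,           # water fractions fixed at 15 %
        "water_projectile": 0.15,
    }


print(random_parameters())
```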

30_results.tex (new file)
@@ -0,0 +1,15 @@


\chapter{Results}




For the large set of simulations we can now extract the needed values. The output of the relaxation program (\texttt{spheres\_ini\_log}) gives us the precise values for impact angle and velocity and the exact masses of all bodies. As these values slightly differ from the parameters explained in Section \ref{sec:parameters} due to the setup of the simulation, only the precise values are considered in the following steps. From the \texttt{aggregates.txt} file explained in Section \ref{sec:postprocessing}, the final masses and water fractions of the two largest fragments are extracted. From these, the main output considered in this analysis, the water retention of the two fragments, can be calculated.






\section{Correlations}


\label{sec:cov}


One very easy, but sometimes flawed\footnote{\todo[inline]{explain issues with pearson}} way to look at the whole dataset at once is calculating the \textit{Pearson correlation coefficient} between the input parameters and the output water fraction (Figure \ref{fig:cov}). This shows the expected result that a higher collision angle (i.e. a more hit-and-run-like collision) leads to a higher water retention, while a higher collision speed results in far less water left on the two largest remaining fragments. In addition, higher masses seem to result in less water retention. The initial water fractions of the two bodies seem to have very little influence on the result of the simulations.
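As a toy illustration of what the Pearson coefficient captures (with made-up data, not the simulation results): a strongly decreasing relation yields a coefficient near $-1$, while an unrelated quantity yields one near 0.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(1, 5, 500)                         # stand-in for impact velocity
water = 1 - 0.2 * v + rng.normal(0, 0.05, 500)     # hypothetical decreasing relation
unrelated = rng.uniform(0, 1, 500)                 # quantity with no influence

r_v = np.corrcoef(v, water)[0, 1]          # strongly negative
r_u = np.corrcoef(unrelated, water)[0, 1]  # close to zero
print(round(r_v, 2), round(r_u, 2))
```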




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.8\linewidth]{images/cov.pdf}


\caption{TODO}


\label{fig:cov}


\end{figure}

40_interpolations.tex (new file)
@@ -0,0 +1,5 @@


\chapter{Interpolations}




\input{41_griddata.tex}


\input{42_rbf.tex}


\input{43_nn.tex}

41_griddata.tex (new file)
@@ -0,0 +1,51 @@


\section{Multidimensional Linear Interpolation}




\subsection{Theory}




One of the easiest ways to interpolate a new value between two known values is linear interpolation. It takes the closest values and creates a linear function between them.




In one dimension, linear interpolation is pretty trivial. For example, let's assume that we have 20 random points $P$ between 0 and 1 (\textcolor{Red}{\textbullet} and \textcolor{Blue}{\textbullet} in Figure \ref{fig:onediminterpolation}) and a new point $I$ (\textcolor{Green}{\textbullet}) at $0.4$ for which we want to interpolate. Finding the two closest points \textcolor{Red}{\textbullet} above and below is trivial as there is only one dimension to compare. Now, if we have measured a value $f(P)$ for each of these points, a straight line (\textcolor{LightGreen}{\textbf{--}}) between the two closest values can be drawn and an interpolated value for $f(I)$ can be found.
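The one-dimensional case can be reproduced with \texttt{numpy.interp}; the sample points here are made up for illustration:

```python
import numpy as np

x = np.array([0.0, 0.3, 0.5, 1.0])   # known positions P
f = np.array([1.0, 2.0, 4.0, 3.0])   # measured values f(P)

# interpolate at I = 0.4, halfway along the line between (0.3, 2) and (0.5, 4)
print(np.interp(0.4, x, f))          # ~3.0
```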




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.8\linewidth]{images/vis1d.pdf}


\caption{one-dimensional example of linear interpolation}


\label{fig:onediminterpolation}


\end{figure}




In two dimensions, things get more complicated as we now have a set of points with $X$ and $Y$ coordinates (Figure \ref{fig:3dinterpolate1}). One fast way to find the closest points to the point that should be interpolated is Delaunay triangulation. It separates the space between the points into triangles while trying to maximize their smallest angle. Afterwards the closest three points can be found very quickly by checking the nodes of the surrounding triangle (Figure \ref{fig:3dinterpolate2}). If we now again have a function $f(X,Y)$ similar to the one-dimensional example (Figure \ref{fig:3dinterpolate3}), we can create a unique plane through the three points and get the interpolated value for any $X$ and $Y$ on this plane.
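The triangle lookup can be sketched with \texttt{scipy.spatial.Delaunay}; the point set here is made up for illustration:

```python
import numpy as np
from scipy.spatial import Delaunay

# the unit square corners plus one interior point
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.4, 0.6]])
tri = Delaunay(points)

# find_simplex returns the index of the triangle containing a query
# point, or -1 if the point lies outside the convex hull
inside = tri.find_simplex([[0.5, 0.5]])[0]
outside = tri.find_simplex([[2.0, 2.0]])[0]
print(inside >= 0, outside == -1)   # True True
```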






\begin{figure}[h] % also temporary


\centering


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/vis2d1.pdf}


\caption{Lorem ipsum}


\label{fig:3dinterpolate1}


\end{subfigure}%


~


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/vis2d2.pdf}


\caption{Lorem ipsum, lorem ipsum,Lorem ipsum, lorem ipsum,Lorem ipsum}


\label{fig:3dinterpolate2}


\end{subfigure}


\caption{Caption place holder}




\end{figure}




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.8\linewidth]{images/vis2d3.pdf}


\caption{two-dimensional example of linear interpolation}


\label{fig:3dinterpolate3}


\end{figure}




This approach has the advantage that it can be extended to more than two dimensions by replacing the triangle in the Delaunay triangulation with an $n$-simplex in $n$ dimensions. The \texttt{scipy.spatial.Delaunay} Python function allows calculating it quickly thanks to the \texttt{Qhull} library\footnote{\url{http://www.qhull.org/}}. One noticeable limitation of this method is that data can't be extrapolated. Therefore the possible output is limited to the convex hull of the input parameter space (as seen in Figure \ref{fig:3dinterpolate2}).




\subsection{Implementation}


\label{sec:griddataimplementation}


For doing the actual interpolations, the \texttt{scipy.interpolate.griddata} function is used with the \texttt{method="linear"} argument, which itself uses \texttt{scipy.interpolate.LinearNDInterpolator} to do the interpolation as described above. The function is given an $n\times 6$ matrix of the six parameters and an $n$-sized list of the water retention fractions for those $n$ simulations. In addition, \texttt{griddata} supports calculating interpolations not only for one set of parameters, but also for lists of parameters, which allows quickly generating 2d diagrams as seen in \ref{the_chapter_where_I_show_the_v_alpha_plots}.
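A minimal sketch of such a \texttt{griddata} call, using a toy two-parameter dataset instead of the full six simulation parameters:

```python
import numpy as np
from scipy.interpolate import griddata

# known samples: two "parameters" and their value (= x + y at the corners)
params = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
retention = np.array([0.0, 1.0, 1.0, 2.0])

# linear interpolation at a new parameter combination
value = griddata(params, retention, [(0.5, 0.5)], method="linear")[0]
print(value)   # 1.0, the linear function is reproduced exactly
```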




\subsection{Results}

42_rbf.tex (new file)
@@ -0,0 +1,118 @@




\section{RBF interpolation}




\subsection{Theory}




Another approach to interpolating data is using \textit{radial basis functions} (RBF). A very good explanation of how they work is given in \cite{RBF} and is shortly summarized below:




A function $\phi$ for which $\phi(x)=\phi(\left\|x\right\|)$ holds is called \textit{radial}. Now, to be able to interpolate, we need to find an interpolation function $s(x)$ that matches the given values $p_i$ in all points.




\begin{align}


s(x_i)=p_i,\quad i=1,2,\dots,n


\end{align}




The RBF interpolation now consists of a linear combination of $\phi(\left\|x-x_i\right\|)$ for a chosen radial function $\phi$ with $n$ constants $\lambda_i$.




\begin{align}


s(x)&= \sum_{i=1}^{n}\lambda_i\phi(\left\|x-x_i\right\|) \\


%


p_j&=\sum_{i=1}^{n}\lambda_i\phi(\left\|x_j-x_i\right\|),\quad j=1,2,\dots,n


\end{align}




Therefore this can be written as a linear matrix equation:




\begin{align}


\begin{bmatrix}


\phi(\left\|x_1-x_1\right\|) & \phi(\left\|x_2-x_1\right\|) & \dots & \phi(\left\|x_n-x_1\right\|) \\
\phi(\left\|x_1-x_2\right\|) & \phi(\left\|x_2-x_2\right\|) & \dots & \phi(\left\|x_n-x_2\right\|) \\
\vdots & \vdots & \ddots & \vdots \\
\phi(\left\|x_1-x_n\right\|) & \phi(\left\|x_2-x_n\right\|) & \dots & \phi(\left\|x_n-x_n\right\|)


\end{bmatrix}


\begin{bmatrix}


\lambda_1 \\


\lambda_2 \\


\vdots \\


\lambda_n


\end{bmatrix}


=


\begin{bmatrix}


p_1 \\


p_2 \\


\vdots \\


p_n


\end{bmatrix}


\end{align}


or simply


\begin{align}


\Phi\lambda=p


\end{align}




with $\Phi$ being a symmetric $n \times n$ matrix, as $\left\|x_j-x_i\right\|=\left\|x_i-x_j\right\|$. There are many possibilities for the radial basis function $\phi(r)$: it can for example be linear ($r$), gaussian ($e^{-r^2}$) or multiquadric ($\sqrt{\left(\frac{r}{\epsilon}\right)^2 + 1}$), with $\epsilon$ being a constant that defaults to the approximate average distance between nodes.




As an example, consider the three points $x_1=0$, $x_2=3$ and $x_3=5$ with $p(x_1)=0.2$, $p(x_2)=0.8$ and $p(x_3)=0.1$. Choosing the gaussian radial basis function, we get the following:


\begin{align}


\begin{bmatrix}


\phi(0) & \phi(3) & \phi(5) \\


\phi(3) & \phi(0) & \phi(2) \\


\phi(5) & \phi(2) & \phi(0)


\end{bmatrix}


\lambda


=


\begin{bmatrix}


1 & \num{1.23e-4} & \num{1.39e-11} \\
\num{1.23e-4} & 1 & \num{1.83e-2} \\
\num{1.39e-11} & \num{1.83e-2} & 1


\end{bmatrix}


\begin{bmatrix}


\lambda_1 \\


\lambda_2 \\


\lambda_3


\end{bmatrix}


=


\begin{bmatrix}


0.2\\0.8\\0.1


\end{bmatrix}


\end{align}




Solving this linear matrix equation using \texttt{numpy.linalg.solve} gives us the solution for $\lambda$:


\begin{equation}


\lambda=\begin{bmatrix}


0.200 \\0.798 \\0.085


\end{bmatrix}


\end{equation}




Combined we get the following linear combination for the interpolated function $s(x)$:


\begin{equation}


s(x)=0.200\phi(\left\|x\right\|)+
0.798\phi(\left\|x-3\right\|)+
0.085\phi(\left\|x-5\right\|)


\end{equation}
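This worked example can be verified numerically; the following sketch solves the same system with \texttt{numpy.linalg.solve}:

```python
import numpy as np


def phi(r):
    """Gaussian radial basis function."""
    return np.exp(-r ** 2)


x = np.array([0.0, 3.0, 5.0])
p = np.array([0.2, 0.8, 0.1])

# Phi[j, i] = phi(|x_j - x_i|), symmetric as in the equation above
Phi = phi(np.abs(x[:, None] - x[None, :]))
lam = np.linalg.solve(Phi, p)
print(np.round(lam, 3))   # [0.2   0.798 0.085]


def s(t):
    """The interpolant as a linear combination of the basis functions."""
    return sum(l * phi(abs(t - xi)) for l, xi in zip(lam, x))
```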








\begin{figure}[h] % also temporary


\centering


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/rbf1.pdf}


\caption{Lorem ipsum}


\label{fig:rbf1}


\end{subfigure}%


~


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/rbf2.pdf}


\caption{Lorem ipsum, lorem ipsum,Lorem ipsum, lorem ipsum,Lorem ipsum}


\label{fig:rbf2}


\end{subfigure}


\caption{Caption place holder}




\end{figure}




Applying the same method to a list of random points allows interpolating their values at arbitrary other points, like the green point on the sine-like curve in Figure \ref{fig:rbf2}. This can also be trivially extended to $m$ dimensions by replacing $x$ with an $x\in\mathbb{R}^m$ and using a norm in $\mathbb{R}^m$ for $\left\|\cdot\right\|$.




\subsection{Implementation}




The scipy function \texttt{scipy.interpolate.Rbf} allows directly interpolating values, similar to \texttt{griddata} in Section \ref{sec:griddataimplementation}. A difference in usage is that it only allows interpolating a single value at a time, but as it is pretty quick, multiple values can be calculated sequentially.
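A minimal usage sketch with toy one-dimensional data (in the actual setting, the six parameter arrays and the water retention values would be passed instead):

```python
import numpy as np
from scipy.interpolate import Rbf

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)

rbf = Rbf(x, y)            # default basis function: multiquadric
value = float(rbf(1.5))    # interpolate between the sample points
print(value)
```

At the sample points themselves, the interpolant reproduces the given values exactly, since the coefficients come from solving the linear system above.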




\subsection{Results}

43_nn.tex (new file)
@@ -0,0 +1,60 @@




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.4\linewidth]{images/graphviz/general.pdf}


\caption{}


\label{fig:neuralnetworkgeneral}


\end{figure}




\section{Neural Networks}




Another method that is good at taking pairs of input and output values and then predicting output values for arbitrary input sets is using \textit{artificial neural networks} (ANNs).




\subsection{Theory}




The idea behind artificial neural networks is to emulate the functionality of neurons by having nodes that are connected to each other. The weights $w$ of these connections are modified during the training to represent the training data and can then be used to predict new results for input values not seen in the training data.




Every neural network needs an input layer with as many nodes as there are input parameters and an output layer with a node for every output value. In between, there can be multiple hidden layers with an arbitrary number of nodes (Figure \ref{fig:neuralnetworkgeneral}).




If we first consider only a single neuron, then on every iteration it calculates the sum over all input values multiplied by their weights $w$. Afterwards, an activation function $g$ is applied to the sum $z$ to get the prediction $\hat{y}$.




\begin{equation}


z=\sum_{i}w_ix_i \qquad \hat{y}=g(z)


\end{equation}




The nonlinear activation function allows the network to approximate all types of functions instead of being just a linear function itself. Popular activation functions are the sigmoid function $\sigma(x)={\frac {1}{1+e^{-x}}}$ and the ReLU function (\textit{rectified linear unit}, $f(x)=\max(0,x)$).\footcite{NNmath}
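Both activation functions are one-liners:

```python
import math


def sigmoid(x: float) -> float:
    """Logistic sigmoid, maps any real number into (0, 1)."""
    return 1 / (1 + math.exp(-x))


def relu(x: float) -> float:
    """Rectified linear unit: zero for negative inputs, identity otherwise."""
    return max(0.0, x)


print(sigmoid(0.0), relu(-2.0), relu(3.0))   # 0.5 0.0 3.0
```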




After this first step (the \textit{feedforward}) is done, the weights can be modified by comparing the prediction with the real output (the \textit{backpropagation}). The function that describes the error between them is called the loss function; one possible form is the mean squared error function:




\begin{equation}


L(\hat{y},y)=\sum_{i}(\hat{y}_i-y_i)^2


\end{equation}




To update the weights, the derivative of the loss function with respect to the weights is calculated and, scaled by a learning rate, subtracted from the existing weights.\todo{more details?}\footcite{NNpython}
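The feedforward and weight update can be sketched for a single neuron with linear activation $g(z)=z$; learning rate and data are made up for illustration. For the squared error, the gradient with respect to $w_i$ is $2(\hat{y}-y)x_i$, and the weights move against it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w                # targets generated from a known linear rule

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    y_hat = X @ w             # feedforward (linear activation)
    grad = 2 * X.T @ (y_hat - y) / len(X)   # dL/dw, averaged over samples
    w -= lr * grad            # step against the gradient
print(np.round(w, 3))         # converges towards [2, -1]
```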








\subsection{Implementation}




As building a neural network from scratch gets complex very quickly, it is easier to use \texttt{Keras}\footnote{\url{https://keras.io}}, which provides easy-to-use high-level functions over the calculations provided by \texttt{TensorFlow}\footnote{\url{https://www.tensorflow.org/}}. To build our network, we only need to specify the structure of the layers, take our input and let the network train for 200 epochs (iterations of feedforward and backpropagation).\todo{explain more once it is finished}




\begin{lstlisting}[language=Python]


model = Sequential()


model.add(Dense(6, input_dim=6, kernel_initializer='normal', activation='relu'))


model.add(Dense(5, kernel_initializer='normal', activation='relu'))


model.add(Dense(4, kernel_initializer='normal', activation='relu'))


model.add(Dense(1, kernel_initializer='normal'))


model.compile(loss='mean_squared_error', optimizer='adam')




model.fit(X, Y, epochs=200)


\end{lstlisting}\todo{just a placeholder}






\todo{more text on how exactly the data is split up once it is decided}




After the training the resulting model can be saved in a small \texttt{HDF5} file which can be used to evaluate the model very quickly (about \SI{100}{\milli\second} for \num{10000} interpolations).






\subsection{Results}




\subsection{Issues}

main.tex
@@ -35,341 +35,15 @@ To understand how the water transport works exactly one has to find an estimatio


\todo{And here the explanation of the chapters}






\chapter{Simulations}


\include{20_simulations}




\section{Model}


\include{30_results}




For a realistic model of two gravitationally colliding bodies the SPH (\textit{smooth particle hydrodynamics}) code \texttt{miluphCUDA} as explained in \cite{Schaefer2016} is used. It is able to simulate brittle failure and the interaction between multiple materials.




In the simulation two celestial bodies are placed far enough apart so that tidal forces can affect the collision. Both objects consist of a core with the physical properties of basalt rocks and an outer mantle made of water ice.




To keep the simulation time short and make it possible to do many simulations with varying parameters 20k SPH particles are used and each simulation is ran for 300 timesteps of each \SI{144}{\second} so that a whole day of collision is simulated.






\section{Parameters}


\label{sec:parameters}




Six parameters have been identified that have an influence on the result of the simulation:




\subsection{impact velocity}




The collision velocity $v_0$ is defined in units of the mutual escape velocity of the projectile and the target. Simulations have been made from $v_0=1$ to $v_0=5$. As one expects, a higher velocity results in a stronger collision and more and smaller fragments.




\subsection{impact angle}




The impact angle is defined in a way that $\alpha=\ang{0}$ corresponds to a headon collision and higher angles increase the chance of a hitandrun encounter. The simulation ranges from $\alpha=\ang{0}$ to $\alpha=\ang{60}$




\subsection{target and projectile mass}




The masses in this simulation range from about two Ceres masses (\SI{1.88e+21}{\kilogram}) to about two earth masses (\SI{1.19e+25}{\kilogram}). In addition to the total mass $m$, the mass fraction between projectile and target $\gamma$ is defined. As the whole simulation setup is symmetrical between the two bodies only mass fractions below and equal to one have been considered.




\subsection{water fraction of target and projectile}




The last two parameters are the mass fraction of the ice to the total mass of each of the bodies. To keep the numbers of parameter combinations and therefore required simulations low only \SI{10}{\percent} and \SI{20}{\percent} are simulated in the first simulation set.






\begin{table}


\centering


\begin{tabular}{rSSSSS}


$v_0$ & 1 & 1.5 & 2&3 & 5 \\


$\alpha$ & \ang{0} & \ang{20} & \ang{40} & \ang{60} &\\


$m$ &\SI{e21}{\kilogram} & \SI{e23}{\kilogram} & \SI{e24}{\kilogram} & \SI{e25}{\kilogram} &\\


$\gamma$ & 0.1 & 0.5 & 1 &&\\


water fraction target & \SI{10}{\percent} & \SI{20}{\percent} &&&\\


water fraction projectile & \SI{10}{\percent} & \SI{20}{\percent} &&&\\


\end{tabular}


\caption{parameter set of the first simulation run}


\label{tab:first_simulation_parameters}


\end{table}




\section{Execution}\todo{think of a better title}




In the first simulation run for every parameter combination from Table \ref{tab:first_simulation_parameters} a separate simulation has been started. First the parameters and other configuration options are written in a \mbox{\texttt{simulation.input}} text file. Afterwards the relaxation program described in \cite[24\psqq]{Burger2018} generates relaxed initial conditions for all 20k particles and saves their state to \texttt{impact.0000}. Finally \texttt{miluphcuda} can be executed with the following arguments to simulate starting from this initial condition for 300 timesteps which each will be saved in a \texttt{impact.XXXX} file.




\todo{spacing is ugly}




\begin{lstlisting}[language=bash,flexiblecolumns=false]


miluphcuda N 20000 I rk2_adaptive Q 1e4 n 300 a 0.5 H t 144.0 f impact.0000 m material.cfg s g


\end{lstlisting}




This simulation run ran on the \texttt{amanki} server using a \texttt{Nvidia GTX 1080} taking about \SI{30}{\minute} per simulation.






\section{PostProcessing}


\label{sec:postprocessing}




After the simulation the properties of the SPH particles needs to be analyzed. To do this the \texttt{identify\_fragments} C program by Christoph Burger\todo{better citation} uses a friendsoffriends algorithm to group the final particles into fragments. Afterwards \texttt{calc\_aggregates} calculates the mass of the two largest fragments together with their gravitationally bound fragments and its output is written into a simple text file (\texttt{aggregates.txt}). This way, the mass retention (total mass of the two largest fragments compared to total mass of projectile and trget) and the water retention can be determined for every simulation result.




\section{Resimulation}




To increase the amount of available data and especially reduce the errors caused by the gridbased parameter choices (Table \ref{tab:first_simulation_parameters}) a second simulation run has been started has been started. All source code and initial parameters have been left the same apart from the six main input parameters described above. These will be set to a random value in the range listed in Table \ref{tab:resimulationparameters} apart from the the initial water fractions. As they seem to have little impact on the outcome (see Section \ref{sec:cov}) they are set fixed to \SI{15}{\percent} to simplify the parameter space.




\begin{table}


\centering


\begin{tabular}{rSS}


& min&max\\\hline


$v_0$ & 1 & 5 \\


$\alpha$ & \ang{0} & \ang{60} \\


$m$ &\SI{1.88e+21}{\kilogram} & \SI{1.19e+25}{\kilogram}\\


$\gamma$ & 0.1& 1 \\


water fraction target & \SI{15}{\percent} & \SI{15}{\percent} \\


water fraction projectile & \SI{15}{\percent} & \SI{15}{\percent} \\


\end{tabular}


\caption{text}


\label{tab:resimulationparameters}


\end{table}




This way, an addition \num{553} simulations have been calculated on \texttt{Nvidia Tesla P100} graphics cards on Google Cloud. (Of which 100 simulations are only used for comparison in Section \ref{comparison})




\chapter{Results}




For the large set of simulations we can now extract the needed value. The output of the relaxation program (\texttt{spheres\_ini\_log}) gives us the precise values for impact angle and velocity and the exact masses of all bodies. As theses values slightly differ from the parameters explained in Section \ref{sec:parameters} due to the setup of the simulation, in the following steps only the precise values are considered. From the \texttt{aggregates.txt} explained in Section \ref{sec:postprocessing} the final masses and water fractions of the two largest fragments are extracted. From these the main output considered in this analysis, the water retention of the two fragments can be calculated.






\section{Correlations}


\label{sec:cov}


One very easy, but sometimes flawed\footnote{\todo[inline]{explain issues with pearson}} way to look at the whole dataset at once is calculating the \textit{Pearson correlation coefficient} between the input parameters and the output water fraction (Figure \ref{fig:cov}). This shows the expected result that a higher collision angle (so a more hitandrun like collision) has a higher water retention and a higher collision speed results in far less water left on the two largest remaining fragments. In addition higher masses seem to result in less water retention. The initial water fractions of the two bodies does seem to have very little influence on the result of the simulations.




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.8\linewidth]{images/cov.pdf}


\caption{TODO}


\label{fig:cov}


\end{figure}




\chapter{Interpolations}




\section{Multidimensional Linear Interpolation}




\subsection{Theory}




One of the easiest ways to interpolate a new value between two known values is linear interpolation. It takes the closest values and creates a linear function between them.




In one dimension linear interpolation is pretty trivial. For example, let's assume that we have 20 random points $P$ between 0 and 1 (\textcolor{Red}{\textbullet} and \textcolor{Blue}{\textbullet} in Figure \ref{fig:onediminterpolation}) and have a new point $I$ (\textcolor{Green}{\textbullet}) at $0.4$ for which we want to interpolate. Finding the two closest points \textcolor{Red}{\textbullet} above and below is trivial as there is only one dimension to compare. Now, if we have measured a value $f(P)$ for each of these points, a straight line (\textcolor{LightGreen}{\textbf{}}) between the two closest values can be drawn and an interpolated value for $f(I)$ can be found.




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.8\linewidth]{images/vis1d.pdf}


\caption{onedimensional example of linear interpolation}


\label{fig:onediminterpolation}


\end{figure}




In two dimensions things get more complicated as we now have a set of points with $X$ and $Y$ coordinates (Figure \ref{fig:3dinterpolate1}). One fast way to find the closest points to the point that should be interpolated is using Delaunay triangulation. This separates the space between the points into triangles while trying to maximize their smallest angle. Afterwards the closest three points can be found very quickly by checking the nodes of the surrounding triangle (Figure \ref{fig:3dinterpolate2}). If we now again have a function $f(X,Y)$ similar to the onedimensional example (Figure \ref{fig:3dinterpolate3}), we can create a unique plain through the three points and get the interpolated value for any $X$ and $Y$ on this layer.






\begin{figure}[h] % also temporary


\centering


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/vis2d1.pdf}


\caption{Lorem ipsum}


\label{fig:3dinterpolate1}


\end{subfigure}%


~


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/vis2d2.pdf}


\caption{Lorem ipsum, lorem ipsum,Lorem ipsum, lorem ipsum,Lorem ipsum}


\label{fig:3dinterpolate2}


\end{subfigure}


\caption{Caption place holder}




\end{figure}




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.8\linewidth]{images/vis2d3.pdf}


\caption{twodimensional example of linear interpolation}


\label{fig:3dinterpolate3}


\end{figure}




This approach has the advantage that it can be extended in more than two dimensions by replacing the triangle in the Delaunay triangulation with an nsimplex in n dimensions. The \texttt{scipy.spatial.Delaunay} python function allows to quickly calculate it thanks to the \texttt{Qhull} library\footnote{\url{http://www.qhull.org/}}. One noticeable limitation of this method is that data can't be extrapolated. Therefore the possible output is limited to the convex hull of the input parameter space (as seen in Figure \ref{fig:3dinterpolate2}).




\subsection{Implementation}


\label{sec:griddataimplementation}


For doing the actual interpolations, the \texttt{scipy.interpolate.griddata} function is used with the \texttt{method="linear"} argument which itself uses \texttt{scipy.interpolate.LinearNDInterpolator} to do the interpolation as described above. The function is given a $6\times n$ matrix of the six parameters and an $n$sized list of the water retention fraction for those $n$ simulations. In addition, \texttt{griddata} supports not only calculating interpolations for one set of parameters, but also for lists of parameters which allows to quickly generate 2d diagrams as seen in \ref{the_chapter_where_I_show_the_v_alpha_plots}.




\subsection{Results}




\section{RBF interpolation}




\subsection{Theory}




Another approach to interpolate data is using \textit{Radial Basis Functions}. A very good explanation on how they work is given in \cite{RBF} which is shortly summarized below:




A function $\phi$ for which $\phi(x)=\phi(\left\|x\right\|)$ holds is called \textit{radial}. To be able to interpolate, we need to find an interpolation function $s(x)$ that matches the given values $p_i$ at all points:




\begin{align}


s(x_i)=p_i,\quad i=1,2,\dots,n


\end{align}




The RBF interpolation now consists of a linear combination of $\phi(\left\|x-x_i\right\|)$ for a chosen radial function $\phi$ with $n$ constants $\lambda_i$.




\begin{align}


s(x)&= \sum_{i=1}^{n}\lambda_i\phi(\left\|x-x_i\right\|) \\
%
p_j&=\sum_{i=1}^{n}\lambda_i\phi(\left\|x_j-x_i\right\|),\quad j=1,2,\dots,n


\end{align}




Therefore this can be written as a linear matrix equation:




\begin{align}


\begin{bmatrix}


\phi(\left\|x_1-x_1\right\|) & \phi(\left\|x_2-x_1\right\|) & \dots & \phi(\left\|x_n-x_1\right\|) \\
\phi(\left\|x_1-x_2\right\|) & \phi(\left\|x_2-x_2\right\|) & \dots & \phi(\left\|x_n-x_2\right\|) \\
\vdots & \vdots & \ddots & \vdots \\
\phi(\left\|x_1-x_n\right\|) & \phi(\left\|x_2-x_n\right\|) & \dots & \phi(\left\|x_n-x_n\right\|)


\end{bmatrix}


\begin{bmatrix}


\lambda_1 \\


\lambda_2 \\


\vdots \\


\lambda_n


\end{bmatrix}


=


\begin{bmatrix}


p_1 \\


p_2 \\


\vdots \\


p_n


\end{bmatrix}


\end{align}


or simply


\begin{align}


\Phi\lambda=p


\end{align}




with $\Phi$ being a symmetric $n \times n$ matrix as $\left\|x_j-x_i\right\|=\left\|x_i-x_j\right\|$. There are many possibilities for the radial basis function $\phi(r)$. It can be, for example, linear ($r$), Gaussian ($e^{-r^2}$) or multiquadric ($\sqrt{\left(\frac{r}{\epsilon}\right)^2 + 1}$) with $\epsilon$ being a constant that defaults to the approximate average distance between nodes.




As an example, consider the three points $x_1=0$, $x_2=3$ and $x_3=5$ with $p(x_1)=0.2$, $p(x_2)=0.8$ and $p(x_3)=0.1$. Choosing the Gaussian radial basis function, we get the following:


\begin{align}


\begin{bmatrix}


\phi(0) & \phi(3) & \phi(5) \\


\phi(3) & \phi(0) & \phi(2) \\


\phi(5) & \phi(2) & \phi(0)


\end{bmatrix}


\lambda


=


\begin{bmatrix}


1 & \num{1.23e-4} & \num{1.39e-11} \\
\num{1.23e-4} & 1 & \num{1.83e-2} \\
\num{1.39e-11} & \num{1.83e-2} & 1


\end{bmatrix}


\begin{bmatrix}


\lambda_1 \\


\lambda_2 \\


\lambda_3


\end{bmatrix}


=


\begin{bmatrix}


0.2\\0.8\\0.1


\end{bmatrix}


\end{align}




Solving this linear matrix equation using \texttt{numpy.linalg.solve} gives us the solution for $\lambda$:


\begin{equation}


\lambda=\begin{bmatrix}


0.200 \\0.798 \\0.085


\end{bmatrix}


\end{equation}




Combined we get the following linear combination for the interpolated function $s(x)$:


\begin{equation}


s(x)=0.200\phi(\left\|x\right\|)+
0.798\phi(\left\|x-3\right\|)+
0.085\phi(\left\|x-5\right\|)


\end{equation}
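The whole worked example can be reproduced in a few lines of \texttt{numpy}. This is an illustrative sketch, not the thesis code:

\begin{lstlisting}[language=Python]
import numpy as np

def phi(r):
    """Gaussian radial basis function."""
    return np.exp(-r ** 2)

x = np.array([0.0, 3.0, 5.0])
p = np.array([0.2, 0.8, 0.1])

# Phi[i, j] = phi(|x_j - x_i|), a symmetric 3x3 matrix
Phi = phi(np.abs(x[None, :] - x[:, None]))
lam = np.linalg.solve(Phi, p)  # approximately [0.200, 0.798, 0.085]

def s(t):
    """The interpolated function s(t)."""
    return np.sum(lam * phi(np.abs(t - x)))
\end{lstlisting}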








\begin{figure}[h] % also temporary


\centering


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/rbf1.pdf}


\caption{Lorem ipsum}


\label{fig:rbf1}


\end{subfigure}%


~


\begin{subfigure}[t]{0.5\textwidth}


\centering


\includegraphics[width=\linewidth]{images/rbf2.pdf}


\caption{Lorem ipsum, lorem ipsum,Lorem ipsum, lorem ipsum,Lorem ipsum}


\label{fig:rbf2}


\end{subfigure}


\caption{Caption place holder}




\end{figure}




Applying the same method to a list of random points allows interpolating their values at arbitrary other points, like the green point on the sine-like curve in Figure \ref{fig:rbf2}. This can also be trivially extended to $m$ dimensions by replacing $x$ with an $x\in\mathbb{R}^m$ and using a norm in $\mathbb{R}^m$ for $\left\|\cdot\right\|$.




\subsection{Implementation}




The scipy function \texttt{scipy.interpolate.Rbf} allows directly interpolating a value, similar to \texttt{griddata} in Section \ref{sec:griddataimplementation}. A difference in usage is that it only allows interpolating a single value at a time, but as it is quite fast, multiple values can be calculated sequentially.
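For the one-dimensional example from above, the usage looks like this. Note that scipy's Gaussian kernel is $e^{-(r/\epsilon)^2}$ with a default $\epsilon$, so the coefficients differ from the worked example, but the interpolant still reproduces the values at the nodes:

\begin{lstlisting}[language=Python]
import numpy as np
from scipy.interpolate import Rbf

# the three nodes from the worked example above
x = np.array([0.0, 3.0, 5.0])
p = np.array([0.2, 0.8, 0.1])

rbf = Rbf(x, p, function="gaussian")
at_nodes = rbf(x)   # reproduces the given values at the nodes
between = rbf(1.5)  # can be evaluated at any other point
\end{lstlisting}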




\subsection{Results}




\begin{figure}[h] % TODO: h is temporary


\centering


\includegraphics[width=0.4\linewidth]{images/graphviz/general.pdf}


\caption{General structure of a neural network with an input layer, hidden layers and an output layer}


\label{fig:neuralnetworkgeneral}


\end{figure}




\section{Neural Networks}




Another method that can take pairs of input and output values and then predict output values for arbitrary input sets is using \textit{artificial neural networks} (ANNs).




\subsection{Theory}




The idea behind artificial neural networks is to emulate the functionality of neurons by having nodes that are connected to each other. The weights $w$ of these connections are modified during training to represent the training data and can then be used to predict results for input values not seen in the training data.




Every neural network needs an input layer with as many nodes as there are input parameters and an output layer with a node for every output value. In between, there can be multiple hidden layers with an arbitrary number of nodes (Figure \ref{fig:neuralnetworkgeneral}).




If we first consider only a single neuron, then on every iteration it calculates the sum over all input values multiplied by their weights $w$. Afterwards an activation function $g$ is applied to the sum $z$ to get the prediction $\hat{y}$.




\begin{equation}


z=\sum_{i}w_ix_i \qquad \hat{y}=g(z)


\end{equation}




The nonlinear activation function allows the network to approximate all types of functions instead of being just a linear function itself. Popular activation functions are the sigmoid function $\sigma(x)={\frac {1}{1+e^{-x}}}$ and the ReLU function (\textit{rectified linear unit}, $f(x)=\max(0,x)$).\footcite{NNmath}
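A single feedforward step with these two activation functions can be written down directly (the weights and inputs here are arbitrary illustrative values):

\begin{lstlisting}[language=Python]
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

# one neuron with hypothetical weights and inputs
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 0.25])
z = np.dot(w, x)    # weighted sum: 0.5 - 2.0 + 0.5 = -1.0
y_hat = sigmoid(z)  # prediction, about 0.269
\end{lstlisting}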




After this first step (the \textit{feedforward}) is done, the weights can be modified by comparing the prediction with the real output (the \textit{backpropagation}). The function that describes the error between them is called the loss function, and one possible form is the mean squared error function:




\begin{equation}


L(\hat{y},y)=\sum_{i}(\hat{y}_i-y_i)^2


\end{equation}




To update the weights, the derivative of the loss function with respect to the weights is calculated and the weights are shifted a small step in the direction that decreases the loss (gradient descent).\todo{more details?}\footcite{NNpython}
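This update rule can be sketched for a single linear neuron without hidden layers or activation function, trained on toy data (a hypothetical minimal example, not the network used in this thesis):

\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)
# toy training data: the output is a fixed linear combination of two inputs
X = rng.uniform(-1.0, 1.0, size=(100, 2))
true_w = np.array([0.7, -0.3])
y = X @ true_w

w = np.zeros(2)  # initial weights
learning_rate = 0.1
for _ in range(500):
    y_hat = X @ w                            # feedforward
    grad = 2.0 * X.T @ (y_hat - y) / len(X)  # derivative of the mean squared error
    w -= learning_rate * grad                # step against the gradient
# w converges towards true_w
\end{lstlisting}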








\subsection{Implementation}




As building a neural network from scratch gets complex very quickly, it is easier to use \texttt{Keras}\footnote{\url{https://keras.io}}, which provides easy-to-use high-level functions on top of the calculations provided by \texttt{TensorFlow}\footnote{\url{https://www.tensorflow.org/}}. To build our network, we only need to specify the structure of the layers, provide our input and let the network train for 200 epochs (iterations of feedforward and backpropagation).\todo{explain more once it is finished}




\begin{lstlisting}[language=Python]


model = Sequential()


model.add(Dense(6, input_dim=6, kernel_initializer='normal', activation='relu'))


model.add(Dense(5, kernel_initializer='normal', activation='relu'))


model.add(Dense(4, kernel_initializer='normal', activation='relu'))


model.add(Dense(1, kernel_initializer='normal'))


model.compile(loss='mean_squared_error', optimizer='adam')




model.fit(X, Y, epochs=200)


\end{lstlisting}\todo{just a placeholder}






\todo{more text on how exactly the data is split up once it is decided}




After the training, the resulting model can be saved in a small \texttt{HDF5} file which can be used to evaluate the model very quickly (about \SI{100}{\milli\second} for \num{10000} interpolations).






\subsection{Results}




\subsection{Issues}


\include{40_interpolations}




\appendix






\chapter{Placeholder}




\lipsum[1]\footcite{Schaefer2016}\footcite{dvorakMoon}\footcite{MaindlSummary,CollisionParameters}\footcite{Burger2018}\footcite{Dorninger}\footcite{CollisionParameters}\footcite{CollisionTypes}\footcite{RBF}



