mirror of
https://github.com/Findus23/BachelorsThesis.git
synced 20240827 19:52:12 +02:00
comparison

parent 42fd8ec57c
commit b487c8f96b
3 changed files with 25 additions and 1 deletion

@@ -67,6 +67,7 @@ After the simulation the properties of the SPH particles need to be analyzed. T


% This way, the mass retention (total mass of the two largest fragments compared to the total mass of projectile and target) and the water retention can be determined for every simulation result.




\section{Resimulation}


\label{sec:resimulation}




To increase the amount of available data, and especially to reduce the errors caused by the grid-based parameter choices (Table \ref{tab:first_simulation_parameters}), a second simulation run was started. All source code and initial parameters were left unchanged apart from the six main input parameters described above. These are set to random values in the ranges listed in Table \ref{tab:resimulationparameters}, with the exception of the initial water fractions. As these seem to have little impact on the outcome (see Section \ref{sec:cov}), they are fixed at \SI{15}{\percent} to simplify the parameter space.
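The sampling described above can be sketched as follows. This is a minimal illustration, not the thesis code: the parameter names and ranges are made-up placeholders standing in for the actual entries of Table \ref{tab:resimulationparameters}.

```python
import numpy as np

rng = np.random.default_rng()

# Placeholder names and bounds; the real six parameters and their
# ranges are those listed in Table tab:resimulationparameters.
param_ranges = {
    "impact_velocity": (1.0, 5.0),
    "impact_angle": (0.0, 60.0),
    "total_mass": (1e21, 1e24),
    "mass_fraction": (0.1, 1.0),
}

# Draw each varied parameter uniformly from its range.
params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}

# The two initial water fractions are not sampled but fixed at 15%.
params["water_fraction_projectile"] = 0.15
params["water_fraction_target"] = 0.15
```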







@@ -2,4 +2,6 @@




\input{41_griddata.tex}


\input{42_rbf.tex}


\input{43_nn.tex}






\input{44_comparison.tex}

44_comparison.tex (new file, 21 lines)

@@ -0,0 +1,21 @@


% !TeX spellcheck = en_US


\section{Comparison}




To compare the three methods explained above and measure their accuracy, an additional set of 100 simulations was run (with the same properties as the ones listed in Section \ref{sec:resimulation}). These results are neither used to train nor to select the neural network, nor are they part of the dataset for griddata and RBF interpolation. Therefore, we can use them to generate predictions for their parameters and compare those with the real fraction of water that remained in the simulations. By taking the mean absolute difference or the mean squared error between the predictions and the real results, the accuracy of the different methods can be estimated (Table \ref{tab:comparison}). As one of these parameter sets lies outside the convex hull of the training data and griddata cannot extrapolate, this simulation is skipped and only the remaining 99 simulations are considered for the griddata accuracy calculation.
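The evaluation procedure can be sketched with scipy's \texttt{griddata}, which returns NaN for query points outside the convex hull of the input, so those test cases can be masked out before computing the error metrics. The data here is a made-up stand-in (2-D parameters, a known smooth target function), not the actual simulation results.

```python
import numpy as np
from scipy.interpolate import griddata

# Stand-in data: 2-D parameter sets and a smooth "water fraction".
rng = np.random.default_rng(42)
train_points = rng.random((500, 2))
train_values = train_points.sum(axis=1) / 2
test_points = rng.random((100, 2))
test_values = test_points.sum(axis=1) / 2   # the "real" results

pred = griddata(train_points, train_values, test_points, method="linear")

# griddata yields NaN outside the convex hull of the training points,
# so such test cases are skipped for the accuracy calculation.
inside = ~np.isnan(pred)
mse = np.mean((pred[inside] - test_values[inside]) ** 2)
mae = np.mean(np.abs(pred[inside] - test_values[inside]))
```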




Of the three methods, the trained neural network has the highest mean squared error. This seems to be\todo{more interpretations}




Another important aspect to compare is the interpolation speed. The neural network returns all 100 results in about \SI{4}{\milli\second} (after loading the trained model). RBF interpolation is still reasonably fast, taking about \SI{8.5}{\second} (\SI{85}{\milli\second} per interpolation). But as \texttt{griddata} expects a grid-based parameter space, it becomes very slow once the resimulation data with random parameters is added. A single interpolation then takes about \SI{35}{\second}, totaling around an hour for all 99 test cases. Using only the original dataset brings the runtime down to around \SI{10}{\second}, but causes the results to be less accurate than all other methods.
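One reason for this gap is that scipy's \texttt{griddata} rebuilds its triangulation on every call, whereas an RBF interpolant is fitted once and then evaluated cheaply. A small timing sketch with synthetic stand-in data (not the thesis datasets) illustrates the pattern:

```python
import time
import numpy as np
from scipy.interpolate import RBFInterpolator, griddata

# Synthetic scattered 3-D data standing in for the simulation parameters.
rng = np.random.default_rng(1)
train_x = rng.random((2000, 3))
train_y = train_x.sum(axis=1)
test_x = rng.random((100, 3))

t0 = time.perf_counter()
rbf = RBFInterpolator(train_x, train_y)  # fitted once, reused for all queries
rbf_pred = rbf(test_x)
t_rbf = time.perf_counter() - t0

t0 = time.perf_counter()
# griddata triangulates the scattered points on each call.
grid_pred = griddata(train_x, train_y, test_x, method="linear")
t_grid = time.perf_counter() - t0
```

The absolute timings depend on the machine and dataset size; the point is the structural difference between a reusable fitted model and a per-call triangulation.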




\begin{table}


\centering


\begin{tabular}{rcc}


& {mean squared error} & {mean absolute error} \\


griddata (only original data) & 0.014 & 0.070 \\


neural network & 0.010 & 0.069 \\


RBF & 0.008 & 0.057 \\


griddata & 0.005 & 0.046


\end{tabular}


\caption{Prediction accuracy for the different interpolation methods}


\label{tab:comparison}


\end{table}
