mirror of
https://github.com/Findus23/BachelorsThesis.git
synced 20240728 19:42:36 +02:00
move comparison to conclusion
This commit is contained in:
parent
30176331b6
commit
07e9742ce5
3 changed files with 22 additions and 25 deletions

@ 4,4 +4,3 @@


\input{42_rbf.tex}


\input{43_nn.tex}




\input{44_comparison.tex}


@ 1,23 +0,0 @@


% !TeX spellcheck = en_US


\section{Comparison}


\label{sec:comparison}


\todo{To comparison}


To compare the three methods explained above and to measure their accuracy, an additional set of 100 simulations (with the same properties as those listed in Section \ref{sec:resimulation}) was created. These results are neither used to train or select the neural network nor included in the dataset for griddata and RBF interpolation. Therefore, they can be used to generate predictions for their parameters and to compare these predictions with the real fraction of water that remained in the simulations. By computing the mean absolute error and the mean squared error between the predictions and the real results, the accuracy of the different methods can be estimated (Table \ref{tab:comparison}). As one of these parameter sets lies outside the convex hull of the training data and griddata cannot extrapolate, this simulation is skipped and only the remaining 99 simulations are considered for the griddata accuracy calculation.




Of the three methods, the trained neural network has the highest mean squared error. This seems to be at least partly caused by the fact that the neural network generalizes the data during training, so the final network outputs the \enquote{smoothest} interpolations. While this increases the errors, it is possible that the finely structured details in the simulation output are merely an artifact of the simulation setup and do not represent real-world collisions.




Another important aspect to compare is the interpolation speed. The neural network is able to give all 100 results in about \SI{4}{\milli\second} (after loading the trained model). RBF interpolation is still reasonably fast, taking about \SI{8.5}{\second} (\SI{85}{\milli\second} per interpolation). But as \texttt{griddata} expects a grid-based parameter space, it becomes very slow when the resimulation data with random parameters is added: a single interpolation takes about \SI{35}{\second}, totaling around an hour for all 99 test cases. Using only the original dataset brings the runtime down to around \SI{10}{\second}, but makes the results less accurate than those of all other methods (first row in Table \ref{tab:comparison}).




\begin{table}[h]


\centering


\begin{tabular}{rcc}


& {mean squared error} & {mean absolute error} \\


griddata (only original data) & 0.014 & 0.070 \\


neural network & 0.010 & 0.069 \\


RBF & 0.008 & 0.057 \\


griddata & 0.005 & 0.046


\end{tabular}


\caption{Prediction accuracy for the different interpolation methods}


\label{tab:comparison}


\end{table}


@ 1,2 +1,23 @@


\chapter{Conclusion}


% !TeX spellcheck = en_US


\chapter{Comparison and Conclusion}


\label{sec:comparison}




To compare the three methods explained above and to measure their accuracy, an additional set of 100 simulations (with the same properties as those listed in Section \ref{sec:resimulation}) was created. These results are neither used to train or select the neural network nor included in the dataset for griddata and RBF interpolation. Therefore, they can be used to generate predictions for their parameters and to compare these predictions with the real fraction of water that remained in the simulations. By computing the mean absolute error and the mean squared error between the predictions and the real results, the accuracy of the different methods can be estimated (Table \ref{tab:comparison}). As one of these parameter sets lies outside the convex hull of the training data and griddata cannot extrapolate, this simulation is skipped and only the remaining 99 simulations are considered for the griddata accuracy calculation.
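The two accuracy metrics can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the thesis's actual evaluation script; the array names and the error magnitude are assumptions. It also shows how the simulation outside the convex hull would be skipped, since \texttt{scipy.interpolate.griddata} returns NaN for such points.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-ins for the 100 test simulations: true water fractions and
# hypothetical interpolated predictions with a small random error.
true_values = rng.uniform(0, 1, 100)
predictions = true_values + rng.normal(0, 0.07, 100)

# griddata returns NaN for query points outside the convex hull of
# the training data, so those simulations are excluded before averaging.
valid = ~np.isnan(predictions)
errors = predictions[valid] - true_values[valid]

mean_absolute_error = np.mean(np.abs(errors))
mean_squared_error = np.mean(errors ** 2)
```

With errors well below 1, the squared error is smaller than the absolute error, matching the relative magnitudes in Table \ref{tab:comparison}.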




Of the three methods, the trained neural network has the highest mean squared error. This seems to be at least partly caused by the fact that the neural network generalizes the data during training, so the final network outputs the \enquote{smoothest} interpolations. While this increases the errors, it is possible that the finely structured details in the simulation output are merely an artifact of the simulation setup and do not represent real-world collisions.




Another important aspect to compare is the interpolation speed. The neural network is able to give all 100 results in about \SI{4}{\milli\second} (after loading the trained model). RBF interpolation is still reasonably fast, taking about \SI{8.5}{\second} (\SI{85}{\milli\second} per interpolation). But as \texttt{griddata} expects a grid-based parameter space, it becomes very slow when the resimulation data with random parameters is added: a single interpolation takes about \SI{35}{\second}, totaling around an hour for all 99 test cases. Using only the original dataset brings the runtime down to around \SI{10}{\second}, but makes the results less accurate than those of all other methods (first row in Table \ref{tab:comparison}).
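The speed difference follows from how the two interpolators are used. This sketch (two parameters and synthetic values for brevity; the thesis's parameter space has more dimensions) contrasts \texttt{griddata}, which processes the scattered points on every call, with \texttt{Rbf}, which builds its model once and is then cheap to evaluate repeatedly:

```python
import numpy as np
from scipy.interpolate import Rbf, griddata

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, (200, 2))          # stand-in parameter sets
values = np.sin(points[:, 0]) * points[:, 1]  # stand-in water fractions

query = np.array([[0.5, 0.5]])

# griddata triangulates the scattered points for each call, which is
# why it becomes slow once randomly placed resimulation data is added.
g = griddata(points, values, query, method="linear")

# Rbf fits its radial basis model once at construction time;
# evaluating further query points afterwards is then fast.
rbf = Rbf(points[:, 0], points[:, 1], values)
r = rbf(query[:, 0], query[:, 1])
```

Both calls return similar values for a point inside the convex hull; the difference lies in where the expensive work happens.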




\begin{table}[h]


\centering


\begin{tabular}{rcc}


& {mean squared error} & {mean absolute error} \\


griddata (only original data) & 0.014 & 0.070 \\


neural network & 0.010 & 0.069 \\


RBF & 0.008 & 0.057 \\


griddata & 0.005 & 0.046


\end{tabular}


\caption{Prediction accuracy for the different interpolation methods}


\label{tab:comparison}


\end{table}
