mirror of https://github.com/Findus23/guides.git synced 2024-09-19 16:03:51 +02:00

improve point cloud guide

Lukas Winkler 2023-09-22 16:58:28 +02:00
parent 9698587eb4
commit 78fede9e1b
Signed by: lukas
GPG key ID: 54DE4D798D244853
16 changed files with 148 additions and 43 deletions

.gitignore

@@ -9,3 +9,5 @@ hugo_stats.json
themes/
hugo
node_modules/
files.zip
extra-files

@@ -22,3 +22,6 @@ permalinks:
guide: /:slug/
taxonomies:
category: categories
#ignoreFiles:
#  - \.mp4$
#  - \.webm$


@ -1,7 +1,7 @@
---
title: "Visualizing Point Clouds"
date: 2022-05-16
categories:
- astrophysics
- data visualisation
author: Lukas Winkler

@@ -29,7 +29,6 @@ To now draw points for all particles, we go back to the default RenderView and s
If you don't see any points, check if you have focused the RenderView and the eye symbol in the Pipeline Browser is enabled. If the particles are not centered or very small (e.g. because of accidentally plotting the particle ID as a dimension before), one can open the "Adjust Camera" menu of the RenderView and select a Standard Viewpoint.
{{<image src="cloud1.png" >}}
If your dataset has an additional column, you can color the points according to it by selecting it in the "Coloring" section. The used colormap can be edited to e.g. use logarithmic values.
@@ -40,9 +39,7 @@ As a next step, reducing the opacity allows us to better see the distribution of
Still, areas with high density are just white blobs, and the level of detail depends on the zoom level, as the size of the points is always the same.
{{<video src_webm="video1.webm" src_mp4="video1.mp4" loop="true" >}}
One solution to this problem is to use Point Gaussians instead of points to represent every particle. For this, select "Point Gaussian" from the "Representation" drop-down. In the new "Point Gaussian" section you can then change the shader preset. While "Sphere" might work with a small number of large masses, representing them as a "Gaussian Blur" works best for large numbers of particles.
@@ -52,25 +49,21 @@ If you see triangle-like structures, try changing the opacity. Once again we nee
{{<image src="cloud4.png" >}}
{{<video src_webm="video2.webm" src_mp4="video2.mp4" loop="true" >}}
{{<video src_webm="video3.webm" src_mp4="video3.mp4" loop="true" >}}

## In Python

While Paraview is very powerful, it is often useful to display a subset of the data automatically during data analysis, without needing to export and open the dataset every time. Therefore, it is useful to recreate the same visualisation in Python.
For this we are using [vtk](https://vtk.org/), which Paraview is based on. As the [Python VTK wrapper](https://vtk.org/doc/nightly/html/md__builds_gitlab_kitware_sciviz_ci_Documentation_Doxygen_PythonWrappers.html) is not always very user-friendly, the best solution is [pyvista](https://pyvista.org), which abstracts many of [these details](https://docs.pyvista.org/version/stable/examples/02-plot/point-clouds.html) away.
```bash
➜ pip install pyvista
```

Let's assume `data` is a numpy array with three columns corresponding to the position of the particles (similar to the output of `np.random.random((10000, 3))`).
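If you want to experiment without a real simulation output, such an array can also be generated directly; here is a small sketch (the clustered toy data is purely made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# five made-up "clumps" of particles plus a uniform background,
# so the point cloud has some structure to look at
centers = rng.uniform(0, 1, size=(5, 3))
clumps = np.concatenate([c + rng.normal(scale=0.02, size=(1800, 3)) for c in centers])
background = rng.uniform(0, 1, size=(1000, 3))
data = np.concatenate([clumps, background])
print(data.shape)  # (10000, 3)
```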
Then we can initialize a pyvista Plotter like this:
```python
pl = pyvista.Plotter()
pdata = pyvista.PointSet(data)
```
Then we can plot the data:

```python
pl.add_mesh(
    pdata,
    # further arguments (style="points", point_size, opacity, ...) elided in the diff
)
pl.enable_parallel_projection()
```

Instead of specifying `style="points"` explicitly, we could also use `pl.add_points()` directly.
Once again `point_size` and `opacity` can be adapted to the dataset.

Finally, we can open the window using

```python
pl.show()
```

{{<image src="pyvista1.png" >}}
Unlike the Paraview visualisation, so far each particle is only shown as an individual dot. But just like the "Point Gaussians" setting in Paraview, we can change the visualisation to instead show every data point as a gaussian blur, [thanks to the pyvista developers](https://github.com/pyvista/pyvista/discussions/2576).
```python
pl.add_mesh(
pdata,
point_size=0.2,
style="points_gaussian",
render_points_as_spheres=False,
emissive=False,
opacity=0.2,
color="white",
)
```
More point cloud examples can be found in the [pyvista documentation](https://docs.pyvista.org/version/stable/examples/02-plot/point-clouds.html).
If we know the center of the object, we can manually specify it using

```python
pl.set_focus((halo.X, halo.Y, halo.Z))
```
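If the center is not known beforehand, it can also be approximated from the particle positions themselves. A minimal sketch (the random `data` array here is a stand-in for the real positions, and the `set_focus` call is only indicated as a comment):

```python
import numpy as np

data = np.random.random((10000, 3))  # stand-in for the real particle positions
# the mean position works as a simple center estimate;
# np.median is more robust if there are far-away outliers
center = data.mean(axis=0)
# pl.set_focus(tuple(center))
```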
@@ -125,6 +137,7 @@ pl.show_grid()
{{<image src="pyvista2.png" >}}

And optionally the output can be rendered in 3d:

```python
pl.enable_stereo_render()
# use one of the many SetStereoTypeTo*() functions
pl.ren_win.SetStereoTypeToSplitViewportHorizontal()
pl.ren_win.SetStereoTypeToAnaglyph()
```
If you prefer different mouse controls, where the negative z-axis always points down, you might want to enable the terrain style:
```python
pl.enable_terrain_style()
```
A regular graphics card should be able to display millions of points in real-time.
{{<video src_webm="video4.webm" src_mp4="video4.mp4" loop="true" >}}
Even with 512^3 particles the visualisation is still very interactive:
{{<video src_webm="video5.webm" src_mp4="video5.mp4" loop="true" >}}
### Videos in Python
While pyvista supports creating simple MP4 videos [out of the box](https://docs.pyvista.org/version/stable/examples/02-plot/movie.html), we can also do more complex automation of screenshots and camera:
```python
pl = Plotter(window_size=[800, 800])
pl.ren_win.OffScreenRenderingOn() # optional, needed if the window_size is larger than the screen resolution
pl.show(auto_close=False, interactive=False)
# orbit around center
path = pl.generate_orbital_path(n_points=250, shift=150, factor=2, viewup=[0, 0, 1])
global_i = 0
for i, point in enumerate(path.points):
pl.set_position(point, render=False)
pl.set_focus(pl.center, render=False)
pl.render()
pl.screenshot(f"/tmp/images/test_{global_i:04d}.png")
global_i += 1
start_pos = np.array(path.points[0])
# very simple zoom
end_pos = np.array([50, 50, 50])
diff_vec = end_pos - start_pos
for i in range(250):
pl.set_position(start_pos + diff_vec * i / 250, render=False)
pl.set_focus(end_pos, render=False)
pl.render()
pl.screenshot(f"/tmp/images/test_{global_i:04d}.png")
global_i += 1
for i in range(125):
pl.set_position(end_pos - diff_vec * i / 125, render=False)
pl.set_focus(end_pos, render=False)
pl.render()
pl.screenshot(f"/tmp/images/test_{global_i:04d}.png")
global_i += 1
pl.close()
```
Unfortunately, a large number of particles causes fine-grained "random" noise in the image, which compresses extremely badly in videos.
{{<video src_webm="video6.webm" src_mp4="video6.mp4" loop="true" >}}
## Bonus: Exporting Videos for Publication
{{<video src_webm="high_res1.webm" src_mp4="high_res1.mp4" >}}
{{<video src_webm="high_res2.webm" src_mp4="high_res2.mp4" loop="true" >}}
Exporting videos in Paraview and similar tools is rather easy, but the options are often quite limited and only support older formats and codecs. Therefore, the resulting videos can often be quite large and of low quality, which is especially bad for point clouds, where a lot of fine detail can be lost in video compression.
Because of this, it is often useful to export the video as a collection of (losslessly compressed) PNG files and [encode the video](https://gist.github.com/Vestride/278e13915894821e1d6f) yourself.

The following options are optimized for high quality while still having a small file size, which means they can be slow to encode. But as most videos are rather short, this should not matter much. Also, the videos embedded in this page are compressed more to keep the loading time small while losing only little quality.
### VP9

[VP9](https://en.wikipedia.org/wiki/VP9) is a widely supported modern video codec.
For the best results, a two-pass encoding is used, with the first command saving a temporary file and the second command actually encoding the video.
```bash
➜ ffmpeg -framerate 30 -i img.%04d.png -c:v libvpx-vp9 -pass 1 -crf 35 -b:v 5M -deadline good -cpu-used 0 -row-mt 1 -tile-columns 3 -frame-parallel 1 -an -pix_fmt yuv420p -f webm -y /dev/null
```

```bash
➜ ffmpeg -framerate 30 -i img.%04d.png -c:v libvpx-vp9 -pass 2 -crf 35 -b:v 5M -deadline good -cpu-used 0 -row-mt 1 -tile-columns 3 -frame-parallel 1 -auto-alt-ref 1 -lag-in-frames 25 -pix_fmt yuv420p -f webm -y out.webm
```
Important options:

- `-framerate`: set to the number of frames per second
- `-i img.%04d.png`: this assumes the images are called `img.0001.png`, `img.0002.png` and so on
- `-crf 35`: the Constant Rate Factor describing the quality of the output. It can be a value from 0 to 63, with lower values indicating a higher quality. In general, recommended values range from 15 to 35, but for complex videos a higher value might be needed to avoid huge files.
- `-b:v 5M`: optionally, in addition to `-crf` one can also set a constraint on the maximum bitrate (5 Mbit/s here) to limit the file size.
- `-pix_fmt yuv420p`: [chroma subsampling](https://en.wikipedia.org/wiki/Chroma_subsampling) for compression and support in browsers
- `-y`: answer "yes" when asked to overwrite existing files
- `out.webm`: use WebM as a container format for the output file
If the input is an existing video file instead of a list of images, replace `-framerate 30 -i img.%04d.png` with `-i input.mp4`.
Additional options:

- `-an`: remove the audio stream from the input video
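If your screenshots are not already numbered in the gapless `img.%04d.png` pattern that ffmpeg expects, a small helper can bring them into that form (a hypothetical convenience function, not part of the guide's pipeline; any renaming tool works just as well):

```python
from pathlib import Path


def rename_to_sequence(directory: str, pattern: str = "img.{:04d}.png") -> list[str]:
    """Rename all PNGs in `directory` into a gapless, zero-padded
    sequence so that ffmpeg can read them with `-i img.%04d.png`."""
    files = sorted(Path(directory).glob("*.png"))
    new_names = []
    for i, source in enumerate(files, start=1):
        target = source.with_name(pattern.format(i))
        source.rename(target)
        new_names.append(target.name)
    return new_names
```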
More information about the options can be found [here](https://sites.google.com/a/webmproject.org/wiki/ffmpeg/vp9-encoding-guide).
@@ -181,8 +249,8 @@
Some browsers don't support VP9 yet (most notably Safari), so a fallback to MP4 might be useful, as it is supported on most devices:

Here, we also use a Constant Rate Factor (CRF) to specify the quality.
It can go from 0 to 51, with a lower value indicating a higher quality. Values around 20 might be a good starting range.
```bash
➜ ffmpeg -framerate 30 -i img.%04d.png -vcodec h264 -b:v 1M -strict -2 -pix_fmt yuv420p -preset veryslow -crf 20 -movflags +faststart -y out.mp4
```

@@ -193,7 +261,9 @@
You can use the `<video>` tag to embed videos in websites.
For gif-like looping videos:

```html
<video autoplay loop muted controls playsinline>
    <source src="video.webm" type="video/webm; codecs=vp9">
    <source src="video.mp4" type="video/mp4">
</video>
```
For regular videos (only load the metadata/thumbnail until pressing play):

```html
<video preload="metadata" muted controls playsinline>
    <source src="video.webm" type="video/webm; codecs=vp9">
    <source src="video.mp4" type="video/mp4">
</video>
```



@@ -4,10 +4,20 @@
# https://trac.ffmpeg.org/wiki/Encode/H.264
# https://sites.google.com/a/webmproject.org/wiki/ffmpeg/vp9-encoding-guide
case "$1" in
VP9)
	ffmpeg -framerate 30 -i /tmp/images/test_%04d.png -c:v libvpx-vp9 -pass 1 -crf 35 -b:v 15M -deadline good -cpu-used 0 -row-mt 1 -tile-columns 3 -frame-parallel 1 -an -pix_fmt yuv420p -f webm -y /dev/null
	ffmpeg -framerate 30 -i /tmp/images/test_%04d.png -c:v libvpx-vp9 -pass 2 -crf 35 -b:v 15M -deadline good -cpu-used 0 -row-mt 1 -tile-columns 3 -frame-parallel 1 -auto-alt-ref 1 -lag-in-frames 25 -pix_fmt yuv420p -f webm -y -an out.webm
	;;
AV1)
	ffmpeg -framerate 30 -i /tmp/images/test_%04d.png -c:v libsvtav1 -pass 1 -crf 35 -b:v 7M -preset 8 -svtav1-params fast-decode=1 -an -pix_fmt yuv420p -f webm -y /dev/null
	ffmpeg -framerate 30 -i /tmp/images/test_%04d.png -c:v libsvtav1 -pass 2 -crf 35 -b:v 7M -preset 8 -svtav1-params fast-decode=1 -pix_fmt yuv420p -f webm -y -an out-av1.webm
	;;
MP4)
	ffmpeg -framerate 30 -i /tmp/images/test_%04d.png -vcodec h264 -b:v 15M -strict -2 -pix_fmt yuv420p -preset veryslow -crf 35 -movflags +faststart -y out.mp4
	;;
*)
	echo "VP9, AV1 or MP4"
esac

New files (stored with Git LFS):

- content/guide/visualizing-pointclouds/video5.mp4
- content/guide/visualizing-pointclouds/video5.webm
- content/guide/visualizing-pointclouds/video6.mp4
- content/guide/visualizing-pointclouds/video6.webm


@@ -3,11 +3,17 @@
# https://trac.ffmpeg.org/wiki/Slideshow
# https://trac.ffmpeg.org/wiki/Encode/H.264
# https://sites.google.com/a/webmproject.org/wiki/ffmpeg/vp9-encoding-guide
# https://www.webmproject.org/docs/encoder-parameters/
case "$1" in
VP9)
	ffmpeg -i "$2" -c:v libvpx-vp9 -pass 1 -crf 35 -b:v 7M -deadline good -cpu-used 0 -row-mt 1 -tile-columns 3 -frame-parallel 1 -an -pix_fmt yuv420p -f webm -y /dev/null
	ffmpeg -i "$2" -c:v libvpx-vp9 -pass 2 -crf 35 -b:v 7M -deadline good -cpu-used 0 -row-mt 1 -tile-columns 3 -frame-parallel 1 -auto-alt-ref 1 -lag-in-frames 25 -pix_fmt yuv420p -f webm -y -an out.webm
	;;
MP4)
	ffmpeg -i "$2" -vcodec h264 -strict -2 -pix_fmt yuv420p -preset veryslow -crf 30 -movflags +faststart -y -an out.mp4
	;;
*)
	echo "VP9 or MP4"
esac


@@ -1,5 +1,5 @@
<figure>
<video {{if .Get "loop"}}loop{{else}}{{ end }} {{if .Get "autoplay"}}autoplay preload="metadata"{{else}}{{ end }} muted controls playsinline>
{{ with .Page.Resources.GetMatch (printf "*%s*" (.Get "src_webm")) }}
<source src="{{ .Permalink }}" type="video/webm; codecs=vp9">
{{ end }}