In this post, the previous work on cloud generation and on the skydome is combined into a single scene. To this end, the illumination model is slightly adapted to match the brightness of the skydome, and issues with the positioning of clouds inside the scene are addressed. Finally, the advantages and disadvantages of the approach are discussed.
Clouds in the skydome
In contrast to previous posts, the creation of the scene needs some more consideration because the order and positioning of the elements matter. First the background – i.e. the skydome – is drawn, then the instanced cloud objects. To place a cloud inside the scene, a plane is constructed at a specified height above the horizon (whose y-coordinate is zero). A cloud then spawns at a random position outside the skydome, floats through the scene, and vanishes outside the skydome again. Clouds must not be instantiated if the frame rate is too low (below 30 FPS) or within a predefined timespan after the last cloud was created (this timespan controls the density of clouds on the sky plane). The first result of this process can be seen in Figure 1.
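The two spawn conditions can be sketched as a small gate function. The 30 FPS threshold is stated above; the cooldown value and all names are illustrative placeholders, not taken from the actual implementation:

```cpp
// The 30 FPS threshold is from the post; the cooldown is an assumed
// placeholder value that controls the cloud density on the sky plane.
const float MIN_FPS = 30.0f;
const float SPAWN_COOLDOWN_SECONDS = 5.0f;

// A new cloud may only be instantiated if the frame rate is high enough
// and the cooldown since the last spawn has elapsed.
bool maySpawnCloud(float currentFps, float secondsSinceLastSpawn) {
    if (currentFps < MIN_FPS)
        return false;  // frame rate too low
    if (secondsSinceLastSpawn < SPAWN_COOLDOWN_SECONDS)
        return false;  // cooldown still active
    return true;
}
```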
Figure 1: cloud plane and alpha blending problem
Obviously, applying the depth test in OpenGL is not enough to blend overlapping clouds correctly in their faded outer areas. Instead, each cloud may only overlap areas of the scene that were already drawn. Thus, the objects have to be sorted before passing the buffers and parameters to the shaders. To keep the sorting cheap, the list of offsets is compared linearly against the location of the camera: the cloud closest to the camera has to be drawn last, since its faded area must blend over the clouds behind it. The following algorithm is roughly adapted from bubble sort, with the exception of running through the list only once. This sorting function is called every frame and therefore establishes the correct order before any cloud enters the scene.
for each cloud (in draw order)
    if (distanceToCam(cloud) > distanceToCam(previousCloud))
        swap cloud and previousCloud in the draw buffers
The algorithm works nicely under the assumption that the camera cannot move through the scene fast enough to completely reverse the sorted list of clouds between two frames. During all tests, this approach saved heavy calculations (a full sort cannot be accomplished faster than O(n log n), where n is the length of the list).
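The single bubble-sort pass might look as follows on the CPU side. The struct and function names are illustrative; squared distances are used because the comparison does not need the square root:

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Squared distance to the camera; sufficient for ordering comparisons.
float distanceToCamSq(const Vec3& offset, const Vec3& cam) {
    float dx = offset.x - cam.x, dy = offset.y - cam.y, dz = offset.z - cam.z;
    return dx * dx + dy * dy + dz * dz;
}

// One bubble-sort pass per frame: neighbouring clouds in the wrong draw
// order are swapped, so the list converges to far-to-near order over a
// few frames while the camera moves slowly.
void sortPass(std::vector<Vec3>& offsets, const Vec3& cam) {
    for (std::size_t i = 1; i < offsets.size(); ++i) {
        // Far clouds must be drawn first; swap if the current cloud
        // is farther away than the one before it.
        if (distanceToCamSq(offsets[i], cam) > distanceToCamSq(offsets[i - 1], cam))
            std::swap(offsets[i], offsets[i - 1]);
    }
}
```

Each call costs only O(n), and repeated calls over consecutive frames finish the sort incrementally.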
Incorporating wind
As clouds are distorted along the prevailing direction of
the wind, the underlying spheres are stretched in the same direction.
vec3 stretchVector = expansionDirection + vec3(1, 1, 1);
vec3 stretchedPosition = scale * vertexPosition_modelspace * stretchVector + offset;
The positions of the clouds are also updated according to the wind direction by adding to the offsets:
offsets.at(i) += expansionDirection * FLOATING_SPEED_PER_SECOND * passedSeconds;
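Applied to the whole offset list, the update loop could look like this. The speed constant is an assumed placeholder; scaling by the elapsed frame time keeps the motion frame-rate independent:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Assumed value; the post does not state the actual floating speed.
const float FLOATING_SPEED_PER_SECOND = 2.0f;

// Move every cloud along the wind (expansion) direction, scaled by the
// time that passed since the last frame.
void updateOffsets(std::vector<Vec3>& offsets, const Vec3& wind, float passedSeconds) {
    float step = FLOATING_SPEED_PER_SECOND * passedSeconds;
    for (Vec3& o : offsets) {
        o.x += wind.x * step;
        o.y += wind.y * step;
        o.z += wind.z * step;
    }
}
```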
Adapting the illumination
In addition to illuminating a cloud with the Phong illumination model, the saturation and brightness depend on both the intensity of the sun and the current weather condition. Naturally, clouds are brighter the more intensely the sun is shining: the higher the sun stands on the skydome, the larger the angle of incidence of its rays and the brighter the cloud. Therefore the intensity of the sun, as calculated from its height on the skydome, determines the influence of the diffuse illumination component (between zero and one) and, to some extent, even of the ambient component (between a constant minimum and one). The correlation between the light's intensity and the impact of the illumination components turned out to be best described by a square-root function. Figure 2, Figure 3 and Figure 4 show the resulting illumination of clouds at different times of day.
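A minimal sketch of such a square-root mapping is given below. The ambient minimum is an assumed value, since the post only mentions "a constant minimum":

```cpp
#include <algorithm>
#include <cmath>

// Assumed lower bound for the ambient component.
const float AMBIENT_MIN = 0.2f;

// sunIntensity is in [0, 1], derived from the sun's height on the skydome.
// The square root brightens clouds quickly once the sun rises.
float diffuseWeight(float sunIntensity) {
    float s = std::max(0.0f, std::min(1.0f, sunIntensity));
    return std::sqrt(s);  // in [0, 1]
}

float ambientWeight(float sunIntensity) {
    // Scaled between the constant minimum and one.
    return AMBIENT_MIN + (1.0f - AMBIENT_MIN) * diffuseWeight(sunIntensity);
}
```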
Figure 2: clouds during sunrise
Figure 3: clouds at midday
Figure 4: clouds during the night
Weather conditions
The resulting procedural program can generate different weather conditions. Some variations can be obtained, for instance, as follows:
- Partly clouded: bright illumination, a long timespan between consecutive clouds, small scaling values for individual clouds
- Stormy: faster offset updates, strong and quickly changing surface noise
- Dull sky: a short timespan between consecutive clouds, gloomy illumination (dark ambient component), large scaling values for individual clouds
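These presets can be captured in a small parameter struct. All field names and values below are illustrative placeholders, not taken from the actual implementation:

```cpp
// Illustrative parameter presets for the weather conditions above.
struct WeatherPreset {
    float spawnCooldownSeconds;  // timespan between consecutive clouds
    float cloudScale;            // scaling of individual clouds
    float ambientLight;          // brightness of the ambient component
    float noiseSpeed;            // how quickly the surface noise changes
};

// Placeholder values chosen to reflect the relative tendencies described.
const WeatherPreset PARTLY_CLOUDED = {8.0f, 0.5f, 0.9f, 0.5f};
const WeatherPreset STORMY         = {4.0f, 1.0f, 0.6f, 2.0f};
const WeatherPreset DULL_SKY       = {1.5f, 2.0f, 0.3f, 0.5f};
```

Switching weather then amounts to uploading a different set of uniforms.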
Advantages of the approach
+ Efficient yet simple illumination techniques
+ Easy to simulate different weather conditions by adapting (uniform) values such as the wind direction, the light intensity, the timespan between instantiating two clouds, etc.
+ Easy to combine multiple primitives into a more complex
cloud by stacking spheres together
+ Dynamic model that allows for animated surfaces by
changing the noise-based displacement over time
+ A single cloud can be reused (instanced drawing)
Disadvantages of the approach
- Performance might be lower than for static
texture-oriented drawing approaches
- Only applicable for few cloud types (e.g. only for cumulus
or cirrocumulus clouds)
Conclusion
The overall aim of the project was to procedurally generate a cloud from a small set of input parameters. The final implementation is able to render a scene with clouds and a sky that closely resemble their natural appearance, taking only the current time of day, the wind direction, the wind intensity, and the timespan between creating two consecutive clouds as input. Although it does not achieve the performance of a static approach where textures are created and blended, it achieved notable results in the tests. Furthermore, the model is fully dynamic and is able to animate the surface of clouds in real time. The final version of the procedural cloud generator can be downloaded from GitHub.