The following video demonstrates the visual experience of the procedurally generated sky and clouds. It contains multiple views of the scene under slightly different conditions, e.g. different times of the day. When the camera is close to the clouds, it is noticeable that they slowly rotate. This effect was tuned to have only a slight influence on the behaviour of the clouds in order to make the whole scene as realistic as possible.
Procedural Cloud Generation
Friday, November 28, 2014
Wednesday, November 26, 2014
Perceptual Study
It has been empirically shown in clinical studies, psychological experiments and long-term observations that the current weather conditions directly influence people's mood. Although the significance of the effect was limited in some studies, the correlation between mood and weather is part of everyday life. Furthermore, the overall disposition and mood of a person impact visual processing, as Basso et al. have shown. In the context of virtual reality, it would be interesting to see whether a sunny sky is perceived differently than gloomy conditions.
Can the weather in a virtual environment noticeably affect or lift the mood of people? Does the perception of a viewer diverge when the same scene is shown under different weather conditions, and does this perceptual shift involve a change in mood? To validate the assumption that virtual reality has similar effects on one's mind as actual weather does, the implications explained above are validated in reverse order: the experiments first try to manipulate the perception of the viewer in order to then bias the mood. Altogether, this study strives to validate that the previously created procedural sky evokes similar reactions in people as real weather would.
Design
To clarify the planned experiment, I will outline its contents in the following sections and describe the expectations, the population, the procedure and the metrics of the perceptual study.
The previously created procedural sky has to be adapted to integrate a picture showing an arbitrary scene, as displayed in Figure 3. To this end, a texture of the scene shown in Figure 1 is mapped onto a plane created inside the skydome. The real sky is extracted from the picture, producing the alpha map displayed in Figure 2, which is used in a shader program to isolate the relevant parts of the texture, i.e. everything except the real sky. The saturation, contrast and brightness of the scene have to be adapted according to the illumination of the procedural sky; a sketch of this compositing step follows the figures.
Figure 1 Scene of a city used for the perceptual study
Figure 2 Alpha map for the part of the scene shown in Figure 1 that is used in the perceptual study
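As a rough illustration, the per-pixel blend performed by the shader could look like the following C++ sketch (using glm; the function name, the parameters and the luma weights are illustrative assumptions, and the contrast adjustment is omitted):

    #include <glm/glm.hpp>

    // Composite one pixel of the scene texture over the procedural sky.
    // Where the alpha map from Figure 2 marks the real sky (alpha near 0),
    // the procedural sky shines through; elsewhere the city scene is kept,
    // with saturation and brightness following the illumination of the sky.
    glm::vec3 compositePixel(glm::vec3 scene, float alpha, glm::vec3 sky,
                             float saturation, float brightness)
    {
        glm::vec3 grey(glm::dot(scene, glm::vec3(0.299f, 0.587f, 0.114f)));
        glm::vec3 adapted = glm::mix(grey, scene, saturation) * brightness;
        return glm::mix(sky, adapted, alpha);
    }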
Hypothesis
Different conditions for the clouds, the sky and the position of the sun are perceived differently by observers and thus evoke different emotions or transitions in their mood. The correlation between the weather conditions and the mood is expected to be minor yet present.
Target group
The target group of many perceptual experiments were students. To show that virtual reality triggers similar emotions in the subjects as real weather, a similar target group has to be chosen so that the results can be compared to existing studies.
Procedure
First, the subjects are assigned to one of three groups. Each group will see exactly the same footage but answer the questions described in a later section in a different order. There are supposed to be two videos, each lasting at least 5 minutes to have an effect on the viewers. The first video shows the scene with the procedurally generated sky in a sunny and overall cheerful setting, which is supposed to give the subjects a joyous disposition. The second video captures the same scene but with a gloomy, stormy and cloudy sky. All participants are asked to envision what they would do in the scene they are shown. The first group acts as the reference group and answers questions on their mood before seeing any of the scenes. The second group is asked to fill out a questionnaire on their current mood after they have seen the video with the sunny sky, and the third group does the same after the dark setting. All participants are asked to answer questions covering their perception of the scene after each video.
Metrics
The following generic questions can be used to investigate the disposition of participants towards the scene. The order of the questions is important to diminish bias in the answers to follow-up inquiries.
Questions related to the mood
- How would you describe your current mood on a scale from 0 (worst) to 10 (best)?
- What do you think is your general mood on average on a scale from 0 (worst) to 10 (best)?
- How was your mood while envisioning being in the scene shown before, on a scale from 0 (worst) to 10 (best)?
Questions related to the perception
- In which month could the previously seen scene have taken place?
- What do you think the outside temperature was?
- How realistic did the scene appear to you on a scale from 0 (most unrealistic) to 10 (highly realistic)?
Material
In this experiment the subjects see a video captured from the previously created procedural sky. Additionally, a scene is embedded and the real sky is replaced with the virtual one. The following pictures are excerpts of a possible demonstration to illustrate the material. The pictures on the left show a sunnier scene, whereas the right side could be perceived as gloomy and cloudy.
Figure 3 Different weather conditions for the skyline of a city
Expectations
- The group with the lowest measured mood on average should be the one that saw the dark scene, followed by the reference group that answered the questions on their mood before the experiment. The group with the overall highest results in the mood metrics should be the one that saw the video with the cheerful setting before answering the questions. These transitions can be ascribed to the impact that the virtual environment has on the participants.
- The darker scene should be perceived as a cold setting in autumn and the brighter scene should be matched with spring or summer.
References
- Denissen, J.J.A., Butalid, L., Penke, L., & van Aken, M.A.G. (2008). The effects of weather on daily mood: A multilevel approach. Emotion, 8(5), 662-667.
- Howarth, E., & Hoffman, M.S. (1984). A multidimensional approach to the relationship between mood and weather. British Journal of Psychology, 75(1), 15-23.
- Sanders, J.L., & Brizzolara, M.S. (1982). Relationships between weather and mood. Journal of General Psychology, 107(1), 155-156.
- Basso, M.R., et al. (1996). Mood and global-local visual processing. Journal of the International Neuropsychological Society, 2(3), 249-255.
Tuesday, November 18, 2014
Procedural Sky
In this post, the previous work on the cloud generation and the skydome is combined into a single scene. To this end, the illumination model is slightly adapted to match the brightness of the skydome, and issues with the positioning of clouds inside the scene are treated. Finally, the advantages and disadvantages of the approach are discussed and briefly outlined.
Clouds in the skydome
As opposed to previous posts, the creation of the scene needs some more consideration because the positioning of the elements is important. First, the background – i.e. the skydome – is drawn, followed by the instanced cloud objects. To place a cloud inside the scene, a plane is constructed at a specified height above the horizon (y-coordinate of zero). A cloud then spawns at a random position outside the skydome, floats through the scene, and vanishes again outside the skydome. Clouds must not be instantiated if the frame rate is too low (below 30 FPS) or within a predefined timespan after the last cloud was created (this timespan controls the density of clouds on the sky plane). The first result of this process can be seen in Figure 1; a sketch of the spawn logic follows the figure.
Figure 1 Cloud plane and the alpha blending problem
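A minimal sketch of this spawn rule, assuming hypothetical names such as maySpawnCloud and spawnPosition (the factor placing clouds slightly outside the dome is an illustrative choice):

    #include <cmath>
    #include <cstdlib>
    #include <glm/glm.hpp>

    const float MIN_FPS = 30.0f;

    // Block instantiation when rendering is too slow or the cooldown,
    // which controls the cloud density on the sky plane, has not elapsed.
    bool maySpawnCloud(float fps, float secondsSinceLastSpawn, float spawnCooldown)
    {
        return fps >= MIN_FPS && secondsSinceLastSpawn >= spawnCooldown;
    }

    // Place a new cloud just outside the skydome on the cloud plane (y = planeHeight).
    glm::vec3 spawnPosition(float domeRadius, float planeHeight)
    {
        float angle = 2.0f * 3.14159265f * std::rand() / float(RAND_MAX);
        float r = 1.1f * domeRadius;   // slightly outside the dome
        return glm::vec3(r * std::cos(angle), planeHeight, r * std::sin(angle));
    }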
Obviously, applying the depth test in OpenGL is not enough to get correct blending of overlapping clouds in the faded outer area. Instead, each cloud may only overlap areas of the scene that were already drawn. Thus, the objects have to be sorted before their buffers and parameters are passed to the shaders. To keep the sorting procedure fast, the list of offsets is linearly compared with the location of the camera. The cloud that is closest to the camera has to be drawn last, as its faded area has to blend over the clouds behind it. The following algorithm is roughly adapted from bubble sort, with the exception of running through the list only once. This sorting function is called every frame and therefore establishes the correct order before any new cloud enters the scene.
    // One bubble-sort pass per frame: keep the closest cloud drawn last.
    for (size_t i = 1; i < clouds.size(); ++i)
        if (distanceToCam(clouds[i - 1]) < distanceToCam(clouds[i]))
            swapInDrawBuffers(i - 1, i);   // reorder offsets and draw buffers
The algorithm works nicely under the assumption that the camera cannot move through the scene fast enough to completely invert the sorted list of clouds. During all tests, this approach saved heavy calculations: a full sort cannot be accomplished faster than O(n log n), where n is the length of the list, whereas a single pass is O(n).
Incorporating wind
As clouds are distorted along the prevailing direction of
the wind, the underlying spheres are stretched in the same direction.
vec3 stretchVector = expansionDirection + vec3(1, 1, 1);
vec3 stretchedPosition = scale * vertexPosition_modelspace * stretchVector + offset;
The positions of the clouds are also updated according to the wind direction by adding to the offsets:
offsets.at(i) += expansionDirection * FLOATING_SPEED_PER_SECOND * passedSeconds;
Adapting the illumination
In addition to the illumination of a cloud based on the
Phong illumination model, the saturation and brightness depend on both, the
intensity of the sun and the current weather condition. Naturally, clouds are
brighter the more intense the sun is shining – i.e. the higher the sun on the
skydome the bigger the angle of entrance of rays the brighter the cloud.
Therefore, the intensity of the sun, as calculated from the height on the
skydome, determines the influence of the diffuse illumination component
(between zero and one) and to some extent even the ambient illumination
(between a constant minimum and one). A correlation between the lights
intensity and the impact of the illumination components seemed to be best
described by a radicular function. Figure
2, Figure 3 and Figure 4 show the resulting illumination on clouds for
different times of a day.
Figure 2 Clouds during sunrise
Figure 3 Clouds during lunchtime
Figure 4 Clouds during the night
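The mapping from the sun's height to the influence of the illumination components could be sketched as follows (MIN_AMBIENT and the exact curve are assumptions; the post only states a square-root-like correlation):

    #include <algorithm>
    #include <cmath>

    const float MIN_AMBIENT = 0.2f;   // assumed constant minimum of the ambient term

    // sunHeight in [0, 1]: 0 at the horizon, 1 at the zenith.
    float diffuseInfluence(float sunHeight)
    {
        // square-root shape: brightens quickly after sunrise, flattens towards noon
        return std::sqrt(std::clamp(sunHeight, 0.0f, 1.0f));
    }

    float ambientInfluence(float sunHeight)
    {
        return MIN_AMBIENT + (1.0f - MIN_AMBIENT) * diffuseInfluence(sunHeight);
    }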
Weather conditions
Different weather conditions can be generated with the resulting procedural program. Some variations can, for instance, be obtained with the following parameter choices (a sketch of such presets follows the list):
- Partly clouded: bright illumination, high timespan for the creation of consecutive clouds, small values for scaling individual clouds
- Stormy: faster updates of the offsets, strong and quickly changing noise on the surface
- Dull sky: small timespan for the creation of consecutive clouds, gloomy illumination (dark ambient component), high values for scaling individual clouds
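Such conditions could be bundled into presets along the following lines (the struct, its fields and all concrete values are hypothetical illustrations of the parameters named above):

    // Hypothetical bundle of the parameters named in the list above.
    struct WeatherPreset {
        float lightIntensity;   // overall illumination (0 = gloomy, 1 = bright)
        float spawnCooldown;    // seconds between two consecutive clouds
        float cloudScale;       // scaling of individual clouds
        float windSpeed;        // how fast the offsets are updated
        float noiseSpeed;       // how quickly the surface noise changes
    };

    const WeatherPreset PARTLY_CLOUDED {1.0f, 20.0f, 0.5f, 1.0f, 0.3f};
    const WeatherPreset STORMY         {0.6f,  8.0f, 1.0f, 4.0f, 2.0f};
    const WeatherPreset DULL_SKY       {0.3f,  3.0f, 1.5f, 1.0f, 0.3f};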
Advantages of the approach
+ Efficient yet simple illumination techniques
+ Easy to simulate different weather conditions by adapting (uniform) values such as the wind direction, the light intensity, the timespan between instantiating two clouds, etc.
+ Easy to combine multiple primitives into a more complex
cloud by stacking spheres together
+ Dynamic model that allows for animated surfaces by
changing the noise-based displacement over time
+ A single cloud can be reused (instanced drawing)
Disadvantages of the approach
- Performance might be lower than for static
texture-oriented drawing approaches
- Only applicable to a few cloud types (e.g. cumulus or cirrocumulus clouds)
Outline
The overall aim of the project was to procedurally generate a cloud from a small set of input parameters. The final implementation is able to render a scene with clouds and a sky that closely resemble their natural appearance, taking only the current time of the day, the wind direction, the wind intensity and the timespan between creating two consecutive clouds as input. Although it does not achieve the overall performance of a static approach where textures are created and blended, it achieved noticeable results in the tests. Furthermore, the model is fully dynamic and is able to animate the surface of clouds in real time. The final version of the procedural cloud generator can be downloaded from GitHub.
Skydome
Since the project specification requires a full scene with clouds placed in the sky, the next step is to procedurally generate the sky. Different approaches can be chosen to achieve this. As spheres were previously generated successfully, I preferred a skydome over other concepts, i.e. a cube with a texture mapped on each face or a planar texture relative to the observer's point of view. The basic idea of a skydome is to pull a hemisphere over the scene. This can easily be achieved by removing half of a sphere (for a more detailed description of the spheres, see the first post on generating a single cloud).
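One way to obtain such a hemisphere is a UV sphere whose polar angle only runs to 90 degrees, so that all vertices stay above the horizon (the function and its parameters are an illustrative sketch, not the project's actual mesh code):

    #include <cmath>
    #include <vector>
    #include <glm/glm.hpp>

    // Upper half of a UV sphere: phi runs from the zenith (0) to the
    // horizon (pi/2) only, which effectively removes the lower half.
    std::vector<glm::vec3> buildSkydome(float radius, int rings, int sectors)
    {
        const float PI = 3.14159265f;
        std::vector<glm::vec3> vertices;
        for (int r = 0; r <= rings; ++r) {
            float phi = 0.5f * PI * r / rings;
            for (int s = 0; s <= sectors; ++s) {
                float theta = 2.0f * PI * s / sectors;
                vertices.emplace_back(radius * std::sin(phi) * std::cos(theta),
                                      radius * std::cos(phi),
                                      radius * std::sin(phi) * std::sin(theta));
            }
        }
        return vertices;   // index buffer construction omitted for brevity
    }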
Simulating time
The color of the sky changes throughout the day. In this section, the effect is modeled by illuminating the hemisphere. To this end, two colors are defined, the zenith color and the horizon color. The color is then interpolated between those two values depending on the mapped y-coordinate. Independent of the top color at the zenith of the hemisphere, the color of the horizon changes from red in the morning hours to white during the day, back to red during sunset, and to black through the night. The interpolation along the y-axis is weighted with the square-root-shaped influence of the light intensity, which is zero at sunset and sunrise and one for the highest position of the sun. Figure 1, Figure 2 and Figure 3 show representative results from the day-night cycle; a sketch of the interpolation follows the figures.
Figure 1 Skydome during lunchtime with a white horizon
Figure 2 Skydome during the night with a black horizon
Figure 3 Skydome during sunset / sunrise with a red horizon
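One plausible reading of this blend, sketched in C++ (the exact weighting of the mapped y-coordinate by the intensity is an assumption; the post only states that the square-root-shaped intensity enters the interpolation):

    #include <cmath>
    #include <glm/glm.hpp>

    // Vertical gradient of the dome, weighted by the light intensity.
    glm::vec3 skyColor(float mappedY,          // 0 at the horizon, 1 at the zenith
                       glm::vec3 horizonColor, // red at dawn/dusk, white at noon, ...
                       glm::vec3 zenithColor,
                       float sunHeight)        // 0 at sunrise/sunset, 1 at the apex
    {
        float intensity = std::sqrt(glm::clamp(sunHeight, 0.0f, 1.0f));
        float t = glm::clamp(mappedY * (0.5f + 0.5f * intensity), 0.0f, 1.0f);
        return glm::mix(horizonColor, zenithColor, t);
    }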
Light
In addition to the ambient illumination of the surface of the sphere, a strong specular component is incorporated to represent the sun. Depending on the intensity of the sun (cf. the description above), its position is calculated on a curved trajectory over the hemisphere. The intensity also determines the impact of the specular sun component; thus, the sun fades near sunset. Figure 4, Figure 5 and Figure 6 display the combined result of the skydome and the integrated sun; a sketch of the sun's trajectory follows the figures.
Figure 4 Sun near the zenith during lunchtime
Figure 5 Sun during the afternoon
Figure 6 Sunset with fading intensity near the horizon
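The trajectory and the fading of the sun could be sketched as follows (a simple arc in the x-y plane; the actual path over the hemisphere and the exact fade curve are simplifying assumptions):

    #include <algorithm>
    #include <cmath>
    #include <glm/glm.hpp>

    const float PI = 3.14159265f;

    // timeOfDay in [0, 1]: sunrise at 0, apex at 0.5, sunset at 1.
    glm::vec3 sunPosition(float timeOfDay, float domeRadius)
    {
        float angle = PI * timeOfDay;
        return glm::vec3(domeRadius * std::cos(angle),  // east-west travel
                         domeRadius * std::sin(angle),  // height above the horizon
                         0.0f);
    }

    // Weight of the specular sun component: fades to zero near the horizon.
    float specularWeight(float timeOfDay)
    {
        return std::sqrt(std::max(0.0f, std::sin(PI * timeOfDay)));
    }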