This is my first video game made with Unreal Engine. The goal is simple and similar to the original Slender Man game (http://www.parsecproductions.net/slender/): you have to collect 7 cookies throughout the level to escape. If Kawai Slender catches you, it's game over!
Assets were made with Blender and characters with MakeHuman. Follow the link below to download the video game, enjoy!
This project was done for the IN55 course at the University of Technology of Belfort-Montbéliard (UTBM, http://www.utbm.fr/en/utbm.html). Our objective was to create a realistic computer-rendered simulation of water in different chosen environments (sea, rivers, etc.). The rendering was obtained with the open source graphics library OpenGL (https://www.khronos.org/opengl/) and GLSL shaders. The project was implemented in the Visual Studio C++ IDE (Integrated Development Environment) with the help of the Qt library (http://www.qt.io/).
This program was developed by a team of four; each member was assigned a predefined set of tasks after careful discussion within the team:
Bertrand Rix
Sofiane Arbaoui
Cédric Boittin
Jonathan Siret
My role was to manage the lights and shadow effects in the scene, and to set up the project's deferred rendering loop, needed to combine the different effects of the scene.
Project video:
I. Lights and Shadows:
Since we are using outdoor environments, one of the sources lighting the scene should be the sun to achieve realistic outdoor lighting. And to make the ocean scene feel a bit less isolated, we added a lighthouse which projects a spotlight.
Two types of lighting were then implemented: directional lighting and spot lighting. The first one is used to simulate light coming from the sun; it is represented by parallel rays of light. The second was created for the lighthouse; it is composed of light rays emitted from a source and limited to a 3D cone (defined by a direction and an opening angle). Each light follows the Phong illumination model (http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/).
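As an illustration of how both light types can share the same Phong computation, here is a minimal CPU-side sketch in C++ (the vector helpers, coefficient names and the scalar-intensity simplification are ours for illustration; the project's actual lighting code lives in GLSL shaders):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Phong intensity for a surface normal N, a direction toward the light L and
// a direction toward the viewer V (all normalized). Scalar for simplicity;
// a real shader evaluates this per color channel.
float phong(Vec3 N, Vec3 L, Vec3 V, float ka, float kd, float ks, float shininess) {
    float diffuse = std::max(0.0f, dot(N, L));
    Vec3 R = N * (2.0f * dot(N, L)) - L;  // L reflected around N
    float specular = diffuse > 0.0f
        ? std::pow(std::max(0.0f, dot(R, V)), shininess) : 0.0f;
    return ka + kd * diffuse + ks * specular;
}

// Spot light: same Phong term, but zeroed outside the cone defined by the
// spot direction and the cosine of its half opening angle.
float spotFactor(Vec3 fragPos, Vec3 lightPos, Vec3 spotDir, float cosCutoff) {
    Vec3 toFrag = normalize(fragPos - lightPos);
    return dot(toFrag, normalize(spotDir)) >= cosCutoff ? 1.0f : 0.0f;
}
```

For the directional light, L is a constant direction for every fragment; for the spot light, L depends on the fragment's position and the Phong term is multiplied by the cone factor.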
The following images are screenshots to illustrate:
The directional light:
The spot light:
To complete the realism of the lighting, each lit object should cast a shadow. The most widely used algorithm is shadow mapping. For each light source, we place the camera at its position and look in its direction. Each object is then projected from this camera and the projection is saved in a shadow map, i.e. an image. Finally, when rendering the scene, we check against the shadow map whether part of the currently rendered object is occluded by another; if so, that part is darkened to simulate the shadow.
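The shadow test itself boils down to a depth comparison against the map. A minimal sketch, assuming the fragment has already been projected into the light's space (the plain depth array and the bias value are illustrative, not the project's actual code):

```cpp
#include <cassert>
#include <vector>

// Minimal shadow-map lookup. The fragment is assumed to be projected into
// the light's space beforehand; we keep only the resulting texture
// coordinates (u, v in [0, 1]) and its depth as seen from the light.
struct ShadowMap {
    int size;                  // resolution (size x size texels)
    std::vector<float> depth;  // closest depth per texel, from the light

    // True if the fragment is in shadow: something closer to the light was
    // recorded at this texel. The small bias avoids "shadow acne"
    // (self-shadowing caused by the discrete depth values).
    bool inShadow(float u, float v, float fragDepth, float bias = 0.005f) const {
        int x = static_cast<int>(u * (size - 1));
        int y = static_cast<int>(v * (size - 1));
        return fragDepth - bias > depth[y * size + x];
    }
};
```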
The directional light projects shadows based on its parallel rays of light, schematized below:
A screenshot example of the result in the scene:
The spot light projects a shadow based on its position in space and its direction, as follows:
For example, the shadow cast by the teacup illuminated by the lighthouse (the sun's directional light is disabled to illustrate this example):
II. PSSM and other optimizations:
One of the problems encountered with the shadow projection algorithm is the quality of the shadows. We can observe stair-steps along the edges of the shadow; this effect is called aliasing:
This is one of the limits of shadow mapping: the projected shadows depend heavily on the resolution of the shadow map image. Since an image is made up of pixels, the projection is saved in a discrete space and we lose information, such as whether part of an object is shadowed when its projection falls between two pixels of the map. This would not be much of a problem if the light source were close to the object casting the shadow, but since we are dealing with the sun, the aliasing phenomenon is amplified.
One solution would be to increase the resolution of the image, but the aliasing effect would still exist and a higher resolution would require more GPU memory.
To improve the shadows cast by the sun, we chose to implement the PSSM algorithm. Parallel-Split Shadow Maps is a shadow projection algorithm which consists in dividing the view frustum (or camera frustum) into different sections called splits and generating a different shadow map for each of those sections (a principle similar to the cascaded shadow maps algorithm). With PSSM we can obtain different shadow qualities based on the distance between the shadow and the camera viewing the scene. In our case, the farther the shadow is from the camera, the lower the quality of the shadow (or shadow map).
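For reference, the split positions in PSSM are usually chosen by blending a logarithmic and a uniform subdivision of the camera's depth range. A sketch of this common split scheme (the blend weight lambda and the split count are illustrative parameters, not values from our project):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// PSSM split planes: blend a logarithmic subdivision (which matches the
// perspective aliasing distribution) with a uniform one, controlled by
// lambda in [0, 1], over the camera depth range [zNear, zFar].
std::vector<float> pssmSplits(float zNear, float zFar, int numSplits, float lambda) {
    std::vector<float> planes(numSplits + 1);
    for (int i = 0; i <= numSplits; ++i) {
        float t = static_cast<float>(i) / numSplits;
        float logSplit = zNear * std::pow(zFar / zNear, t);  // logarithmic scheme
        float uniSplit = zNear + (zFar - zNear) * t;         // uniform scheme
        planes[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return planes;
}
```

Each pair of consecutive planes bounds one split, which receives its own shadow map; splits near the camera cover a small depth range, hence more shadow map texels per unit of world space.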
In this project,
this technique was implemented to improve the shadows projected by the directional
light, the sun.
Below, the differences between standard shadow mapping and PSSM, with shadows projected from the top of the island and the lighthouse present in the scene:
The top of the mountain and of the lighthouse projected on the water. Computed with standard shadow mapping.
The top of the mountain and of the lighthouse projected on the water. Computed with Parallel-Split Shadow Maps.
The two images above are the results of standard shadow mapping; the bottom ones come from PSSM. We can observe a clear improvement in the projected shadows, since these shadows are close to the camera.
Unfortunately, the implementation of PSSM was incomplete. It was unstable: shadows were missing from different viewpoints of the scene, so it was abandoned before the project was handed in.
Another possible solution to aliasing would be to soften the shadows by blurring the shadow map. However, a standard blur with a parameterized kernel size would blur the image uniformly; it would, for example, ignore the distance between the shadow and its caster, creating an unrealistic result. To obtain realistic results, algorithms exist such as PCF (percentage-closer filtering) or PCSS (percentage-closer soft shadows). They create shadows based on the distance from the casting object and effectively reduce shadow aliasing (http://www.geforce.com/hardware/technology/dx11/technology).
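The idea behind PCF can be sketched as follows: instead of a single binary depth test, the test is averaged over neighbouring texels, giving a soft shadow factor between 0 and 1. A simplified 3x3-kernel version on a plain depth array (kernel size and bias are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Percentage-closer filtering: average the shadow test over a 3x3
// neighbourhood of the shadow map. The result is a soft shadow factor
// in [0, 1] instead of hard 0/1 edges.
float pcf(const std::vector<float>& depthMap, int size,
          int x, int y, float fragDepth, float bias = 0.005f) {
    float shadow = 0.0f;
    int samples = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx < 0 || sy < 0 || sx >= size || sy >= size) continue;  // clamp at borders
            if (fragDepth - bias > depthMap[sy * size + sx]) shadow += 1.0f;
            ++samples;
        }
    }
    return samples > 0 ? shadow / samples : 0.0f;
}
```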
III. Rendering loop, or pipeline
A. Deferred shading
At the beginning of the project, we agreed on a rendering pipeline using the deferred shading technique. This technique is done in two passes. The first one consists in rendering the geometry of the scene and saving in a buffer all the information useful for the illumination of the scene. This buffer is called the GBuffer and is composed of different textures containing, for example, the colors and materials (ambient, diffuse, specular) and the surface normals of each object in the scene.
The second pass is illumination. It is performed in a shader which computes the lighting of the scene from the information contained in the GBuffer and writes the result to a texture. No geometry is processed here; only a full-screen quad is rendered as a support for the shader.
This technique has the advantage of allowing a great number of lights in the scene while minimizing the performance cost. Adding post-processing effects on top of the current rendering result is also simplified.
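The data flow can be sketched as follows (a deliberately simplified GBuffer holding only a normal and a scalar diffuse term; the real GBuffer is a set of full color and material textures):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Deferred shading data flow. Pass 1 writes one GBuffer sample per pixel;
// pass 2 loops over lights and shades each pixel from the GBuffer alone,
// never touching the scene geometry again. This is why the light count
// scales well: the lighting cost is per pixel, not per object.
struct GBufferSample {
    float nx, ny, nz;  // surface normal
    float diffuse;     // diffuse material coefficient (scalar for simplicity)
};

struct DirLight { float lx, ly, lz; float intensity; };  // direction toward the light

float shadePixel(const GBufferSample& g, const std::vector<DirLight>& lights) {
    float result = 0.0f;
    for (const DirLight& l : lights) {
        float nDotL = g.nx * l.lx + g.ny * l.ly + g.nz * l.lz;
        if (nDotL > 0.0f) result += g.diffuse * nDotL * l.intensity;  // clamp back faces
    }
    return result;
}
```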
B. Articulation of the different rendering passes
Since the project was mainly focused on water rendering, the different passes are almost all linked to the many effects this rendering requires. We rely heavily on frame buffer objects (FBOs) to save the result of each pass into textures previously attached to an FBO.
The first pass handles shadows, using the shadow mapping technique explained previously. It generates the shadow maps used later in the rendering pipeline to compute the shadow projection.
The second pass builds the water reflections. We render our scene from a particular point of view into a texture. The result is a reflection map used afterwards to account for the reflections of the world on the water.
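For a flat horizontal water plane y = h, that particular point of view is typically the camera mirrored across the plane. A minimal sketch (the flat-plane assumption and the helper are illustrative, not the project's exact setup):

```cpp
#include <cassert>

struct Point { float x, y, z; };

// Mirror a point across the horizontal water plane y = h. The reflection
// pass renders the scene from the mirrored camera position (with a flipped
// up vector or winding order), producing the reflection map sampled when
// shading the water surface.
Point mirrorAcrossWater(Point p, float waterHeight) {
    return {p.x, 2.0f * waterHeight - p.y, p.z};
}
```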
The third pass generates the refraction map used to compute the refraction effect. Note that the illumination of the reflections and refractions is not computed by deferred shading, but by the more classic forward shading, to reduce complexity.
Finally, the last pass, called the full rendering pass, renders all the objects in the scene (including the water, which at this point had still not been rendered) and saves all the attributes in the GBuffer. This GBuffer is passed to a shader which computes the lighting using the Phong method.
Conclusion
Nearing the end of development, we were overall satisfied with the result obtained from the presented techniques. The use of heightmaps for water rendering was simple and fast, but with hindsight this technique is better suited to small surfaces of water. Even if the effect is reduced by increasing the complexity of the combined heightmaps and by playing with the deformation coefficients, we can still observe repeated patterns in the wave animations. The ocean rendering, visible at great distances, and especially its animation, should be computed with more complex techniques, but in our case the rendering is relatively convincing. The different lighting effects like reflections were interesting to implement but complicated to parameterize to obtain a correct result.
To conclude, all of the work done on the shadows revealed a lot of problems that can arise from standard shadow mapping. The attempt at implementing the PSSM technique was unsuccessful, but it confronted us with the problem and pushed us to find solutions to the flaws inherent in the classic technique.
Performance:
The project was executed on two machines to measure the FPS: a machine recent at the time of the project (2012) and a two-year-old machine. The results are as follows: