GPU Marching Cubes in webGL


21, Aug 2014/Categories 3D, JavaScript, webGL/No Comments

GPU Marching cubes (GPUMC)

THIS POST HAD MANY ISSUES THAT I HAVE TO FIX IN THE CODE; as soon as I can fix them I'll put the text back with the corrections :(

Fluid Simulation with SPH (Smoothed particle hydrodynamics) in WebGL


29, Aug 2011/Categories 3D, webGL/8 Comments

VIEW THE EXAMPLE

The next video was recorded in real time with an AMD Radeon HD 6970M (2 GB); feel free to watch it if the simulation does not run on your computer.

Some links to start

When I was reading about particle shadows for the previous posts, I saw one demo that really caught my attention; it used a technique for fluid simulation that I really wanted to understand. If you have seen the demo you will probably know this blog; there I found that the fluids were made using SPH (Smoothed Particle Hydrodynamics), so I had a place to start searching for information. What I found were some good links that explain the math behind the system very well.

If you want to understand how the simulation works you should read the paper from Matthias Müller; it explains the basics of fluid simulation using particles to calculate the properties of the fluid. You can also read this paper from Harada, which is better suited for a GPU implementation.

There are two more links you should read. Micky Kelager also explains SPH, but the most important part is that he shows how to deal with collisions in the simulation using distance fields, and he also defines ways to calculate some important coefficients for the simulation. Another good link is the post from Iñigo Quilez, where he defines some useful analytic distance fields that we can use to create collision objects in the simulation.

Finally, GPU Gems 3 has a good chapter on rigid body simulation that explains very well how to build uniform grids to find the neighbors of a given particle.

The implementation

The process has a few steps, explained in the next image; the most important of them are the neighborhood search and the velocity update. I do not explain too much here because the previous links explain the GPU implementation of SPH much better than I could.

Neighborhood search & Velocity update:

One of the most important things in the simulation is to find the particles around the fragment particle, because they are responsible for the forces that will be applied in the simulation. We implemented a uniform grid based on the paper from Harada. This grid is a 3D voxel partition of space that allows us to keep track of up to four particles located in each voxel. A closer look at the algorithm is presented in the next image (taken from GPU Gems 3).
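Just to give an idea of what that lookup involves, here is a minimal sketch (illustrative names and sizes, not our actual shader code) of how a particle position can be mapped to the texel of its voxel in a flattened 3D grid texture; the actual grid construction, which stores up to four particle indices per voxel in the RGBA channels, is described in the papers linked above.

// Sketch: map a particle position inside the simulation bounds to a voxel of a
// gridSize^3 uniform grid, and then to the UV of that voxel in a 2D texture
// where the z slices are laid out in an atlas of slicesPerRow slices per row.
const float gridSize = 64.0;       // voxels per axis (illustrative)
const float slicesPerRow = 8.0;    // with gridSize = slicesPerRow^2 the atlas is square

vec2 voxelToUV(vec3 position, vec3 gridMin, vec3 gridMax) {
    vec3 normalized = (position - gridMin) / (gridMax - gridMin); // [0, 1] inside the grid
    vec3 voxel = floor(normalized * gridSize);                    // integer voxel coordinates
    vec2 slice = vec2(mod(voxel.z, slicesPerRow), floor(voxel.z / slicesPerRow));
    return (slice * gridSize + voxel.xy + 0.5) / (gridSize * slicesPerRow);
}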

The velocity update needs the collision information between the colliding objects and the particles, so it has to be handled in this step. What we did was create spring forces based on Kelager's paper, using distance fields (a signed box, as defined in Iñigo's post).
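To make the idea concrete, here is a minimal sketch (not our actual code, names are illustrative) of an analytic signed box distance field in the spirit of Iñigo's post, combined with a spring-like penalty force in the spirit of Kelager's paper. It is written for a solid box obstacle; for container walls the sign of the distance is simply flipped, but the idea is the same.

// Analytic signed distance to a box centered at the origin (negative inside).
float sdBox(vec3 p, vec3 halfSize) {
    vec3 d = abs(p) - halfSize;
    return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
}

// Spring-like penalty force pushing a penetrating particle out of the box.
vec3 collisionForce(vec3 position, vec3 boxHalfSize, float restitution, float particleRadius) {
    float penetration = particleRadius - sdBox(position, boxHalfSize);
    if (penetration <= 0.0) return vec3(0.0);          // not touching the obstacle
    // approximate the surface normal with the gradient of the distance field
    vec2 e = vec2(0.001, 0.0);
    vec3 normal = normalize(vec3(
        sdBox(position + e.xyy, boxHalfSize) - sdBox(position - e.xyy, boxHalfSize),
        sdBox(position + e.yxy, boxHalfSize) - sdBox(position - e.yxy, boxHalfSize),
        sdBox(position + e.yyx, boxHalfSize) - sdBox(position - e.yyx, boxHalfSize)));
    return restitution * penetration * normal;         // spring force along the normal
}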

The coefficients

Using SPH to simulate fluids requires defining some coefficients that alter the properties of the fluid. These are:

gridSize: changes the quantity of voxels used for the uniform grid. A bigger gridSize gives more definition to the simulation, but the overall performance suffers. If you change this parameter you should also change the quantity of particles in the simulation.

volume: the volume defines the total space the particles occupy in a rest state. It is also used to define the mass of each particle, so changing the total rest volume also changes the mass of the particles.

pressureK: this variable changes the strength of the pressure from each particle. It is important because it keeps the particles from collapsing; on the other hand, if you make it too strong, the particles will repel each other too much and the simulation will become unstable.

viscosity: the viscosity is a damping factor between the velocities of the particles; it keeps particles moving smoothly relative to each other.

maxSearchRatio: this is perhaps the most delicate variable in the simulation; it defines the maximum smoothing search radius for the weighting kernels. If this value is too small, there will be no particles for the fragment particle to interact with, so there will be no simulation; on the other hand, if the radius is too big, the forces will not be smoothed properly and there will be some instabilities in the simulation.

restitution: this value is responsible for the strength of the collision forces; it acts as a spring force at the boundaries, so you can control how the particles react in a collision (in an elastic or inelastic way).

dt: this is the integration time step for the simulation. Since we are using a simple integration equation, A(t + dt) = A(t) + dt * dA/dt (see the sketch after this list of coefficients), values should be small to avoid the errors produced by big changes over big time steps.

quality: this variable defines the quantity of particles used from each voxel to calculate the densities and forces. Changing this value speeds up the process, but the simulation suffers from lack of information, so the visual quality is somewhat hurt.
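As mentioned for dt, the update itself is plain forward Euler. A minimal sketch of that step as a fragment shader (illustrative names, and ignoring how the values are actually packed into 8-bit textures) would be:

// Sketch of the explicit Euler update written to the position texture.
// uPositions and uVelocities hold the previous state of each particle.
precision highp float;
uniform sampler2D uPositions;
uniform sampler2D uVelocities;
uniform float uDt;      // the dt coefficient described above
varying vec2 vUV;       // one texel per particle

void main() {
    vec3 position = texture2D(uPositions, vUV).xyz;
    vec3 velocity = texture2D(uVelocities, vUV).xyz;
    // forward Euler: A(t + dt) = A(t) + dt * dA/dt
    gl_FragColor = vec4(position + uDt * velocity, 1.0);
}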

You can also see the position, velocity and density fields applied as shader colors on the particles by changing the shader option in the control panel. Finally, you can switch between the point and billboard-sphere rendering methods. All these variables can be updated with the control panel; once you update them you only have to restart the simulation.

Finally here are some images of our first breaking dam simulation.


Simulation showing the densities of each particle.


Simulation showing the position field of the particles.


Simulation showing the velocity field of the particles.

We would like to thank AlteredQ for helping test the simulator; sadly we couldn't find a solution for some configurations, so we have made a video in case you cannot play the simulation correctly. The simulation works fine with these configurations:

- Mac OS (Lion) with AMD Radeon HD 6970M 2 GB using Google Chrome and Firefox.
- Mac OS (Snow Leopard) with AMD Radeon HD 6750M 1 GB using Google Chrome and Firefox.
- Mac OS (Leopard) with Nvidia 8800 GT 512 MB using Firefox.
- Mac OS (Snow Leopard) with Nvidia 9600GT 512 MB using Google Chrome and Firefox.
- Windows XP with Nvidia 8800GT 512 MB using Google Chrome.
- Windows 7 with AMD Radeon HD 6750M 1 GB using Google Chrome and Firefox.
- Windows 7 with Nvidia 9600GT 512 MB using Google Chrome and Firefox.

Feel free to try it, and if it's broken please let us know your configuration and exactly what you saw while running the simulation. This is our first version of the implementation, so we hope to make it faster using a more convenient neighborhood search. We hope that future posts will follow two branches: SPH optimizations and particle rendering using marching cubes.

 

Curl Noise + Volume Shadow particles


05, Aug 2011/Categories 3D, webGL/6 Comments

view the high resolution example
view the mid resolution example
view the low resolution example

A good link is a good gift…

When we try to code anything in our studio, the first thing we do is search for information about the subject in order to find papers and techniques that could help us develop what we want. The bad thing about it is that you lose a lot of time on Google trying to find that specific paper that reveals the "secrets" of the math behind the code.

But sometimes things are even worse, because we do not even know what we are searching for, and this is when a good starting link is a good "gift". In our case, finding this blog was a perfect starting point to find GPU techniques and papers related to those techniques.

The work made by these guys is amazing, but more importantly they talk about all the methods used in their work, so you can start searching on your own if you want to do something similar. In our case we wanted to make some particle animations like the ones found here, and we found the perfect starting point in this post.

There were two main things we had to deal with in order to get a simple particle animation:

  • Create a fluid like animation for the particles.
  • Shade the particles to give some volumetric effect.

Curl Noise:

In the post from DirectToVideo we found two lines that caught our attention: they mentioned a technique created by Robert Bridson to fake velocity fields for a procedural fluid flow, called "Curl Noise", and this is how we found this paper. In it the author defines a divergence-free noise suitable for velocity simulations.

Using divergence-free potentials is important for a flow simulation because it avoids sinks in the flow (think of it this way: the particles will not all end up in a single final position, like a hole in space). Implementing it is quite simple because you only have to calculate the partial derivatives of the potentials (noises) used to create your field.

To create the potentials used for the velocities we used the equations from the next image. The only assumption we made is that every scalar component of the potential vector field is a 2D function, which is enough for applying the rotor (curl) operator.

The second group of equations explains our assumption: we defined each axis potential as a function of the other two axes in order to use 2D textures for the three potentials; this way we do not need 3D volume textures for each potential. With this assumption we ended up with three 2D textures that define our potential, so the velocity is the rotor (curl) operator applied to the potential field, as you can see in the equations. To calculate the partial derivatives we only had to take finite differences on the given textures, and the good thing is that since our point is defined in texture space we can use this position to read the potential value from the three defined potentials.
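As a minimal sketch (illustrative names, ignoring the 8-bit packing problem discussed further below), evaluating the velocity as the curl of the three potential textures with central finite differences could look like this:

// Sketch: velocity = curl of the potential field. Each potential component is
// stored in its own 2D texture: uPotX is a function of (y, z), uPotY of (x, z)
// and uPotZ of (x, y), as described above.
precision highp float;
uniform sampler2D uPotX;
uniform sampler2D uPotY;
uniform sampler2D uPotZ;
uniform float uStep;    // finite difference step in texture space

vec3 curlVelocity(vec3 p) {
    vec2 e = vec2(uStep, 0.0);
    // partial derivatives by central finite differences on the textures
    float dPz_dy = (texture2D(uPotZ, p.xy + e.yx).r - texture2D(uPotZ, p.xy - e.yx).r) / (2.0 * uStep);
    float dPy_dz = (texture2D(uPotY, p.xz + e.yx).r - texture2D(uPotY, p.xz - e.yx).r) / (2.0 * uStep);
    float dPx_dz = (texture2D(uPotX, p.yz + e.yx).r - texture2D(uPotX, p.yz - e.yx).r) / (2.0 * uStep);
    float dPz_dx = (texture2D(uPotZ, p.xy + e.xy).r - texture2D(uPotZ, p.xy - e.xy).r) / (2.0 * uStep);
    float dPy_dx = (texture2D(uPotY, p.xz + e.xy).r - texture2D(uPotY, p.xz - e.xy).r) / (2.0 * uStep);
    float dPx_dy = (texture2D(uPotX, p.yz + e.xy).r - texture2D(uPotX, p.yz - e.xy).r) / (2.0 * uStep);
    // v = rot(P) = (dPz/dy - dPy/dz, dPx/dz - dPz/dx, dPy/dx - dPx/dy)
    return vec3(dPz_dy - dPy_dz, dPx_dz - dPz_dx, dPy_dx - dPx_dy);
}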

Now that we knew how to create the velocity field, we needed three good noise textures to work with. This is an issue because if we load external textures our development would be tied to those images, so we decided to create the noise textures ourselves.

Our first approach was to make textures using Perlin noise functions, but we found that they are somewhat expensive to calculate, so we opted for a more practical solution, simplex noise, and the best thing is that we found one webGL hack here. With the noise functions solved we only needed to make the noise turbulent, which can be done by adding higher frequencies; you can read a full explanation here.
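The turbulence idea can be sketched like this; snoise() stands for whatever 2D simplex noise implementation is used (for example the webGL hack linked above), and the octave count is illustrative:

// Turbulence sketch: sum several octaves of simplex noise, doubling the
// frequency and halving the amplitude on each octave.
float snoise(vec2 v);   // assumed to be provided by the simplex noise code

const int OCTAVES = 4;  // illustrative

float turbulence(vec2 p) {
    float value = 0.0;
    float amplitude = 0.5;
    float frequency = 1.0;
    for (int i = 0; i < OCTAVES; i++) {
        value += amplitude * abs(snoise(p * frequency));
        frequency *= 2.0;
        amplitude *= 0.5;
    }
    return value;
}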

This is how we ended up creating our own textures, which we can change in realtime (not every step) and which allow us to find the noise best suited to our needs. In the next image you can see the same base noise with and without turbulence.

Saving the potentials as textures adds a new problem to solve: textures in webGL only store 8 bits per channel, so if you naively try to save the potential value in one channel you will only get 256 possible values. This means that if your noise is defined in the [0-1] range you will find jumps of 1/256 in your noise. To solve this you can pack your one-dimensional value into the RGB components of the texture, which gives you a resolution of 2^24 for a given value.

We use two functions to pack and unpack floating point values in the textures, these are defined like this:

// Packs a float in the [0, 1) range into the RGB channels of a color (8 bits each).
vec4 pack(float value) {
return vec4(floor(value * 256.0)/255.0, floor(fract(value * 256.0) * 256.0)/255.0 , floor(fract(value * 65536.0) * 256.0)/255.0, 1.0);
}

// Recovers the original float from the packed RGB channels.
float unpack(vec4 pos) {
return pos[0] * 255.0 / 256.0 + pos[1] * 255.0 / 65536.0 + pos[2] * 255.0 / 16777216.0;
}

The first function wraps one value into the RGB components, so you can save one value with very high resolution; the second function unwraps the RGB components back to the original value. Note that these functions do not handle negative values, so it is up to the programmer to define how the unwrapped value is going to be treated. The bad news is that you need a whole texture to save one value, so if you want to save one 3D position you will need three textures for it.

The last paragraph describes the main bottleneck of the application: since we have to save the particle positions to run the simulation, we need to traverse all the particles three times to save each axis position in a different texture. In the next image you can see one of the axis-aligned position textures used.

Particles shading:

To shade the particles we applied the same technique from our previous post (http://www.miaumiau.cat/2011/06/shadow-particles-part-ii-optimizations/), but we modified it to accept a variable bucket count so we could adapt the final resolution of the shadows in the volume. Since we can set many more slices, we saw that the interpolation used between two slices was not necessary, so we took it out to gain some speed.

Now that we can change the bucket count, we had to look for the best compromise among the bucket size, the quality of the shadow volume and the quality of the floor shadow; for a 2048 texture size we found that the initial 8×8 bucket count is a very good compromise for the given parameters. With this in mind, if you want to implement good quality shadows in your volume we recommend 16×16 bucket slices, but the resolution of the final floor shadow will be very low. In the next image you can see one test of the shadows with and without color.

The process:

With the curl noise, the volume shadows and the floating point issues solved, we ended up with an algorithm of seven steps; this process requires five render target changes and we need to traverse all the particles five times: three times to save the particles' current positions, once for the shadows and once more to paint them all.

We also perform some window (2D) calculations in the fragment shader to get the correct blurring for the final composition, and we calculate the potentials in the fragment shader as well.

Two things should be implemented to get a correct particle rendering. First of all, if we want to render all the particles we will need to use blending, but this requires that all the particles be sorted (if not, you will see far fewer particles than you expect). The other thing is to use another texture to control the life of the particle; this way we don't have to reposition the particle at the emitter every time it reaches its "y" limit.

Some workarounds:

The process described above has a huge bottleneck saving all the particle positions, so we wanted something faster to get the particle positions. Instead of calculating the positions step by step, we defined the positions based on a path (a Bezier path); this Bezier would be affected by the noise, so we would only need one texture to save the Bezier points.

One good thing about using a Bezier path is that you can use the linear interpolation of the textures to get the current particle position on the curve. This way we don't have to save all the particle positions; instead we only have to save some Bezier points in a texture (to be concrete, 512 points) and then read the particle positions using gl.LINEAR filtering, which calculates the interpolation between points in the texture read (all of this for free).
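As a rough sketch (illustrative names, and ignoring the packing issue discussed above), reading a path point from that texture looks like this; with gl.LINEAR filtering the fractional part of the horizontal coordinate gives the interpolation between two stored Bezier points for free:

// Sketch: the path texture stores one Bezier path per row (512 points wide);
// the texture is assumed to be sampled with gl.LINEAR filtering.
uniform sampler2D uPathTexture;
uniform float uPathCount;   // number of rows (paths) in the texture

vec3 pathPosition(float t, float pathIndex) {
    // t in [0, 1] along the path; the hardware interpolates between the two
    // nearest stored points when t falls between texel centers
    vec2 uv = vec2(t, (pathIndex + 0.5) / uPathCount);
    return texture2D(uPathTexture, uv).xyz;
}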

With this layout you can have as many paths as the height of your texture, and each path can have as many points as the width of the texture. In our case we only used two paths; you can see some images of the implementation below.

The previous image shows the positions texture using two paths, and the 3D result of the Bezier with the curl noise applied. This way we don't need to save the positions in three textures. The bad thing about it is that all the particles are constrained to the Bezier path, so we cannot perform any particle dispersion.

Instead of applying the curl noise to the Bezier path, we could use some of the total particles to create a volumetric path, and use some more to make some particle dispersion with the first technique. In the next image you can see how the Bezier paths are rendered using just one texture.




100K Particles self-shaded from miaumiau interactive studio on Vimeo.

So if we use the Bezier path to move some initial particles, move the emitter position along the path, and finally define the initial exit velocity of the dispersion particles at the emitter using the Bezier's tangent, we think we could make something like this, but with far fewer particles :(

Shadow Particles (part II: Optimizations)


28, Jun 2011/Categories 3D, webGL/2 Comments

VIEW THE EXAMPLE

In the last post we talked about the implementation of volume shadows in a particle system; that approach used a couple of for loops in order to define the final shadow intensity for each particle. These loops meant that the application had to evaluate the shadow map about 32 * nParticles times, making the process very slow.

So we tried a couple of optimizations to deal with the defects of the previous work. The first thing we did was move all the texture reads to the fragment shader, and then we also tried to accumulate the shadows for each step in the fragment shader. At the end of the working process we defined the following steps to get the shadows done.

First Step: Shadow Buckets

The first step is no different from the last approach: we define 64 buckets in the vertex shader using the depth of the particles, assuming that all of them are inside a bounding box, and then we render them into a shadow framebuffer, defining a color that depends on the shadowDarkness variable. For this step we use the same getDepthSlice function in the vertex shader.

void getDepthSlice(float offset, mat4 transformMatrix) {
vec3 vertexPosition = mix(aVertexFinalPosition, aVertexPosition, uMorpher);
eyeVector = vec4(vertexPosition, 1.0);
eyeVector = transformMatrix * eyeVector;
gl_Position = uPMatrix * eyeVector;
desface.w = 64.0 * (gl_Position.z - uShadowRatio + uBoundingRadio) / (2.0 * uBoundingRadio) + offset;
desface.z = floor(desface.w);
desface.x = mod(desface.z, 8.0);
desface.y = floor(desface.z / 8.0);
desface.y = 8.0 – desface.y;
gl_Position.x -= gl_Position.w;
gl_Position.x += gl_Position.w / 8.0;
gl_Position.x += 2.0 * gl_Position.w * desface.x / 8.0;
gl_Position.y += gl_Position.w;
gl_Position.y -= gl_Position.w / 8.0;
gl_Position.y -= 2.0 * gl_Position.w * desface.y / 8.0;
}

There is one very important difference between this function and the previous one: this new one requires two parameters, an offset and the transformMatrix. The second one is used to tell the function which view (camera view or light view) is used to define the buckets; this is useful if you try to use the bucket system to sort the particles and blend the buckets.

The first parameter offers you the possibility to obtain the next or previous bucket for a given depth; with this we can interpolate between two buckets or layers, which gives us a linear gradient of shadows between two layers instead of 64 fixed steps.

Second Step: Shadow blending

This step is the main optimization of this code. What it does is blend the previous bucket into the next one, so you can define all the shadows for every layer in 64 passes. Blending the buckets requires enabling gl.BLEND and setting gl.blendFunc to gl.ONE, gl.ONE (additive blending).

Then you have to render one quad to the shadow framebuffer using the shadow map of that same framebuffer as a texture; the most important thing here is to get the bucket coordinates for each of the 64 passes of the quad. This is also done in the vertex shader: for each pass, one depth is defined for the quad, and the vertex shader defines the UV texture coordinates using the next chunk of code.

if(uShadowPass == 2.0) {
desface.x = mod(uDepth, 8.0);
desface.y = floor((uDepth) / 8.0);
gl_Position = vec4(aVertexPosition, 1.0);
gl_Position.xy += vec2(1.0, 1.0);
gl_Position.xy /= 2.0;
gl_Position.xy = (gl_Position.xy + desface.xy) / 4.0;
gl_Position.xy -= vec2(1.0, 1.0);
desface.x = mod((uDepth - 1.0), 8.0);
desface.y = floor((uDepth - 1.0) / 8.0);
vTextureUV.xy = (desface.xy + aVertexUV.xy) / 8.0;
}

There is no big deal here: what it does is define the coordinates depending on the depth, written as X,Y values (depth = 8 * y + x). As you can read, we define one vec2 called "desface" that holds the X,Y values used for the texture. The next image shows the result of this blending.

Third Step: Blur passes

This step has no difficulty, but it requires the use of a composition framebuffer as an intermediate destination; this is done to avoid reading and writing the same texture in one pass. We defined the blur in two different passes, one for the X component and another for the Y component; we used a simple box blur, but you can use a Gaussian blur or any other blur you like.
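For reference, a minimal sketch of the horizontal pass of such a separable box blur (illustrative names and kernel size) could be:

// Horizontal box blur pass; the vertical pass is identical but offsets along y.
precision mediump float;
uniform sampler2D uShadowMap;
uniform float uPixelSize;   // 1.0 / texture width
varying vec2 vUV;

void main() {
    vec4 sum = vec4(0.0);
    // 5-tap box kernel: average the neighbouring texels along x
    for (int i = -2; i <= 2; i++) {
        sum += texture2D(uShadowMap, vUV + vec2(float(i) * uPixelSize, 0.0));
    }
    gl_FragColor = sum / 5.0;
}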

Fourth Step: Particles rendering

In this final step we render all the particles to the window framebuffer using the shadow texture, the lightTransformMatrix and the cameraTransformMatrix; first we transform them to the light view to read the correct shadow intensity from the map.

In this part of the process you could read the shadow force for the particle depending on its depth and bucket, but then all the particles in the same bucket would show the same "force" even though their initial depths are different. This is where we use the offset parameter of the getDepthSlice function: for a given depth and bucket we can obtain the current bucket and the next one, and interpolate the shadow force for the current depth between those two layers.

Once the shadow force is defined, we have to transform the particle position to the camera view to obtain the light attenuation and the position of the particle, and finally render it.

In the vertex shader we have this chunk of code:

if(uShadowPass == 3.0) {
getDepthSlice(1.0, uLightTransformMatrix);
vTextureUVNext = vec2((gl_Position.x / gl_Position.w + 1.0) / 2.0, (gl_Position.y / gl_Position.w + 1.0) / 2.0);
vInterpolator = desface.z - desface.w;
getDepthSlice(0.0, uLightTransformMatrix);
vTextureUV = vec2((gl_Position.x / gl_Position.w + 1.0) / 2.0, (gl_Position.y / gl_Position.w + 1.0) / 2.0);
vec3 vertexPosition = mix(aVertexFinalPosition, aVertexPosition, uMorpher);
eyeVector = vec4(vertexPosition, 1.0);
vec4 lightVector = vec4(uLightVector, 1.0);
float distance = abs(dot(lightVector.xyz , eyeVector.xyz) – dot(lightVector.xyz , lightVector.xyz)) / length(lightVector.xyz);
vAttenuation = distance * distance;
vAttenuation = vAttenuation < 1.0 ? 1.0 : vAttenuation;
eyeVector = uCameraTransformMatrix * eyeVector;
gl_Position = uPMatrix * eyeVector;
}

Here we get the UV texture coordinates for the current bucket and the next one (notice that we call the getDepthSlice function using an offset of 0.0 and 1.0); the attenuation calculation for each particle is also done in this shader.

In the fragment shader the code goes like this:

if(uShadowPass == 3.0) {
vec4 color1 = vec4(0.7, 0.3, 0.2, 0.05);
vec4 color2 = vec4(0.9, 0.8, 0.2, 1.0);
gl_FragColor = mix(color2, color1, uMorpher);
float shadow;
if(uUseInterpolation) {
float shadowForce = texture2D(uSampler, vec2(vTextureUV.x, vTextureUV.y)).r;
float shadowForceNext = texture2D(uSampler, vec2(vTextureUVNext.x, vTextureUVNext.y)).r;
shadow = mix(shadowForceNext, shadowForce, vInterpolator);
} else {
shadow = texture2D(uSampler, vec2(vTextureUV.x, vTextureUV.y)).r;
}
gl_FragColor.rgb *= (1.0 – shadow);
if(uUseAttenuation) gl_FragColor.rgb = 1.0 * gl_FragColor.rgb / vAttenuation;
gl_FragColor += ambientColor;
}

Here we can define a color tint for the shadow; we also read the shadow from the texture for the current bucket and the next one to perform the interpolation for the final shadow intensity.

And that is it, four steps to render the shadows. These steps give you the chance to blur the shadows, make interpolations between two shadow layers, and tint the shadows. But it has some disadvantages too: this process requires multiple render target changes, and it requires careful variable definitions to make it work.

In the example (click on the first image if you haven't done it yet) you can play with these variables:

- shadowBlur: it defines the blur in the shadow map.

- shadowDarkness: as its name says, it makes the shadow darker or lighter inside the volume and in the shadow applied to the floor.

- shadowTint: it “tints” the shadow with the particles color.

- lightAngle: it allows you to change the light direction from 0 to 45 degrees; this helps you to see how the shadows change in the volume and on the floor depending on the position of the directional light (remember that it's always pointing to the center of the object).

- cameraRatio: this ratio gets you closer to or farther from the volume.

- useAttenuation: it toggles the light attenuation over distance (squared); if on, you will see the effects of the shadows plus the attenuation, if off you will see only the shadows in the volume.

- useInterpolation: it toggles the interpolation between two buckets for a given depth of a particle in those two layers. With a high bRatio and this variable off you should see the segmentation of the shadow in the volume.

- showTexture: it shows you the final shadow map used for the volume shadows and the floor shadow. To read it right you should go from bottom to top: the points closest to the light are in the bottom-left corner of the image, and the final shadow (the one used on the floor) is in the top-right corner of the image.

- bRatio: this is the most important variable of the whole process; this float defines the radius of the bounding sphere that contains the volume. You must define a sphere that holds all the particles inside, or you will find some glitches in your rendering. But beware: you could be tempted to define a big bounding sphere for the current volume, but what you are really doing is assigning fewer layers to the volume.

For this example it is really easy to define the bRatio, but when you are animating the lights and the particles things start to get messy and the only thing left is to define a big bounding sphere. If this is the case you could end up with 16 to 8 layers for one volume, and that is when the interpolation of the bucket information helps. Try to play around with this variable and turn the interpolation on and off, and you will see its effects.

As we said before, we hope to write a third article using this method to do some animations.

Shadow Particles (part I: Shadow Mapping)


15, Jun 2011/Categories 3D, webGL/3 Comments

VIEW EXAMPLE

After about one year without writing, this is the first of three very verbose posts about shadow particles…

One year and a half ago Félix and I were talking about how we could implement a good shading effect for a particle system made in Flash. In those days people could render 100,000 particles with no problem using point lighting or color shading based on the position of the particle, but we wanted something more, and that is when we started to talk about shadows on particles.

Shadowing the particles was not an easy task (for Flash in those days), but we could find very useful information about it on the web: GPU Gems has a very good article about rendering volume objects, and we also found a very handy paper about smoke particles written by Nvidia.

Both articles talk about "half angle rendering"; this kind of rendering uses the half angle between the view direction and the light direction to sort the particles and calculate the shadows for them. The bad news is that this algorithm requires sorting the particles.

We tried to implement the algorithm in Flash but it was a total failure, so we parked the idea for a not-so-immediate future. Some time later we got news about webGL and Stage3D (Molehill), and we got very excited about the idea of having hardware acceleration (and the possibility to implement the shadows), even though we still love Flash (with its future acceleration).

Two weeks ago we finally got some time to play around with webGL, so we started to learn the insides of it; we really recommend reading this page, it has many tutorials that actually helped us. We also took a look at the OpenGL ES 2.0 Specification in order to see what functions we could use for the shading.

Another good way to get in touch with webGL is to read two "classes" of the Three.js framework, WebGLRenderer and WebGLShaders. The first one is very explicit about how to render objects with webGL, but the second one is a must-read: this class wraps all the shaders written for Three.js, and you can explore them to get an idea of how to program your own shaders.

Shadow Particles, "our" approach:

Shadowing particles is quite easy in OpenGL if you have all the extensions it offers on the desktop (the full OpenGL version), but on the web (OpenGL ES 2.0) there are some issues that have to be solved to make it work; these issues are:

- There are no 3D textures in webGL.
- There is no gl_FragData in the fragment shader.
- Sorting particles on the web has to be done in Javascript or, in the best case, on the graphics card.

If we don't have 3D textures we can't save the shadow information in layers as we first wanted; on the other hand, if there is no way to use gl_FragData in the fragment shader, we must render each piece of channel data we want in a different pass. Finally, we decided that sorting in JS is not an option, and trying to sort on the graphics card is one task we didn't even consider (too hard for us ☹).

So at the end we decided to use the following algorithm:

- All the particles would be "part" of one "object"; by this we mean that we would make only one call to the graphics card to paint all the particles, providing one array buffer with the quantity of particles and their initial positions. This means that all the animations would be made in the vertex shader.

- Render the shadow information in a layered 2D texture: we would use an 8×8 layered texture in order to have 64 shadow layers; the main idea behind it is that each layer represents the shadow that affects the next layer. It is as if you slice the volume into parts and render the shadow that the first part casts onto the second, the second onto the third, and so on.

- Use render-to-texture from a framebuffer to define the 2D shadow texture: since OpenGL ES gives us the possibility to use framebuffers, we render the information into one buffer in order to get the texture. In the next pass we can use that texture in the shaders to define the point colors.

- Use a second pass to define the final color of the particles and paint them into the window framebuffer.

So, as you could read before, we use two passes, rendering all the particles at once: the first pass gets the shadow information and the final pass renders the particles with the "correct" color.

Rendering the shadow map:

The shadow map that we wanted to obtain is something like the following picture: it has 64 "buckets" for 64 different depths, defined by transforming the vertex positions to the light UCS. Once the transformation is done, the 64 layers must lie between the minimum and maximum z of the volume containing all the particles.

The bad news is that you must have a bounding box to define the zMin and the zMax, and it gets worse when you animate your geometry. To solve this, instead of a bounding box we decided to use a bounding sphere that gives us one measure (the radius) that contains all the geometry; this is how the "shadowBoundingRatio" was born.

The "shadowBoundingRatio" is a radius defined by the user (the programmer) based on the geometry that is going to be rendered (if you are going to animate the geometry there are some hacks that can be done to properly change this variable).

Now that we have a way to define the 64 depth steps, we only have to use the vertex shader to render the particles into their bucket; the bucket is defined using these equations:

desface.z = floor(steps * (gl_Position.z - uShadowRatio + uBoundingRadio) / (2.0 * uBoundingRadio));
desface.x = mod(desface.z, sqrtSteps);
desface.y = floor(desface.z / sqrtSteps);

The first line of code defines the actual depth step of the particle based on the shadowBoundingRatio (uniform uBoundingRadio in the vertex shader) and the uShadowRatio (explained later). The second and third lines define the x and y bucket position, from 0 to 7, in the 64-bucket texture to properly render the particle.

The uniform variable uShadowRatio is the equivalent of the camera distance from one target. It allows us to adapt the slice rendering to the position in the texture (think of it as the only way that you could see all the information without mixing it into other buckets or rendering it too small).

Once the depth step and the actual bucket are defined, we only have to place the particle in the framebuffer; we alter the gl_Position of the particle using the previous information, so getting the actual bucket and defining the gl_Position is done with this function in the vertex shader:

void getDepthSlice() {

eyeVector = uLightTransformMatrix * eyeVector;
gl_Position = uPMatrix * eyeVector;
desface.z = floor(steps * (gl_Position.z - uShadowRatio + uBoundingRadio) / (2.0 * uBoundingRadio));
desface.x = mod(desface.z, sqrtSteps);
desface.y = floor(desface.z / sqrtSteps);

gl_Position.x -= gl_Position.w;
gl_Position.x += gl_Position.w / sqrtSteps;
gl_Position.x += 2.0 * gl_Position.w * desface.x / sqrtSteps;

gl_Position.y += gl_Position.w;
gl_Position.y -= gl_Position.w / sqrt(steps);
gl_Position.y -= 2.0 * gl_Position.w * desface.y / sqrtSteps;

 

}

Moving each vertex requires taking into account that the position gets normalized: after the perspective division each factor is mapped to the [0-1] range and scaled by the viewport size. The final window position is defined by these equations:

X = 0.5 * (gl_Position.x / gl_Position.w + 1.0) * ViewportSize
Y = 0.5 * (gl_Position.y / gl_Position.w + 1.0) * ViewportSize

So every move has to be done taking into account the "w" factor of the position; that is why we move the particles relative to the "w" factor in the previous function. Further reading about this is highly recommended here.

With the previous function in the vertex shader we obtain one framebuffer that we can render to a texture for the next render pass; you can see one example of it by clicking on the next image. This example is the shadow map for the main example, so you can see how the shadow changes because of the light's movement.

The last example shows a white map with black particles; actually, in the framebuffer we use a vec4(0.0, 0.0, 0.0, 0.0) RGBA background color, and the particles are painted with a 1.0 / 64.0 * vec4(1.0, 1.0, 1.0, 0.0) particle color. We do this so that for a particle to receive no light at all, all 64 layers must cast shadow onto it; the fewer layers that affect the particle position, the less dark the shadow on that particle will be.

Once the particles are drawn in the framebuffer, we only have to pass the resulting texture to the next pass; one good example of how to do this appears in this tutorial from LearningWebGL.

Rendering the second pass:

The second pass is used to simulate the light interaction with each particle. Since particles do not have a normal, not much can be done with real light simulations, but one thing that can be done is to define a light attenuation based on the distance from the particle to the light source. If the distance is defined with the light position, we get a point light source; if it is defined with the plane formed by the light vector (light.target - light.position) and the light position, we get a directional light.

Since the shadows in the layers are rendered parallel to each layer, the final shadows correspond to a directional light source, but in our approximation we have a mix of point light attenuation and directional light shadows (this is because the point-to-point distance is easier to calculate than a point-to-plane distance). But if you want to be rigorous, you can change the attenuation distance calculation to the directional light one.
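To make the two options concrete, here is a small sketch (illustrative names) of both distance calculations; the attenuation is then just a power of the returned distance:

// Point light: plain point-to-point distance.
float pointLightDistance(vec3 particle, vec3 lightPosition) {
    return length(particle - lightPosition);
}

// Directional light: distance from the particle to the plane that passes
// through the light position and has the light vector as its normal.
float directionalLightDistance(vec3 particle, vec3 lightPosition, vec3 lightVector) {
    return abs(dot(normalize(lightVector), particle - lightPosition));
}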

The final attenuation is, by definition, the square of the distance obtained (via the directional or point light), but it can be defined with any power of the distance to obtain a desired effect (linear attenuation, square attenuation, cubic attenuation).

To change the shadow darkness you can change the divisor of the vec4 color used in the vertex shader (the 1.0/64.0): if the 64.0 divisor is smaller you get darker shadows, while higher values make the shadow softer.

If you think the attenuation is right and the shadows are fine, but the whole thing is too dark, you can always add an ambient light (adding a fractional color to all the particles) after you implement the shadow and attenuation.

Not everything is perfect….

The second pass has a little issue that we will address in a future post: each particle receives a shadow contribution from all the previous depth layers, so you have to read the shadow information from all the previous layers for each particle and then sum the collected information to get the shadow intensity for that particle.

This means that "for" loops must be used in the vertex shader: for the first depth there are no contributions from previous depths, but for the n-th depth we have a contribution from (n-1) depths, and this makes the shading quite slow compared to using only the attenuation. How slow? If we calculate the sum of n from n = 1 to n = 64 we get 2080 reads (it is not THAT bad), but dividing this number by the total number of depths (64) gives an average of about 32 texture reads for each depth (this slows down the performance a lot).

There is a workaround for this problem, and it is as simple as applying 64 more passes to the framebuffer to sum the individual buckets: the first bucket would have no information, the second would be the sum of the first and the second, the third would be the sum of the first, the second and the third, and so on.

In each step of the composition we add the previous sum to the next bucket, so each bucket ends up with all the information in itself. This means that we do not need to use the "for" loops to read the texture and sum the components in the shader. Another benefit of this composition is that the final bucket contains all the shadow information that the volume produces on one surface, so you can project the shadow from the particles onto any object in your scene.

This optimization of the code also allows us to redefine the second rendering pass in order to create blending among the particles: we can render each depth with its final color, as we did in the shadow texture, and then we would have the particles sorted in layers. Finally we would only have to add each part of the framebuffer color texture to get the proper blending for transparent particles.

There is another hack for the "for" loop issue; it is actually what we are doing in this example, and it is to "assume" that the strongest shadows for one layer are the ones closest to it, so you don't have to sum all the previous layers for one depth, but only a limited number of them (we are using only ten layers). The drawback of this hack is that it produces many artifacts (but it is a little faster).

In the next post we will make the improvements discussed before; a second post will talk about the composition in the framebuffer (for the shadow texture and the second pass of the color texture), and finally a third post will talk about how to use this algorithm to animate particles (taking into account the limitations of the bounding sphere radius), and how to render the particles' shadow onto a surface.

For now we have a little example that shows everything we have discussed in this post: one simple volume where you can click on the scene to drag the particles, turning the light attenuation and shadows on and off to see the shading on them. You can see it by clicking on the image at the beginning of the post.

Surface Tracker


26, Jul 2010/Categories AS3, General/2 Comments

VIEW EXAMPLE

Many sites have been made implementing interaction between video and photos, text, or any other additional assets that can be inserted into the original video to enhance the final experience. Tracking simple planes can be done very easily with the help of After Effects or any other post-production software, but when it comes to tracking complex surfaces there is no simple way to do it.
We were asked to develop a way to track surfaces for a project designed by David Navarro (www.davidnavarro.net). David wanted to track a t-shirt in order to place the user's photo on the chest of the t-shirt. After thinking about how we could do it, we got the main idea in five simple steps:
1. Track predefined control points placed in the t-shirt with the help of After-Effects.
2. Modify the video in order to create the mask and the shadings that would affect the mesh previously created.
3. Define one Nurbs-Mesh using the tracked control points in the t-shirt.
4. Create a color correction for the inserted image by the user.
5. Integrate the mesh and the video for the final result.

Tracking Control Points:
When it comes to tracking control points on a t-shirt, the first idea that came to us was to draw a grid of points on the t-shirt and try to track them with Motion. The main problem with this is that if you define an 8×8 grid you end up tracking 64 points per frame (which would probably not be tracked well, so you would be editing many frames in order to get the required result).
So instead of tracking points we ended up tracking regions; these regions are defined as four-corner areas distributed equidistantly over the original grid of points (see next image).

In the image below there is a rectangular region of the chest that will be replaced with an image defined by the user. This region contains rectangular areas that are tracked using Mocha (a plug-in for After Effects). Since all the squares share at least one corner with each other, only the black squares on the t-shirt must be tracked in order to get all the control points needed.
Once the squares are tracked for all the needed frames, you should re-order the four corners obtained for each square in order to rewrite the points in the row-column ordered format used to create one interpolated Nurbs mesh. Finally you can export these points with the X, Y and frame information in an XML to use in ActionScript.

Video Masking and Shading:
Since we are not specialists in video post-production, we had the help of Manolo Calvo (www.freelancetv.es); he was responsible for the final tracking of the rectangular regions, but more importantly he was the one who did the masking and shading of the video in order to superimpose lights and shadows on the generated mesh. Video masking can be done using the mesh generated with the original control points; one thing to take into account is that this mask should be smaller than the final integrated mesh in order to avoid holes between the mesh and the video.
In this work that was not the case: Manolo made the masking by hand, and he also painted the lights and shadows over the video in order to give us a final composition with the alpha channel ready to integrate the final mesh. In the next image you can see the final result of the video composition with the lights and shadows added.

Creating the Nurbs Mesh:
Using the classes discussed in these previous posts (http://labs.miaumiau.cat/?p=178, http://labs.miaumiau.cat/?p=338), we could interpolate one mesh with the original control points; the final mesh could have whatever resolution was needed to show the final image in a flexible way, and it also allowed us to modify the behavior of the interpolation (the degree of the final mesh in the u,v directions). Depending on the topology of the surface, you can decide how to interpolate it: if the internal points do not affect the final interpolation, you can interpolate using only the border curves of the surface; on the other hand, if the surface has big changes in its interior you should interpolate using all the control points defined in the grid of the t-shirt (or whatever surface you are tracking in the video).

Color Correction:
The final video made by Manolo had desaturated colors, low contrast and low brightness, so in order to adapt the final mesh to the video some color correction had to be applied to the image inserted by the user. To do so we programmed a set of modifiers for a given bitmap data and tried many pictures in order to define the final values of contrast, brightness, saturation and blur that would be applied to each inserted image. Every time an image is going to be inserted in the t-shirt, the values of these parameters are changed to make the final color correction.
Using this correction and the shading created over the video would give the illusion of integration between the mesh and the video.

Video and Mesh Integration:
Having the video post-produced with the alpha channel and the lights and shadows painted into the final composition, integrating the mesh for the final result is very easy. The mesh must be one layer under the video so the latter can affect the mesh representation like a multiply blend mode.
Using the five steps described above, we have programmed a simple example (click on the first image) that shows the final integration with the video. You can switch between an image and the webcam, you can also change the depth of the video and the mesh to see how the video affects the mesh, and finally you can see how the mesh is created.

Finally, we would like to thank Manolo & Ivan for their invaluable help with the video tracking and post-production for this project.

NURBS in Flash (part 2)

22, Apr 2010/Categories 3D, AS3, General/3 Comments

NURBS Curve

Working with surfaces in Flash can be done in many ways. Away3D offers a way to work with Bezier patches that is suitable for many things (one example of this is this very nice example of the Utah teapot), but if you want a more precise interpolation among the control points, Nurbs surfaces give you more control over them.

Nurbs surfaces require understanding how Nurbs curves work; we have written one post about it, so if you haven't read it yet press here. The surfaces are an extension of the curves, the only difference being that you'll need two parameters (u, v) in order to define the surface, so when you are going to calculate the surface you need to calculate the curves in one direction "V" and the results are used to calculate the curves in the other direction "U".

The main advantage of Nurbs surfaces over Bezier surfaces is the control of the degree in each direction; this means that for each surface you have two independent degree values (degreeU, degreeV) that define the local control interpolation for each direction. Nurbs surfaces also offer the possibility to adapt the weight of each control point in order to make the surface go closer to that point.

The modeller presented in this post allows you to work with one Nurbs surface of 7×7 control points; you can change the degree for each direction, the segmentation of the whole surface and the distribution of the domain region parameters.

Surface Tessellation:

Nurbs surfaces can be seen as a two-dimensional non-linear transformation of a rectangular domain region [0-1] [0-1] to a parameter region. This behaviour presents a problem when you try to tessellate the surface, because if the parameters in the domain space are equally distributed, the result of the tessellation in the parameter space is not going to present the same equal distribution. In order to overcome this issue, a couple of cubic transformations are used to define the distribution of the parameters; these curves are located in the surface editor of the modeller (the light blue boxes).

If you rotate the surface (try to do it when it is planar) and change the sliders of the boxes, you will see how the distribution of the segments changes; this should be done in the planar case for each change of the degree in order to get an equal distribution in the parameter space.

These curves don't solve the distribution problem in an exact way. The best thing to do is to calculate the parameter that should be used for each step, forcing the condition of a fixed step between each pair of points on the curve. This process requires precalculating the curve, then calculating the curve's length to define the step for each pair of points, and finally using the first derivatives of the curves to calculate the needed parameter. The bad thing about this process is that it requires many iterations for a given parameter in order to evaluate whether the parameter fits the defined fixed step, so it is not a good solution for real-time rendering.

Degree U,V:

Changing the degree in each direction changes the interpolation factor (local control) for that direction; the same control points (with the same weight definitions) can result in N×M different surfaces (N being the number of control points in the U direction and M the number of control points in the V direction). So you can have a linear interpolation in one direction and a cubic interpolation in the other (second image), or you can have a quadratic interpolation in one direction and a cubic interpolation in the other (third image).


Segmentation (level of detail):

One advantage of working with parametric surfaces (Bezier or Nurbs) is the total control of the level of detail for a given surface. The tessellation of the surface can be done by evaluating the error of the approximation to the parametric surface; this means that by defining the error (distance from a given parametric point to the tessellated point on the surface) you can know exactly the segmentation needed to satisfy that error. In the modeller you have an option to change the segmentation of the surfaces if you want a better or worse approximation.

Changes in the segmentation can be used in real time to satisfy a required frame rate and level of detail: if you have many surfaces in one scene, you could define more segments for the surfaces close to the camera and reduce the segments for the surfaces away from the focal view. This helps speed up the rendering of a scene. The last option in the Nurbs editor controls the total segments used to tessellate the surface; you can model with low segmentation (keeping a high frame rate to move the points and rotate the shape), and then you can see the final result by giving more segments to the surface. If you want to try the modeller just press on the next image.

NURBS Surfaces

The code:

As explained before, implementing Nurbs surfaces requires the use of Nurbs curves, so part of the actual code is explained here; only a new function is needed to calculate the surface tessellation, so just copy the code below into the Nurbs class (from part 1).
[as]
//Function that generates a Nurbs surface…
public static function surface(u_cps : uint, points : Vector.<Vector3D>, Ne : uint = 10, Nn : uint = 10, degreeU : uint = 3, degreeV : uint = 3, order : int = -1, cU : Number = 1, cV : Number = 1) : Vector.<Vector3D> {
var i : uint;
var j : uint;
var k : uint;
var output : Vector.<Vector3D> = new Vector.<Vector3D>();
var v_cps : uint = points.length / u_cps;
var cp_curves : Array = new Array();
var param : Number;

//Separate the points for each curve…
for(i = 0;i < u_cps; i++) {
cp_curves[i] = new Vector.<Vector3D>();
for(j = i;j <= v_cps * (u_cps - 1) + i; j += u_cps) {
cp_curves[i].push(points[j]);
}
}

//Generate parameters if they are not defined…
if(Nurbs.Ne != Ne) {
Nurbs.Ne = Ne;
paramsE = [];
for (i = 1;i <= Ne; i++) {
param = (i - 1) / (Ne - 1); //divide by (Ne - 1) so the last parameter reaches 1.0
paramsE.push(-2 * (1 - cV) * Math.pow(param, 3) + 3 * (1 - cV) * Math.pow(param, 2) + cV * param);
}
}

if(Nurbs.Nn != Nn) {
Nurbs.Nn = Nn;
paramsN = [];
for (i = 1;i <= Nn; i++) {
param = (i - 1) / (Nn - 1);
paramsN.push(-2 * (1 - cU) * Math.pow(param, 3) + 3 * (1 - cU) * Math.pow(param, 2) + cU * param);
}
}

var jMax : uint = paramsN.length;
var iMax : uint = paramsE.length;

for (j = 0;j < jMax; j++) {

//Points for the “V” curves…
var resultant_curve : Vector.<Vector3D> = new Vector.<Vector3D>();
for(k = 0;k < cp_curves.length; k++) {
resultant_curve.push(nurbs(paramsN[j], cp_curves[k], degreeU, order));
}

//Points fot the “U” curve….
for (i = 0;i < iMax; i++) {
output.push(nurbs(paramsE[i], resultant_curve, degreeV, order));
}
}

return output;
}
[/as]

There will be a third part of this series of posts (Nurbs in Flash) about how the surfaces can be rendered taking advantage of the parametrization for the vertex normal calculation, and other features that speed up real-time rendering (only for parametric surfaces).

NURBS in Flash (part 1)

07, Apr 2010/Categories AS3, General/4 Comments

NURBS Curve

This post is a little long so I will publish it in two parts: the first one is about NURBS in 2D (curves) and the next one will be about NURBS in 3D (surfaces)…

Ever since I have been working in Flash I have wanted to write a class that would allow me to work with NURBS curves and surfaces the way I used to work with them in 3dsMax, but getting to understand how they work is somewhat difficult because there is some math that needs to be applied in order to get it right.

NURBS is the acronym for Non Uniform Rational B-Splines; Nurbs curves and surfaces are a precise mathematical representation of a free-form curve or surface. All the theory needed to understand them is well explained here, but I'll make a short introduction to the Nurbs concepts for a better understanding of the example below.

B-Splines:

B-Splines are a generalization of Bezier curves. These curves use a set of interpolation functions called basis functions, hence the "B" in "B-Splines"; this kind of interpolation has the advantage of allowing local control of the curve with the control points, unlike Bezier curves. This means that while the whole Bezier curve is affected by all the control points, each control point of a B-Spline curve only affects one portion of the whole curve. The construction of the basis functions can be read here.

Rational B-Splines:

Rational B-Splines are a generalization of B-Splines; the rational factor makes it possible to change a control point's influence on the curve using a weight factor on each control point: changing the weight of one control point will make the curve get closer to or farther from the edited control point. Using rational B-splines, conic sections can be represented exactly.
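For reference, this rational evaluation is exactly what the final loop of the class below computes: given the basis functions N_i(p), the weights w_i and the control points P_i, the point on the curve is

C(p) = ( Σ N_i(p) * w_i * P_i ) / ( Σ N_i(p) * w_i )

so increasing w_i pulls the curve towards P_i.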

Non Uniform Rational B-Spline:

The non-uniform part of the acronym is not another generalization; it is the way rational B-splines start and end at the desired end points. The basis functions needed to interpolate the spline require a set of knots that define the way each basis function contributes to the global interpolation; the knots can be created with a uniform distribution, using the same distance between them, or with a non-uniform distribution that forces the interpolation to start at the first control point and finish at the last one. You can see the way the knots affect the resulting curve here.

The example below has two curves with seven control points: the red one is a Nurbs curve and the blue one is a Bezier curve, and the light blue lines represent the convex hull of both curves. You can move the control points to edit both curves and see the differences between them; you can also change the weights of the control points and the degree of the Nurbs curve to see how the degree changes the general interpolation.

One of the most interesting things about Nurbs curves and Bezier curves is that for a given set of "n" control points (all with the same weight), the Nurbs curve of degree n-1 is equal to the Bezier curve created with the same control points. The lower the degree of the Nurbs curve, the closer the curve will be to the convex hull. Finally, if a degree of 1 is defined, the curve will be exactly the convex hull.

The AS3 Class:

The ActionScript class calculates Nurbs points on a curve and on a surface. It has three static functions: Nurbs.nurbs(), Nurbs.nurbsCurve() and Nurbs.surface(); the parameters needed for each function follow from the theory of Nurbs curves and surfaces. The whole class is not shown here, only the first function, which finds a point on a Nurbs curve for a given parameter.

The function Nurbs.nurbs() requires four parameters:

p… parameter (value between 0 and 1).

controlPoints… vector of Vector3D that defines all the control points of the curve.

degree… degree of the curve interpolation.

order… the order alters the way the knots vector is created (uniform or non-uniform); usually it should not be used.

[as]
package math {
import flash.geom.Vector3D;

/**
* @author miaumiau.cat
*/
public class Nurbs {

//Function that calculates a point on a Nurbs curve…
public static function nurbs(p : Number, controlPoints : Vector.<Vector3D>, degreeParam : uint = 3, orderParam : int = -1) : Vector3D {

var cp : uint = controlPoints.length;
var degree : uint = degreeParam < 1 ? 1 : degreeParam > cp ? cp : degreeParam;
var order : uint = orderParam == -1 ? degree + 1 : orderParam < 0 ? degree + 1 : orderParam > degree + 1 ? degree + 1 : orderParam;
var totalKnots : uint = cp + degree + 1;

var i : uint;
var j : uint;
var baseVector : Vector.<Number> = new Vector.<Number>();
var knotsVector : Vector.<Number> = new Vector.<Number>();
var recurtion : uint;
var output : Vector3D = new Vector3D();
var f : Number;
var g : Number;
var rangoFinal : Number;
var t : Number;

//Build the clamped knot vector (repeat the first and last values "order" times)…
i = 0;
while(i < order) {
knotsVector[i] = 0;
i += 1;
}

i = order;
while(i < cp) {
knotsVector.push(i - order + 1);
i += 1;
}

i = cp;
while(i < totalKnots) {
knotsVector.push(cp - order + 1);
i += 1;
}
//Map the parameter p from [0, 1] to the valid knot range…
rangoFinal = knotsVector[cp + 1];
t = p * (rangoFinal);

//Generate the Basis Functions…
i = 0;
recurtion = totalKnots - 1;
while(i < recurtion) {
baseVector[i] = (knotsVector[i] <= t && t <= knotsVector[i + 1] && knotsVector[i] < knotsVector[i + 1]) ? 1 : 0;
i += 1;
}
recurtion -= 1;
j = 1;
while(j <= degree) {
i = 0;
while(i < recurtion) {
f = (knotsVector[i + j] - knotsVector[i]);
g = (knotsVector[i + j + 1] - knotsVector[i + 1]);
f = f != 0 ? (t - knotsVector[i]) / f : 0;
g = g != 0 ? (knotsVector[i + j + 1] - t) / g : 0;
baseVector[i] = f * baseVector[i] + g * baseVector[i + 1];
i += 1;
}
j += 1;
recurtion -= 1;
}

//Calculate the Rational points…
i = 0;
var divider : Number = 0;
while(i < cp) {
output.x += baseVector[i] * controlPoints[i].x * controlPoints[i].w;
output.y += baseVector[i] * controlPoints[i].y * controlPoints[i].w;
output.z += baseVector[i] * controlPoints[i].z * controlPoints[i].w;
divider += baseVector[i] * controlPoints[i].w;
i += 1;
}

output.x /= divider;
output.y /= divider;
output.z /= divider;
return output;
}
}
}

[/as]
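
A quick usage sketch (not in the original post; the control points are made up, and each weight is stored in the w component of the Vector3D, which is what the rational sum at the end of the function expects):

[as]
//Evaluate a cubic Nurbs curve with 5 control points at p = 0.4…
var cps : Vector.<Vector3D> = new Vector.<Vector3D>();
cps.push(new Vector3D(0, 0, 0, 1));
cps.push(new Vector3D(25, 80, 0, 1));
cps.push(new Vector3D(50, 0, 0, 2)); //a heavier weight pulls the curve towards this point
cps.push(new Vector3D(75, 80, 0, 1));
cps.push(new Vector3D(100, 0, 0, 1));
var point : Vector3D = Nurbs.nurbs(0.4, cps, 3);
[/as]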

Beziers (our approach…!)

12, Mar 2010/Categories AS3, General/3 Comments

Beziers

Bezier curves are widely used in Flash for many effects, from tweening a property to drawing complex curves and surfaces, but we have never found a class that would let us work with them the way we wanted to.

Our main goal in writing a class for Bezier manipulation was to have a tool that calculates Bezier interpolations for curves and surfaces of any degree, so the first thing to do was to study the mathematical model of the Bezier curves in this useful article. In the article the Bezier curves are treated as a combination of the Bernstein polynomials, which use the factorial function and therefore require many calculations for high degrees.

The previous problem is solved by calculating the coefficients of the curve for the given control points and saving them in a buffer Array that speeds up the calculation of further interpolations. As the article suggests, the interpolation of any curve is given in the [0 - 1] domain.
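
For reference (this formula is not spelled out in the post, but it is exactly what the bezier() function below evaluates), a Bezier curve of degree n with control points P_i is

$$ B(t) = \sum_{i=0}^{n} \binom{n}{i}\, t^{i}\, (1 - t)^{n - i}\, P_i, \qquad t \in [0, 1] $$

and the binomial coefficients are the values buffered in the combinatoriaData array.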

The class itself is this one:
[as]
package math {
import flash.geom.Vector3D;

/**
* @author miaumiau.cat
* Class that generates a Bezier parametrization
* of a path or of a surface.
*
*/
public class Bezier {

//Flag that determines whether the class works in 3D…
public static var _3D : Boolean = false;

//Pre-computed binomial coefficients (rows of Pascal's triangle) for each number of control points…
private static var combinatoriaData : Array = new Array();
combinatoriaData[0] = [0];
combinatoriaData[1] = [1];

//From 2 control points on we have the minimum number of points needed to interpolate
combinatoriaData[2] = new Array(1, 1);
combinatoriaData[3] = new Array(1, 2, 1);
combinatoriaData[4] = new Array(1, 3, 3, 1);
combinatoriaData[5] = new Array(1, 4, 6, 4, 1);
combinatoriaData[6] = new Array(1, 5, 10, 10, 5, 1);
combinatoriaData[7] = new Array(1, 6, 15, 20, 15, 6, 1);
combinatoriaData[8] = new Array(1, 7, 21, 35, 35, 21, 7, 1);
combinatoriaData[9] = new Array(1, 8, 28, 56, 70, 56, 28, 8, 1);
combinatoriaData[10] = new Array(1, 9, 36, 84, 126, 126, 84, 36, 9, 1);
combinatoriaData[11] = new Array(1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1);
combinatoriaData[12] = new Array(1, 11, 55, 165, 330, 462, 462, 330, 165, 55, 11, 1);

//Static variables that store the mesh segmentation values (this avoids recalculating the number of points…)
private static var Ne : uint = 0;
private static var Nn : uint = 0;
private static var paramsE : Array = new Array();
private static var paramsN : Array = new Array();

//Array that holds the Bezier curve coefficients for a fixed degree and a fixed parametrization (number of points).
private static var coeficientesCurva : Array = new Array();

//Function that evaluates one point of the Bezier curve for the parameter t…
public static function bezier(t : Number, controlPoints : Vector.<Vector3D>) : Vector3D {

var salida : Vector3D = new Vector3D;
var coeficientes : Array = new Array();
var i : uint;
var length : uint = controlPoints.length;
var n : uint = length - 1;

//If the binomial coefficients for this number of points are not defined yet, define them…
if(combinatoriaData[length] == undefined) {
combinatoriaData[length] = setCoeficients(n);
}

//Compute the Bernstein coefficients…
for (i = 0;i <= n; i++) {
coeficientes[i] = combinatoriaData[length][i] * Math.pow(t, i) * Math.pow((1 - t), (n - i));
}

//Get the output values of the Vector3D…
for(i = 0;i <= n; i++) {
salida.x += coeficientes[i] * controlPoints[i].x;
salida.y += coeficientes[i] * controlPoints[i].y;
salida.z += coeficientes[i] * controlPoints[i].z;
}
salida.w = 1;

return salida;
}

//Function that obtains a set of xyz points, grouped in one array, for a Bezier curve…
public static function bezierPoints(m : uint, controlPoints : Array, borderDistance : Number = -1) : Vector.<Vector3D> {
var salida : Vector.<Vector3D> = new Vector.<Vector3D>();
var length1 : uint = controlPoints.length;
var n : uint = controlPoints.length - 1;
var i : uint;
var j : uint;

//If the binomial coefficients for this number of points are not defined yet, define them…
if(combinatoriaData[length1] == undefined) {
combinatoriaData[length1] = setCoeficients(n);
}

//If there is no array storing the reference for this degree, define it…
if(coeficientesCurva[n] == undefined) {
coeficientesCurva[n] = new Array();
}

//If there is no array storing the coefficients for "m" points, define it…
//A four dimensional array is stored with the following layout…
//
//coeficientes[n][m][j][i] where:
//n : degree,
//m : number of points to parametrize,
//j : vector of coefficients for one parameter value in the range [0, 1]
//i : coefficients to multiply by each control point for the previous parametrization…
//

if(coeficientesCurva[n][m] == undefined) {
coeficientesCurva[n][m] = new Array();
for (j = 0;j < m; j++) {
//Define the parametrization value for this sample…
var delta : Number = j / (m - 1);
coeficientesCurva[n][m][j] = new Array();
for(i = 0;i <= n; i++) {
coeficientesCurva[n][m][j].push(combinatoriaData[length1][i] * Math.pow(delta, i) * Math.pow((1 - delta), (n - i)));
}
}
}

//Get the points that make up the output vector…
for(j = 0;j < m; j++) {
var pointer : Vector3D = new Vector3D(0, 0, 0, 0);
for(i = 0;i <= n; i++) {
pointer.x += coeficientesCurva[n][m][j][i] * controlPoints[i].x;
pointer.y += coeficientesCurva[n][m][j][i] * controlPoints[i].y;
pointer.z += coeficientesCurva[n][m][j][i] * controlPoints[i].z;
}
salida.push(pointer);
}

//If fixed borders are required…
if(borderDistance > 0) {
//Measure the curve length and express the border distance as a fraction of the [0, 1] parametrization
var length : Number = 0;
for(i = 1;i < salida.length; i++) {
length += Vector3D.distance(salida[i], salida[i - 1]);
}
var percentDistance : Number = borderDistance / length;
var centerDistance : Number = (1 - 2 * percentDistance) / (m - 3);
var parameters : Array = new Array();
var relativeCoeficients : Array = new Array();
parameters.push(0);
for(i = 0;i < m - 2; i++) {
parameters.push(percentDistance + i * centerDistance);
}
parameters.push(1);
//Compute the new group of coefficients to use according to the new parametrization…
for (j = 0;j < m; j++) {
relativeCoeficients.push(new Array());
for(i = 0;i <= n; i++) {
relativeCoeficients[j].push(combinatoriaData[controlPoints.length][i] * Math.pow(parameters[j], i) * Math.pow((1 - parameters[j]), (n - i)));
}
}
//Get the new points taking into account the required separations…
salida.length = 0;
for(j = 0;j < m; j++) {
var pointer1 : Vector3D = new Vector3D(0, 0, 0, 0);
for(i = 0;i <= n; i++) {
pointer1.x += relativeCoeficients[j][i] * controlPoints[i].x;
pointer1.y += relativeCoeficients[j][i] * controlPoints[i].y;
if(_3D) pointer1.z += relativeCoeficients[j][i] * controlPoints[i].z;
}
salida.push(pointer1);
}
}

return salida;
}

//Function that returns a Bezier surface of MxN control points (border and internal); it differs from the patch because the patch only allows control through the border curves (it has no internal points…)
public static function surface(u_cps : uint, points : Vector.<Vector3D>, Ne : uint = 10, Nn : uint = 10, force : uint = 1) : Vector.<Vector3D> {

var i : uint;
var j : uint;
var k : uint;
var r : uint;
var output : Vector.<Vector3D> = new Vector.<Vector3D>();
var v_cps : uint = points.length / u_cps;
var cp_curves : Array = new Array();

//Split the points to obtain the control points of each vertical curve…
for(i = 0; i < u_cps; i++) {
cp_curves[i] = new Vector.<Vector3D>();
for(j = i; j <= v_cps * (u_cps - 1) + i; j += u_cps) {
for(r = 0; r < force; r ++) {
cp_curves[i].push(points[j]);
}
}
}

//Generate the parameters if they are not defined yet (regenerate them if either segmentation value changed)…
if(Bezier.Ne != Ne || Bezier.Nn != Nn) {
Bezier.Ne = Ne;
Bezier.Nn = Nn;
paramsN = [];
paramsE = [];
for (i = 1;i <= Nn; i++) {
paramsN.push((i - 1) / (Nn - 1));
}

for (i = 1;i <= Ne; i++) {
paramsE.push((i - 1) / (Ne - 1));
}
}

//Generate the array of points…
var jMax : uint = paramsN.length;
var iMax : uint = paramsE.length;

for (j = 0;j < jMax; j++) {

//Set of points obtained by evaluating the vertical curves at the current parameter…
var resultant_curve : Vector.<Vector3D> = new Vector.<Vector3D>();
for(k = 0; k < cp_curves.length; k++) {
for(r = 0; r < force; r++) {
resultant_curve.push(bezier(paramsN[j], cp_curves[k]));
}
}

//From the points of the vertical curves a horizontal curve is built; evaluating it gives each point of the surface…
for (i = 0;i < iMax; i++) {
output.push(bezier(paramsE[i], resultant_curve));
}
}
return output;
}

//Function that obtains all the points of a mesh generated from its border curves…
//It returns the points ordered in the following way, assuming a 3X3 array…
//
// 0 1 2
// 3 4 5
// 6 7 8
//

public static function getPatch(Xt0 : Vector3D, Xt1 : Vector3D, Xb0 : Vector3D, Xb1 : Vector3D, BT : Array, BB : Array, BL : Array, BR : Array, Ne : uint = 10, Nn : uint = 10, borderSeparation : Number = -1) : Array {

var i : uint;
var points : Array = new Array();

//Get the points of the border curves used for the interpolation…
var b_top : Vector.<Vector3D> = Bezier.bezierPoints(Ne, BT, borderSeparation);
var b_bottom : Vector.<Vector3D> = Bezier.bezierPoints(Ne, BB, borderSeparation);
var b_left : Vector.<Vector3D> = Bezier.bezierPoints(Nn, BL, borderSeparation);
var b_right : Vector.<Vector3D> = Bezier.bezierPoints(Nn, BR, borderSeparation);

//Generate the parameters if they are not defined yet (regenerate them if either segmentation value changed)…
if(Bezier.Ne != Ne || Bezier.Nn != Nn) {
Bezier.Ne = Ne;
Bezier.Nn = Nn;
paramsN = [];
paramsE = [];
for (i = 1;i <= Nn; i++) {
paramsN.push((i - 1) / (Nn - 1));
}

for (i = 1;i <= Ne; i++) {
paramsE.push((i - 1) / (Ne - 1));
}
}

//Generate the array of points…
var j : uint;
var jMax : uint = paramsN.length;
var iMax : uint = paramsE.length;
for (i = 0;i < iMax; i++) {
for (j = 0;j < jMax; j++) {
points.push(TFI(paramsE[i], paramsN[j], b_top[i], b_bottom[i], b_left[j], b_right[j], Xt0, Xt1, Xb0, Xb1));
}
}
return points;
}

//Function that frees the memory held by the Bezier class…
public static function clearMemory() : void {
coeficientesCurva = [];
paramsN = [];
paramsE = [];
combinatoriaData = [];

coeficientesCurva = paramsN = paramsE = combinatoriaData = null;
}

//Function that performs a transfinite interpolation (TFI) for a group of four Bezier border curves.
//The corner points Xt0, Xt1, Xb0, Xb1 and the values e, n (defined from 0 to 1) are passed in order to go from
//the flat state to the one defined by the four curves… the goal is to generate the intermediate points P…
//
//
// Xt0 Xt Xt Xt Xt Xt Xt Xt1
// Xl Xr
// Xl Xr
// Xl P Xr
// Xl Xr
// Xl Xr
// Xl Xr
// Xl Xr
// Xb0 Xb Xb Xb Xb Xb Xb Xb1
//
//
private static function TFI(e : Number, n : Number, Xt : Vector3D, Xb : Vector3D, Xl : Vector3D, Xr : Vector3D , Xt0 : Vector3D, Xt1 : Vector3D, Xb0 : Vector3D, Xb1 : Vector3D) : Vector3D {

var TFIPoint : Vector3D = new Vector3D();

//Evaluate the interpolation for each coordinate of the point: x, y, z…
TFIPoint.x = (1 - n) * Xt.x + n * Xb.x + (1 - e) * Xl.x + e * Xr.x - (e * n * Xb1.x + e * (1 - n) * Xt1.x + n * (1 - e) * Xb0.x + (1 - n) * (1 - e) * Xt0.x);
TFIPoint.y = (1 - n) * Xt.y + n * Xb.y + (1 - e) * Xl.y + e * Xr.y - (e * n * Xb1.y + e * (1 - n) * Xt1.y + n * (1 - e) * Xb0.y + (1 - n) * (1 - e) * Xt0.y);
if(_3D) TFIPoint.z = (1 - n) * Xt.z + n * Xb.z + (1 - e) * Xl.z + e * Xr.z - (e * n * Xb1.z + e * (1 - n) * Xt1.z + n * (1 - e) * Xb0.z + (1 - n) * (1 - e) * Xt0.z);
TFIPoint.w = 1;
return TFIPoint;
}

//Function that builds the set of binomial coefficients when the number of points differs from the stored values…
private static function setCoeficients(n : Number) : Array {
var datos : Array = new Array();
var i : Number;
for (i = 0;i <= n; i++) {
datos.push(MathFunctions.combinatoria(n, i));
}
return datos;
}

//end of the class…
}
}
[/as]

Many of the variable names are in Spanish and several functions have so many parameters that it is somewhat difficult to understand what they do, so I'll try to explain every function and its implementation in this post and in upcoming posts. For now I'll comment on some important features of the class.

The combinatoriaData Array holds the binomial coefficients of the Bernstein polynomials, pre-calculated for degrees lower than 12; this helps to speed up the initial calculation of an interpolation. If you work with a higher degree, the program calculates the coefficients for that degree and saves the values in the array.
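
The MathFunctions.combinatoria(n, i) helper used by setCoeficients() is not shown in the post; a minimal sketch of a binomial coefficient function that could fill that role is given below (the name and signature come from the call in the class, the body is an assumption):

[as]
//Hypothetical sketch of the binomial coefficient helper C(n, i) used by setCoeficients()…
public static function combinatoria(n : Number, i : Number) : Number {
//Multiplicative form of C(n, i): avoids computing large factorials…
var result : Number = 1;
for(var k : Number = 1; k <= i; k++) {
result *= (n - i + k) / k;
}
return Math.round(result);
}
[/as]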

The bezier function is the simplest function of the class; it interpolates one single 3D point from a set of controlPoints and a parameter value between 0 and 1. You can use this function to get the interpolation point by point.
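
A small usage sketch (the control points are made up, only to show how the function is called):

[as]
//Sample a quadratic Bezier curve at its midpoint with Bezier.bezier()…
var quadCps : Vector.<Vector3D> = new Vector.<Vector3D>();
quadCps.push(new Vector3D(0, 0, 0));
quadCps.push(new Vector3D(50, 100, 0));
quadCps.push(new Vector3D(100, 0, 0));
var midPoint : Vector3D = Bezier.bezier(0.5, quadCps);
[/as]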

The bezierPoints function does the same thing but returns all the points of the interpolation in one array; note that the "m" value is the number of points you want to interpolate. Since all the points are equidistant in the parameter domain you can't control the distance from point to point along the curve, but there is one parameter, borderDistance, that at least lets you control the distance between the borders and the next / previous interpolated point, which is useful if you want a margin when dealing with surfaces (I promise I'll make an example of this, because it's difficult to understand just by reading it).
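
A hedged usage sketch (the values are made up; note that this function takes the control points as an Array):

[as]
//Get 20 points of a cubic curve, keeping a 5 px margin between each border and its neighbouring point…
var cubicCps : Array = [new Vector3D(0, 0, 0), new Vector3D(30, 80, 0), new Vector3D(70, 80, 0), new Vector3D(100, 0, 0)];
var curvePoints : Vector.<Vector3D> = Bezier.bezierPoints(20, cubicCps, 5);
[/as]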

The surface function does the same thing as Away3D's Bezier surface function, except that you can define the MxN control points as you want (not just a 4X4 surface). To use the function you pass the M dimension, the Vector with the MxN points and the segmentation for the U and V directions of the surface. The last parameter, "force", is the repetition of the control points in the surface to simulate weights for the given controlPoints; this makes the surface approximate the controlPoints more closely, behaving more like a Nurbs surface with the same controlPoints. The previous function is used to calculate the surface in this post.
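
A minimal usage sketch (the 3x3 control grid is made up; it only shows how the arguments map to the signature above):

[as]
//A 3x3 control grid flattened row by row into a single Vector…
var grid : Vector.<Vector3D> = new Vector.<Vector3D>();
for(var row : uint = 0; row < 3; row++) {
for(var col : uint = 0; col < 3; col++) {
grid.push(new Vector3D(col * 50, row * 50, (row == 1 && col == 1) ? 40 : 0));
}
}
//u_cps = 3 (the M dimension), 10 x 10 evaluated points, force = 1 (no simulated weights)…
var surfacePoints : Vector.<Vector3D> = Bezier.surface(3, grid, 10, 10, 1);
[/as]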

The getPatch function also returns a surface, but this surface has the property of being controlled only by the curves that define its borders; the internal values of the surface depend only on the borders and there are no internal controlPoints. This function is more complicated to use, but it is more versatile in some applications like transitions and deformations.
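
For reference (not in the original post), the internal points generated by the private TFI() function correspond to the standard bilinear transfinite interpolation of the four border curves:

$$ P(e, n) = (1 - n)\,X_t(e) + n\,X_b(e) + (1 - e)\,X_l(n) + e\,X_r(n) - \big[(1 - e)(1 - n)\,X_{t0} + e(1 - n)\,X_{t1} + (1 - e)\,n\,X_{b0} + e\,n\,X_{b1}\big] $$

where the corner terms subtract the contribution that is counted twice by the border curves.
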
Finally, we have written a simple implementation of this class using only the first function, Bezier.bezier(). If you want to see it, just click on the image above and relax!

Simple Surface Editor

08, Mar 2010/Categories 3D, AS3, General/1 Comment

Saddle

After we programmed the surface renderer for our last project, we decided to work a little more and implement a simple editor that would give us the opportunity to model simple objects based on a single surface. This editor lets us experiment further with materials and lighting, and it will also allow us to make simple morphing surface animations in the near future.

The above image is a saddle modeled with the editor; the initial surface is a 7 x 7 rational Bezier surface and all we did was move points around to get the shape done. The editor has a very easy interface: you can rotate the view by pressing on the screen and moving the mouse around, and when you press a control point you can drag it in 3D to adapt the shape to the convex hull made by the points.

There are some commands that enable you to view your model in different ways; these are:

  • "w": sets the wireframe mode.
  • "t": sets the texture mode.
  • "l": shows or hides the yellow guide lines.
  • "c": shows or hides the control points and the lights.
  • "r": resets the mesh and the lights.

Edition-mode

As you can see, you can model your object using the control points and the guide lines, switching to the wireframe view to see the mesh without lighting and to the texture mode to see how the lights affect the mesh. You can also move the lights at any moment to see how the shading changes with the light's position.

So far we have made two shapes with this editor: the first one is a simple attempt to make a basic surface with the shape of a mouse, and the second object is the saddle shown at the beginning of the post. The first mesh, the surface mouse, is shown in the next two images.

Mouse-editionMode

Mouse

Since the surface is a rational Bezier surface, there are some artifacts that happen at the borders with the texture; this is because the weight of the control points at the borders makes the surface tessellate with less spacing around them. If you want to give it a try, just press on the editor's image (next image) and have fun!…

Editor
