de Casteljau’s algorithm is basically a recursive linear interpolation of the control points of a curve, ending in a function that can be evaluated at some parameter t. t is a variable from 0 to 1, and represents the fraction of the distance along the segment between two points. What this means is that for every successive pair of control points, we linearly interpolate between them using t as the weight. At every level, this produces a new set of points, one fewer than the previous level. At the final level, a single point is left, which is the point on the final Bezier curve at t fraction of the way along the curve.
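As a sketch of the idea, here is a hypothetical scalar version (in practice each control point is a 2D or 3D vector and the same lerp is applied per coordinate):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// de Casteljau on scalar control values: repeatedly lerp each successive
// pair of points at t; every pass shrinks the set by one until a single
// value remains, which is the curve evaluated at t.
double deCasteljau(std::vector<double> pts, double t) {
    while (pts.size() > 1) {
        for (size_t i = 0; i + 1 < pts.size(); ++i)
            pts[i] = (1 - t) * pts[i] + t * pts[i + 1];
        pts.pop_back();
    }
    return pts[0];
}
```

For example, with control values 0, 0, 1, 1 the levels at t = 0.5 are {0, 0.5, 1}, then {0.25, 0.75}, then the single value 0.5.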
Evaluating a Bezier surface using de Casteljau’s algorithm is a relatively straightforward adaptation of the 1D algorithm into another dimension. First, we take a surface represented by 16 control points. These points come from extending the 4 points that describe a cubic Bezier curve into a 2D array, i.e. a 4x4 grid described by 16 points. We then run the de Casteljau algorithm in two ‘directions’ - the first evaluates curves along one axis of the grid, resulting in 4 Bezier curves. Each of these curves can be seen as a function of a parameter u, which plays exactly the role of the parameter t from the previous part. We then perform the same evaluation along the other axis, except this time the 4 points that describe our curve come from evaluating the previous 4 functions at some u. Then, we introduce the parameter v (again, exactly analogous to t) to sweep out this meta-curve, which traces the 2D surface that is then rendered in 3D.
Performing the computation itself was also relatively straightforward - a chain of successive lerps. Rather than using a loop in `evaluate1D`, I just hardcode the three levels of lerps that reduce the 4 points to a single point. The final `evaluate` function is basically identical, except the first level of lerps is actually calls to `evaluate1D`. This is, in my opinion, faster to grok than thinking about the loop.
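A sketch of what that hardcoded structure might look like (the `Point3` type, `lerp3` helper, and exact signatures are illustrative, not the assignment’s actual API):

```cpp
#include <cassert>
#include <cmath>

struct Point3 { double x, y, z; };

// Linear interpolation between two points, weight t on b.
Point3 lerp3(const Point3& a, const Point3& b, double t) {
    return {(1 - t) * a.x + t * b.x,
            (1 - t) * a.y + t * b.y,
            (1 - t) * a.z + t * b.z};
}

// Hardcoded three levels of lerps: 4 points -> 3 -> 2 -> 1.
Point3 evaluate1D(const Point3 p[4], double t) {
    Point3 a = lerp3(p[0], p[1], t);
    Point3 b = lerp3(p[1], p[2], t);
    Point3 c = lerp3(p[2], p[3], t);
    Point3 d = lerp3(a, b, t);
    Point3 e = lerp3(b, c, t);
    return lerp3(d, e, t);
}

// Surface evaluation: the first "level" is four calls to evaluate1D along
// u; the 4 resulting points describe the curve that v then sweeps.
Point3 evaluate(const Point3 grid[4][4], double u, double v) {
    Point3 col[4] = { evaluate1D(grid[0], u), evaluate1D(grid[1], u),
                      evaluate1D(grid[2], u), evaluate1D(grid[3], u) };
    return evaluate1D(col, v);
}
```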
In all honesty, I have no idea why this works. I implemented the function similarly to how `Face::normal` is implemented and debugged it until it worked and returned the result I expected. I couldn’t tell you why this normalization operation actually makes the mesh smoother.
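For what it’s worth, the usual explanation is that the cross product of two triangle edge vectors has magnitude twice the triangle’s area, so summing the un-normalized cross products area-weights each face’s contribution before the final normalize. A hedged sketch of that idea (the `Vec3` type and fan-of-neighbors interface are illustrative, not the project’s mesh API):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Area-weighted vertex normal: sum the cross products of each incident
// face's edge vectors (magnitude = 2 * face area, direction = face normal),
// then normalize. Larger faces pull the normal more, which is what makes
// the shading vary smoothly across the mesh.
Vec3 vertexNormal(const Vec3& center, const std::vector<Vec3>& ring) {
    Vec3 n{0, 0, 0};
    for (size_t i = 0; i < ring.size(); ++i) {
        const Vec3& a = ring[i];
        const Vec3& b = ring[(i + 1) % ring.size()];
        n = n + cross(a - center, b - center);
    }
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```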
Implementing this part was relatively straightforward, if time-consuming and detail-oriented. I followed the HalfedgeOp Implementation Guide that was linked in the resources to the letter, and didn’t bother trying to do any optimizations, as the compiler will be able to optimize the unnecessary calls out anyway. Debugging involved painstakingly tracing every single pointer change, making sure each pointed to the right element afterwards, and making sure to use the pre-existing pointers for halfedges on the ‘outside’ of the current mesh.
Implementing this required a lot of detail-oriented, explicit writing down of pointers and new locations. Several friends and I got together to write out the entire series of transformations on the original mesh on a whiteboard, and then we wrote out the next, twin, vertex, edge, and face for every halfedge in the transformed mesh, which became the basis of our solutions for this part. See part 6 for more details on the steps that needed to be undertaken to fully complete this part, as our original efforts were somewhat inadequate.
There were two parts to implementing this part. The first was simply implementing the algorithm as suggested in the ‘recipe’: computing the neighbor sums and the various `newPosition`s, several serial loops to perform all the iterations and calculations necessary, performing the splits, then flipping the requisite edges, and assigning the previously calculated positions to the vertices. Writing the algorithm itself was relatively straightforward and not particularly complicated. What it surfaced, however, were issues in the part 5 solution. Friends and I went back to the (literal) whiteboard and re-derived the entire set of transformations with a few modifications; when this failed, we abandoned the ‘write out the entire set’ approach and focused only on the halfedges that were actually being changed, namely the 6 new inner halfedges and the parts of 6 other halfedges we needed to update to reflect the new elements. After several hours of checking variable names and pointers, we were able to exorcise the deformities from our cubes and tori. This left a single problem in the cube: the one degree-3 vertex became inverted, resulting in a spike at higher sampling levels. This was a result of using `3/8` as the value for `neighbors = 3` in the `neighbor_sum` calculation instead of `3/16`. Rectifying this fixed the last problem.
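The weight rule in question, as a small sketch (function names are illustrative; shown in one dimension, applied per coordinate in practice):

```cpp
#include <cassert>
#include <cmath>

// Loop subdivision weight u for an original vertex of degree n:
// u = 3/16 when n == 3, else 3 / (8n). Using 3/8 for the degree-3 case
// (the bug described above) over-weights the neighbors and produces the
// inverted spike.
double loopWeight(int n) {
    return (n == 3) ? 3.0 / 16.0 : 3.0 / (8.0 * n);
}

// Updated position of an original vertex:
// (1 - n*u) * originalPosition + u * neighborSum.
double newPosition1D(int n, double original, double neighborSum) {
    double u = loopWeight(n);
    return (1 - n * u) * original + u * neighborSum;
}
```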