Free Flowing Particles

A physics simulation created as part of a 4-member team. Particles of disparate mass and color swirl and collide in a contained environment, driven by invisible currents and wind tunnels drawn by the user.

The app was coded in Taichi, a domain-specific language embedded in Python that enables GPU-based parallel processing.
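As a minimal illustration (not the project's code; the field names and particle count are hypothetical), the top-level loop inside a Taichi kernel is automatically parallelized across GPU threads:

```python
import taichi as ti

ti.init(arch=ti.gpu)  # falls back to CPU if no compatible GPU is found

n = 4096  # hypothetical particle count
pos = ti.Vector.field(2, dtype=ti.f32, shape=n)  # particle positions
vel = ti.Vector.field(2, dtype=ti.f32, shape=n)  # particle velocities

@ti.kernel
def step(dt: ti.f32):
    # Taichi parallelizes this outermost loop on the GPU.
    for i in range(n):
        pos[i] += vel[i] * dt
```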

I was responsible for overall optimization: improving our logic and fixing errors in our algorithms, which made the application run 20-30 times faster at high particle counts. I implemented spatial subdivision in our collision algorithm and optimized user interactions to iterate only over nearby particles. These changes, among others, enabled us to massively increase the scale of our demonstrations, simulating up to half a million particles on a high-end laptop while maintaining 60 FPS.
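The spatial subdivision idea, in simplified form (a hypothetical uniform grid over the unit square, not our exact implementation): particles are binned into cells each frame, and collision checks only visit the 3x3 block of cells around each particle instead of all n particles.

```python
import taichi as ti

ti.init(arch=ti.gpu)

n = 500_000
grid_res = 128               # hypothetical grid covering the unit square
cell_size = 1.0 / grid_res   # cell edge should be >= the collision radius
max_per_cell = 64            # fixed per-cell capacity, illustrative

pos = ti.Vector.field(2, dtype=ti.f32, shape=n)
cell_count = ti.field(ti.i32, shape=(grid_res, grid_res))
cell_items = ti.field(ti.i32, shape=(grid_res, grid_res, max_per_cell))

@ti.kernel
def build_grid():
    for i, j in cell_count:
        cell_count[i, j] = 0
    for p in range(n):
        ci = ti.min(int(pos[p].x / cell_size), grid_res - 1)
        cj = ti.min(int(pos[p].y / cell_size), grid_res - 1)
        slot = ti.atomic_add(cell_count[ci, cj], 1)
        if slot < max_per_cell:
            cell_items[ci, cj, slot] = p

@ti.kernel
def collide():
    for p in range(n):
        ci = ti.min(int(pos[p].x / cell_size), grid_res - 1)
        cj = ti.min(int(pos[p].y / cell_size), grid_res - 1)
        # Visit only the 3x3 neighborhood of cells, not all n particles.
        for di in range(-1, 2):
            for dj in range(-1, 2):
                ni = ci + di
                nj = cj + dj
                if 0 <= ni and ni < grid_res and 0 <= nj and nj < grid_res:
                    for k in range(ti.min(cell_count[ni, nj], max_per_cell)):
                        q = cell_items[ni, nj, k]
                        if q != p:
                            pass  # resolve the p-q collision here
```

`build_grid()` runs once per frame before `collide()`; the same binning is what lets user-drawn interactions touch only the cells they overlap.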

Visualization of spatial subdivision cells.

Cornell University
CS5643: Physically Based Animation for Computer Graphics
Spring 2023

Drawn lines create wind tunnels, each of which iterates only over nearby spatial cells.

3D to Sketch

“Can we train a machine to take a 3D model and create 2D perspective sketches of it that look as if they were drawn by a human?”

A cGAN model created as part of research investigating the use of synthetic data for training sketch-to-3D ML algorithms. Given an image of a 3D structure and its normal map as input, the model outputs a sketch with realistic errors and deviations, as if drawn by a human. The model avoids sketching fine details of the structure, so any structure can be reduced to a simple sketch.

To generate the training data, I used the parametric modeling program Rhino and its visual scripting language, Grasshopper. I generated random structures with both “complex” and “simple” details, then created two images: one of the “complex” model, and one of the “simple” sketch. The ground-truth sketches are generated using a mathematical approach to approximating human sketches (see bottom right), referenced from this paper. With this method, I could generate thousands of training pairs in a few hours, vastly speeding up the creation of training data for the model.
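The referenced paper's exact formulation isn't reproduced here, but the core idea can be sketched roughly (illustrative code; the `hand_drawn_segment` helper and its parameters are hypothetical): each straight wireframe edge is resampled with a slight endpoint overshoot and a low-frequency perpendicular wobble, so the line reads as hand-drawn.

```python
import numpy as np

def hand_drawn_segment(p0, p1, samples=32, wobble=0.01, overshoot=0.03, rng=None):
    """Perturb a straight edge so it reads as hand-drawn (a simplified
    stand-in for the paper's method: overshoot plus smooth noise)."""
    rng = rng or np.random.default_rng()
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Human strokes often overshoot corners slightly.
    t = np.linspace(-rng.uniform(0, overshoot),
                    1 + rng.uniform(0, overshoot), samples)
    pts = p0 + t[:, None] * d
    # Low-frequency wobble perpendicular to the stroke direction.
    normal = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)
    amp = rng.uniform(0, wobble) * np.linalg.norm(d)
    pts += np.sin(2 * np.pi * t + rng.uniform(0, 2 * np.pi))[:, None] * normal * amp
    return pts  # (samples, 2) polyline approximating a hand-drawn stroke
```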

The model was then built on pix2pix, a common network architecture for image-to-image translation. The generated sketches reliably captured “major” edges while omitting fine details that were not critical to the overall shape.
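In pix2pix terms (a schematic PyTorch sketch, not our training code; `D` stands for a PatchGAN-style discriminator and `cond` for the rendered structure image concatenated with its normal map), the discriminator judges (condition, sketch) pairs, while the generator is pushed both to fool it and to match the ground truth under an L1 term:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def d_loss(D, cond, real_sketch, fake_sketch):
    # The discriminator always sees the conditioning image paired with a sketch.
    pred_real = D(torch.cat([cond, real_sketch], dim=1))
    pred_fake = D(torch.cat([cond, fake_sketch.detach()], dim=1))
    return (bce(pred_real, torch.ones_like(pred_real))
            + bce(pred_fake, torch.zeros_like(pred_fake)))

def g_loss(D, cond, real_sketch, fake_sketch, lambda_l1=100.0):
    # Fool the discriminator, and stay close to the ground-truth sketch.
    pred_fake = D(torch.cat([cond, fake_sketch], dim=1))
    return (bce(pred_fake, torch.ones_like(pred_fake))
            + lambda_l1 * l1(fake_sketch, real_sketch))
```

The L1 weight of 100 is the default from the original pix2pix paper; it is what keeps outputs aligned with the input structure rather than merely plausible.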

Complex model generation process.

Cornell University
Independent Research
Spring 2022

Adding realistic deviation to a wireframe sketch.