Radial Marking Menu for VR
Over the course of a two-week full-time research position, I developed a radial marking menu as a Unity package for use in VR applications. The menu is fully customizable by the user.
Navigation and selection are simple and intuitive: holding down the menu button opens the menu, and moving the controller highlights an option. Hovering over a submenu (marked with an outward arrow) takes you to that submenu, and releasing the menu button while hovering over an element activates it, triggering its corresponding Unity event. Because functions are activated through a single connected movement, users develop muscle memory and can eventually access tools and options without thinking.
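In rough terms, the selection step boils down to mapping the controller's offset from the point where the menu was opened onto an angular section, then firing that section's event on release. The sketch below shows the idea; the class and field names are illustrative, not the package's actual API, and it assumes the menu lies in its own local XY plane.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: maps the controller's offset from the menu centre
// to an angular section, and fires that section's UnityEvent on release.
public class MarkingMenuSelector : MonoBehaviour
{
    public Transform menuCentre;          // where the menu was opened
    public Transform controller;          // tracked controller transform
    public UnityEvent[] sectionEvents;    // one event per menu section
    public float deadZoneRadius = 0.03f;  // ignore tiny movements (metres)

    int currentSection = -1;

    void Update()
    {
        // Project the controller offset into the menu's local space.
        Vector3 offset = menuCentre.InverseTransformPoint(controller.position);
        Vector2 planar = new Vector2(offset.x, offset.y);

        if (planar.magnitude < deadZoneRadius)
        {
            currentSection = -1;          // hovering the centre selects nothing
            return;
        }

        // Convert the direction to an angle, then to a section index.
        float angle = Mathf.Atan2(planar.y, planar.x) * Mathf.Rad2Deg;
        if (angle < 0f) angle += 360f;
        float sectionSize = 360f / sectionEvents.Length;
        currentSection = Mathf.FloorToInt(angle / sectionSize);
    }

    // Called when the menu button is released.
    public void OnMenuButtonReleased()
    {
        if (currentSection >= 0)
            sectionEvents[currentSection].Invoke();
    }
}
```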
Users can create menu items with icons or labels that trigger public functions in the Unity scene, as well as create nested submenus to organize them. This allows the marking menu package to be used for practically any purpose.
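The configuration can be pictured roughly like this: each element carries a label, an optional icon, a UnityEvent to fire on activation, and optionally a nested submenu. This is only a sketch; the field names are assumptions rather than the package's actual serialized layout.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Events;

// Illustrative data layout for a customizable marking menu.
// Field names are assumptions, not the package's API.
[Serializable]
public class MenuElement
{
    public string label;                 // text shown when no icon is set
    public Sprite icon;                  // optional icon
    public UnityEvent onActivated;       // public function(s) to call in the scene
    public List<MenuElement> submenu;    // non-empty => this element opens a submenu
}

public class MarkingMenuConfig : MonoBehaviour
{
    public List<MenuElement> rootElements;   // edited in the Inspector
}
```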
To let the UI seamlessly accommodate user settings and change on the fly, I programmed a custom shader that renders the back of the UI entirely through code. The number of sections, which sections are submenus, and the size of the menu are all controlled by parameters passed to the shader.
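Driving that from script looks roughly like the snippet below, where the menu pushes the current settings to the material whenever they change. The shader property names here are hypothetical, not the ones the package actually uses.

```csharp
using UnityEngine;

// Sketch of driving a procedural menu-background shader from script.
// The property names ("_SectionCount", etc.) are hypothetical.
public class MenuBackgroundDriver : MonoBehaviour
{
    public Material menuMaterial;

    public void ApplySettings(int sectionCount, float radius, bool[] isSubmenu)
    {
        menuMaterial.SetInt("_SectionCount", sectionCount);
        menuMaterial.SetFloat("_Radius", radius);

        // Pack the submenu flags into a bitmask so the shader knows
        // which sections to draw with an outward arrow.
        int submenuMask = 0;
        for (int i = 0; i < isSubmenu.Length; i++)
            if (isSubmenu[i]) submenuMask |= 1 << i;
        menuMaterial.SetInt("_SubmenuMask", submenuMask);
    }
}
```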
I also wrote full documentation for the package, including a manual and a scripting API reference.
Cornell Tech
XR Collaboratory Research
Winter 2023
A video demonstration of the package. Primitive shapes (both in the background and attached to the tool) are used to demonstrate the menu’s functionality; selecting a shape in the menu triggers functions in the scene that teleport the corresponding background shape and attach that shape to the controller.
The package has an optional “expert mode” (left) where the menu starts out hidden, expanding to show all icons after a short delay (right). This encourages the development of muscle memory by discouraging reliance on reading icons, and lets expert users navigate and activate functions without ever opening the full menu, reducing distractions.
A large number of settings are exposed in the Inspector for users to customize.
A demonstration of the submenu system; hovering over “More Options” (right) will lead you to the left menu, while hovering over “Back” (left) will take you back to the right menu.
The menu customization system; the user can define an arbitrary number of menu elements, each with a name, icon, and event, as well as an arbitrary number of submenus.
Creative Tools for VR
For our final project in INFO5340: Virtual and Augmented Reality, I worked in a team of 4 to create a 3D modeling app in Unity for the Quest 2. We were tasked with creating a modeling app that replicated features and interaction techniques from Gravity Sketch, a popular VR app. Starting from Unity’s basic XR Interaction Toolkit, we implemented a broad set of features enabling complex navigation, selection, and manipulation of objects.
As the only member of our team with previous Unity experience, I took a leadership role, delegating tasks and helping my teammates learn to use the engine more effectively. I also assisted with debugging and troubleshooting throughout the project.
My other responsibilities included:
Implementing a 3D color visualizer to change object colors
Implementing undo/redo using a command-based system (see the sketch after this list)
Implementing an input action controller & state manager
Using Blender to design and model custom assets when needed
Making numerous design/UX improvements throughout all of our app’s features
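To illustrate the command-based approach to undo/redo, a minimal version looks something like the following. The class names and the example command are illustrative, not our app's actual code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal command-pattern undo/redo: every edit is an object that knows
// how to apply and revert itself; two stacks track the edit history.
public interface IEditCommand
{
    void Execute();
    void Undo();
}

// Example command: moving an object from one position to another.
public class MoveCommand : IEditCommand
{
    readonly Transform target;
    readonly Vector3 from, to;

    public MoveCommand(Transform target, Vector3 from, Vector3 to)
    {
        this.target = target; this.from = from; this.to = to;
    }

    public void Execute() => target.position = to;
    public void Undo()    => target.position = from;
}

public class EditHistory
{
    readonly Stack<IEditCommand> undoStack = new Stack<IEditCommand>();
    readonly Stack<IEditCommand> redoStack = new Stack<IEditCommand>();

    public void Do(IEditCommand command)
    {
        command.Execute();
        undoStack.Push(command);
        redoStack.Clear();            // new edits invalidate the redo branch
    }

    public void Undo()
    {
        if (undoStack.Count == 0) return;
        IEditCommand command = undoStack.Pop();
        command.Undo();
        redoStack.Push(command);
    }

    public void Redo()
    {
        if (redoStack.Count == 0) return;
        IEditCommand command = redoStack.Pop();
        command.Execute();
        undoStack.Push(command);
    }
}
```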
Our work was chosen as one of the top two projects from the class to be showcased at Cornell Tech’s open studio event.
This information can also be found on our project page.
Our feature walkthrough, where a team member describes and demonstrates all of the features we implemented.
Cornell Tech
INFO5340: Virtual and Augmented Reality
Fall 2023
Our app demonstration, where a simple car is built from scratch in 5 minutes.
Highlighted Features
Users can rotate the thumbstick to unwind time, traversing the scene’s edit history quickly and intuitively.
In mesh editing mode, users can grab vertices to edit primitive objects and create more complex meshes.
Users can easily change the color of grabbed objects by moving their controller within a 3D color visualizer.
In scaling mode, users can grab constrained axes to quickly adjust the size of objects.
Free Flowing Particles
A physics simulation created as part of a 4-member team. Particles of disparate mass and color swirl and collide in a contained environment, driven by invisible currents and wind tunnels drawn by the user.
The app was coded in Taichi, a domain-specific language embedded in Python that enables GPU-based parallel processing.
I was responsible for overall optimization, implementing improved logic and identifying errors in our algorithms, which made our application run up to 20-30 times faster at high particle counts. I implemented spatial subdivision in our collision algorithm and optimized user interactions to only iterate over nearby particles. These changes, among others, enabled us to massively increase the scale of our demonstrations, simulating up to half a million particles on a high-end laptop while maintaining 60 FPS.
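The actual implementation was written in Taichi and runs in parallel on the GPU, but the core spatial-subdivision idea is language-agnostic. A simplified, single-threaded 2D sketch (with invented names) looks like this: particles are binned into grid cells each frame, and collision or interaction queries only visit a particle's own cell and its immediate neighbours instead of all N particles.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of spatial subdivision for collision/interaction queries.
// Simplified 2D, single-threaded version for illustration only.
public class SpatialGrid
{
    readonly float cellSize;
    readonly Dictionary<Vector2Int, List<int>> cells = new Dictionary<Vector2Int, List<int>>();

    public SpatialGrid(float cellSize) { this.cellSize = cellSize; }

    Vector2Int CellOf(Vector2 position) =>
        new Vector2Int(Mathf.FloorToInt(position.x / cellSize),
                       Mathf.FloorToInt(position.y / cellSize));

    // Rebuild the grid each frame from the current particle positions.
    public void Rebuild(Vector2[] positions)
    {
        cells.Clear();
        for (int i = 0; i < positions.Length; i++)
        {
            Vector2Int cell = CellOf(positions[i]);
            if (!cells.TryGetValue(cell, out var list))
                cells[cell] = list = new List<int>();
            list.Add(i);
        }
    }

    // Only the query point's own cell and its 8 neighbours can contain
    // candidates, so each query touches O(local density) particles.
    public IEnumerable<int> Neighbours(Vector2 position)
    {
        Vector2Int centre = CellOf(position);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                if (cells.TryGetValue(centre + new Vector2Int(dx, dy), out var list))
                    foreach (int index in list)
                        yield return index;
    }
}
```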
Visualization of spatial subdivision cells.
Cornell University
CS5643: Physically Based Animation for Computer Graphics
Spring 2023
Drawn lines create wind tunnels, each of which only iterates over nearby spatial cells.
3D to Sketch
“Can we train a machine to take a 3D model and create 2D perspective sketches of it that look as if they were drawn by a human?”
A cGAN model created as part of research investigating the use of synthetic data for training sketch-to-3D ML algorithms. Given an image of a 3D structure and its normal map as input, the model outputs a sketch with realistic errors and deviations that appears to have been drawn by a human. The model avoids sketching fine details of the structure, so that any structure can be used to produce a simplistic sketch.
To generate the training data, I used the parametric modeling program Rhino and its visual scripting language, Grasshopper. I generated random structures with both “complex” and “simple” details, then created two images: one of the “complex” model and one of the “simple” sketch. The ground truth sketches are generated using a mathematical approach to approximating human sketches (see bottom right), referenced from this paper. With this method, I could generate thousands of training pairs in a few hours, vastly speeding up the process of training the model.
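The snippet below is a toy illustration of the general flavour of “humanizing” a clean wireframe line with smooth wobble and slight endpoint overshoot; it is not the referenced paper’s method, and every parameter and the noise model here are invented for illustration.

```csharp
using UnityEngine;

// Toy illustration only: perturb the sample points of a clean wireframe
// segment with smooth low-frequency noise plus slight endpoint overshoot,
// so the result reads as hand-drawn. NOT the referenced paper's method.
public static class SketchDeviation
{
    public static Vector2[] Humanize(Vector2 start, Vector2 end,
                                     int samples = 32,
                                     float wobble = 0.01f,
                                     float overshoot = 0.02f)
    {
        // Extend the stroke slightly past both endpoints, as humans tend to do.
        Vector2 direction = (end - start).normalized;
        start -= direction * overshoot;
        end   += direction * overshoot;

        Vector2 normal = new Vector2(-direction.y, direction.x);
        var points = new Vector2[samples];
        float seed = Random.value * 100f;

        for (int i = 0; i < samples; i++)
        {
            float t = i / (samples - 1f);
            // Smooth Perlin-noise offset perpendicular to the stroke.
            float offset = (Mathf.PerlinNoise(seed, t * 3f) - 0.5f) * 2f * wobble;
            points[i] = Vector2.Lerp(start, end, t) + normal * offset;
        }
        return points;
    }
}
```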
The model was then built using pix2pix, a common network architecture for image-to-image translation. The generated sketches reliably captured “major” edges while avoiding fine details that were not critical to the overall shape.
Complex model generation process.
Cornell University
Independent Research
Spring 2022
Adding realistic deviation to a wireframe sketch.