Cheap, Rapid, Iterative Prompt-Testing with Parallel Multi-Kernel Parasolid Code Generation Across a Variety of Code-CAD Packages

It is hard to learn how to use an LLM semantically when each query carries a high cost.

How might we first qualify the structure by which we write text-to-CAD prompts before ‘spending’ the expensive query budget?

Perhaps we might embed a window that takes a ‘test prompt’ and has an LLM generate OpenSCAD, CadQuery, Build123D, etc. code, whereby the breadth of the interpreted prompt can be read by the user in depth, so they might learn whether they are providing too little or too much detail in their prompts.
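
As a rough illustration of what such a test window might do under the hood, here is a minimal sketch that fans one test prompt out to an LLM once per Code-CAD dialect in parallel, so the user can compare how the same prompt is interpreted across packages. The model name, the system prompts, and the choice of the OpenAI client are assumptions for illustration, not a description of any existing tool.

# Minimal sketch: fan one test prompt out to an LLM per Code-CAD dialect.
# Assumptions: the OpenAI Python client is installed and OPENAI_API_KEY is
# set; the model name is a placeholder.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()
DIALECTS = ["OpenSCAD", "CadQuery", "Build123D"]

def generate_codelet(dialect: str, test_prompt: str) -> str:
    """Ask the LLM to interpret the same prompt as code in one dialect."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Translate the user's CAD request into {dialect} code only."},
            {"role": "user", "content": test_prompt},
        ],
    )
    return response.choices[0].message.content

def test_prompt_breadth(test_prompt: str) -> dict[str, str]:
    """Run all dialects in parallel and return dialect -> generated code."""
    with ThreadPoolExecutor(max_workers=len(DIALECTS)) as pool:
        futures = {d: pool.submit(generate_codelet, d, test_prompt) for d in DIALECTS}
        return {d: f.result() for d, f in futures.items()}

if __name__ == "__main__":
    for dialect, code in test_prompt_breadth(
            "a cylindrical tank, radius 1167.54 mm, height 2335.09 mm").items():
        print(f"--- {dialect} ---\n{code}\n")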

For example, I can use codelets like:

// Parameters for the cylinder tank vessel
radius = 1167.54; // mm
height = 2335.09; // mm

// Cylinder creation
cylinder(h = height, r = radius, $fn = 100);

in the OpenSCAD Cloud browser viewer from Autodrop3d to quickly verify that the codelet works.
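
For comparison, a rough Build123D equivalent of the same codelet might look like this (a sketch assuming Build123D's builder-mode API; the output filename is arbitrary):

# Rough Build123D equivalent of the OpenSCAD codelet above.
# Assumes the build123d package; the STL filename is arbitrary.
from build123d import BuildPart, Cylinder, export_stl

radius = 1167.54  # mm
height = 2335.09  # mm

with BuildPart() as tank:
    Cylinder(radius=radius, height=height)

export_stl(tank.part, "tank.stl")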

A few thoughts on this.

  1. In the context of neThing.xyz, the expensive part is the LLM inference. So if you get close to something with the prompt, but it’s not quite right, then the free path forward is to edit the code manually.

  2. Some of what you are after would be helped tremendously by just adding an “edit” mode, where there is a chat history. (Right now each prompt is a separate LLM instance without any memory; see the first sketch after this list.)

  3. One way that I have been playing around with collaborating with AI on “code CAD” is using https://cursor.sh. You can even install the OCP CAD Viewer extension if you want to preview the CadQuery or Build123D geometries locally (see the second sketch after this list).
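
On point 2, here is a minimal sketch of what an edit mode with memory amounts to, assuming the OpenAI chat API (the model name is a placeholder): the client keeps the message history and appends each follow-up turn, instead of starting a fresh, memoryless completion per prompt.

# Minimal sketch of an "edit" mode (point 2): keep the chat history and
# append each follow-up turn. Assumes the OpenAI Python client; the model
# name is a placeholder.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "Reply with OpenSCAD code only."}]

def prompt_with_memory(user_text: str) -> str:
    """Append the user turn, get a reply, and retain both in the history."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

first = prompt_with_memory("A cylindrical tank, radius 1167.54 mm, height 2335.09 mm.")
# Because the history is retained, an edit can reference the previous result:
edited = prompt_with_memory("Now make the walls 10 mm thick and open the top.")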

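And on point 3, the local preview loop might look roughly like this, assuming the ocp_vscode helper package that accompanies the OCP CAD Viewer extension, with CadQuery as the modeling library:

# Rough local-preview workflow (point 3): model in CadQuery, preview in the
# OCP CAD Viewer panel. Assumes the cadquery and ocp_vscode packages.
import cadquery as cq
from ocp_vscode import show

# The same tank as above: cylinder(height, radius) on the XY workplane.
tank = cq.Workplane("XY").cylinder(2335.09, 1167.54)

show(tank)  # renders the geometry in the viewer panel inside the editor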

Cool!