• More Special Variable Functions for prompt engineering: perhaps with the “--” delimiter as in Midjourney, so that we can quasi-codify the outcomes to be more predictable
• Context-based Text-to-CAD: perhaps we can use something like ReactFlow or SvelteFlow as a UI basis to enable building text-to-CAD assemblies in the browser
• Model Metadata readouts: the most contextual analogue I can think of is Minecraft’s debug overlay; it’s a similar, data-rich UI feature that I think would be great for reading out properties of a given model, like total surface area, volume, etc.
• Ability to save/delete models (perhaps this already exists with the QR-code function)
• Don’t tax users for non-computed models: I’m burning through my n/30 on the free tier fast by testing the extent of the capabilities, but I can’t learn enough because failed generations just return null feedback.
Can you clarify what this would help you achieve? I’m not sure I fully understand.
Right now it’s better at single parts than assemblies, for various reasons, including that the tools it is built on are also designed for single objects. Assemblies are a cool idea and would probably need to be their own mode.
This is a great idea and was on my list. At the very least, it shouldn’t be hard to add bounding-box dimensions and model volume.
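A minimal sketch of how those two readouts could be computed from a triangle mesh, in pure Python. The mesh representation (vertex/face lists) and the function names are my assumptions for illustration, not the app’s actual API; the volume formula is the standard signed-tetrahedron (divergence theorem) method, which requires a closed, consistently wound mesh.

```python
def bounding_box(vertices):
    """Axis-aligned bounding box: (min_corner, max_corner)."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def volume(vertices, faces):
    """Volume of a closed triangle mesh via signed tetrahedra
    anchored at the origin; faces must be consistently wound."""
    total = 0.0
    for i, j, k in faces:
        (ax, ay, az) = vertices[i]
        (bx, by, bz) = vertices[j]
        (cx, cy, cz) = vertices[k]
        # Scalar triple product a . (b x c) / 6 = signed tet volume
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx)) / 6.0
    return abs(total)

# Sanity check on a unit cube: 8 vertices, 12 outward-wound triangles
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [(0, 2, 6), (0, 6, 4),   # bottom (z = 0)
         (1, 5, 7), (1, 7, 3),   # top (z = 1)
         (0, 1, 3), (0, 3, 2),   # x = 0
         (4, 6, 7), (4, 7, 5),   # x = 1
         (0, 4, 5), (0, 5, 1),   # y = 0
         (2, 3, 7), (2, 7, 6)]   # y = 1
print(bounding_box(verts))  # ((0, 0, 0), (1, 1, 1))
print(volume(verts, faces))  # 1.0
```

Total surface area would follow the same pattern (sum of per-triangle cross-product magnitudes over the face list).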
Re: saving - I’m working out the per-user database permissions, but for now just click the share link and copy/paste somewhere.
Re: delete - this will be available once I can set the database to only allow users to access their own creations.
This is great feedback. I think the best way to solve it is simply to reduce the error rate. The challenge with not charging for errors is that I still have to pay OpenAI for inference. There is a very strong cost vs. performance function for LLMs, so making the Free Tier LLM cheaper will just mean even fewer good results. Also, the more Pro subscribers that we have, the more generous we’ll be able to be with the Free Tier.
Can you clarify what this would help you achieve? I’m not sure I fully understand.
At the end of a prompt, it would be neat to be able to append qualifiers that classify or increase the specificity of the returned model in a programmatic fashion, something of a global parameter system.
Some examples might be:
“-- wt: 18ga” (returns a sheet metal part with a constant wall thickness of 18 gauge or 1.2mm)
“-- face_mate “McMaster-Carr”” (RAG will retrieve a COTS component, parse the .step to get mating parameters [in this case: flange diameter, fastener pattern, fastener through-hole? {y/n}, fastener diameter, fastener thread pitch] and return a part that mates with it at some orifice specified in the prompt; perhaps it might embed the COTS part)
“-- def_nested_<{user_string_in_prompt}> “McMaster-Carr”” (user can define a nested subpart, such as a COTS gasket, which gets subsumed into the returned part as static geometry)
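The modifier syntax above could be sketched as a small prompt pre-parser: split the prompt on “--”, collect each modifier as a key/value pair, and normalize known keys (e.g. the 18-gauge → 1.2 mm mapping from the first example). The parsing rules and the gauge table are my assumptions for illustration, not a spec.

```python
import re

# Sheet-metal gauge -> wall thickness in mm ("18ga or 1.2mm" per the
# example above; other entries are illustrative standard steel gauges)
GAUGE_MM = {"18ga": 1.2, "16ga": 1.5, "14ga": 1.9}

def parse_prompt(prompt):
    """Split a prompt into base text plus '--' modifiers, e.g.
    'motor bracket -- wt: 18ga -- face_mate "McMaster-Carr"'."""
    parts = [p.strip() for p in prompt.split("--")]
    base, modifiers = parts[0], {}
    for chunk in parts[1:]:
        # Accept both 'key: value' and 'key "value"' forms
        m = re.match(r'([^\s:]+)\s*[:\s]\s*"?([^"]+)"?', chunk)
        if m:
            key, value = m.group(1), m.group(2).strip()
            if key == "wt":
                value = GAUGE_MM.get(value, value)  # normalize gauge to mm
            modifiers[key] = value
    return base, modifiers

base, mods = parse_prompt(
    'motor bracket -- wt: 18ga -- face_mate "McMaster-Carr"')
print(base)  # motor bracket
print(mods)  # {'wt': 1.2, 'face_mate': 'McMaster-Carr'}
```

The `def_nested_<…>` form would layer on top of this: after extracting the key, a prefix match on `def_nested_` would route the user-defined substring and its value to whatever RAG/COTS-retrieval step handles nested subparts.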