ChatGPT can write g-code

It’s probably not particularly useful, but ChatGPT can write g-code. I asked it to write code to cut a 6" 5-point star centered on a .25"-thick workpiece, and it wrote workable code.
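For anyone curious what generating such a toolpath involves, here is a minimal Python sketch that emits G-code for a pentagram outline. Everything here is an assumption for illustration (feed rate, safe height, a single full-depth pass, no tool-radius compensation); it is not the code ChatGPT actually produced.

```python
import math

def star_gcode(diameter=6.0, depth=0.25, feed=10.0, safe_z=0.5):
    """Emit G-code (inches, absolute) tracing a 5-point star centered at X0 Y0."""
    r_out = diameter / 2
    # Classic pentagram ratio between inner and outer vertices (~0.382).
    r_in = r_out * math.sin(math.radians(18)) / math.sin(math.radians(54))
    pts = []
    for i in range(10):
        r = r_out if i % 2 == 0 else r_in
        a = math.radians(90 + i * 36)          # start at the top point
        pts.append((r * math.cos(a), r * math.sin(a)))
    lines = ["G20 G90",                         # inch units, absolute positioning
             f"G0 Z{safe_z}",                   # retract to safe height
             f"G0 X{pts[0][0]:.4f} Y{pts[0][1]:.4f}",
             f"G1 Z{-depth} F{feed}"]           # plunge to cutting depth
    for x, y in pts[1:] + [pts[0]]:             # trace the outline and close it
        lines.append(f"G1 X{x:.4f} Y{y:.4f} F{feed}")
    lines.append(f"G0 Z{safe_z}")               # retract when done
    return "\n".join(lines)

print(star_gcode())
```

A real job would add spindle start/stop, multiple depth passes, and cutter compensation, which is exactly the kind of detail a CAM package handles for you.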


Hey quesohusker,

I ask myself when these useless ChatGPT posts will stop. You are right, it would not be particularly useful. Why would you need a helper tool to write g-code? 1. It is such a simple language, and 2. every CAD/CAM package spits out a nicer toolpath that actually implements what you wanted to do.

By the way:


Massive quality deficiencies in ChatGPT as a programming aid

A study reveals significant quality problems in ChatGPT’s answers to programming questions and warns against the use of AI chatbots in software development.

May 26, 2024, 11:54 a.m. , Andreas Donath

A study conducted by researchers at Purdue University has shed light on the accuracy and quality of ChatGPT in answering programming questions typically found on Stack Overflow. The results, published in the proceedings of the CHI Conference on Human Factors in Computing Systems, raise significant concerns about the accuracy and conciseness of ChatGPT’s answers.

The research team led by Samia Kabir, David N. Udo-Imeh, Bonan Kou, and Tianyi Zhang conducted an in-depth analysis of 517 programming questions from Stack Overflow. They compared ChatGPT’s answers to these questions with the accepted answers given by human programmers on the platform.

The study used a mixed methods approach that combined manual analysis, linguistic analysis and user studies to gain comprehensive insights into ChatGPT’s performance.

ChatGPT’s struggle with accuracy

The manual analysis revealed that 52 percent of ChatGPT’s responses contained incorrect information. The researchers identified four main types of errors: conceptual (54 percent), factual (36 percent), code (28 percent), and terminology (12 percent). These results suggest that ChatGPT struggles to fully grasp the nuances within and context of programming questions, resulting in a high rate of inaccuracy in responses.

Interestingly, the study found that the frequency and age of programming questions influenced the accuracy of ChatGPT’s responses. Answers to common and older questions tended to have fewer errors than answers to less common and newer ones. This suggests that ChatGPT’s performance depends to some extent on the amount of information available in its training data.

The researchers also found that ChatGPT’s responses were often verbose: 77 percent contained redundant, irrelevant, or unnecessary information. This verbosity was particularly evident in responses to conceptual questions and how-to instructions. In contrast, debugging responses were more concise, even if they did not always match the human answers.

Despite the high rate of inaccuracy, the user study found that participants preferred ChatGPT’s answers in 35 percent of cases. The reasons given for this were the comprehensive and well-formulated nature of ChatGPT’s responses and its polite language. However, participants also overlooked misinformation in ChatGPT’s responses 39 percent of the time, highlighting the potential risks associated with relying on AI-generated responses.

The results of the study emphasize the need for caution when using ChatGPT for programming-related tasks. The researchers emphasized the importance of verifying the correctness of ChatGPT responses before using them in real-world scenarios. They also called for further research into methods to detect and mitigate misinformation generated by AI chatbots. ■



Imagine that. Aiph5u has an opinion.

About the ‘study’: I’m gonna guess you didn’t bother to go further and find the actual paper it references. My German is rusty, but I note a few issues with their conclusions. First, nowhere in the paper does it say which version of ChatGPT they used. The earliest the work could have been done was 2023, so I would guess 3 or 3.5. Both are SIGNIFICANT downgrades from the currently commercially available ChatGPT 4. And besides that, no one is using gen AI to help them understand code. They are using it to write code. And that is immediately self-critiquing. Yes, sometimes the Python code I get doesn’t work, but it always gets it right on the second try. Gen AI is literally going to change the world and you seem to not realize that.

I posted the comment as more or less a “hmmm, that’s interesting”. If it bothers you…

Hey quesohusker,

haha, you’re completely wrong here. You can do nice things with machine learning and generative systems (and I do too), but mainly when it’s you who selected the data to feed it for a specific purpose. There are big problems with public machine-learning bots: the public doesn’t know what they were fed, whether that data was allowed to be used at all, and which biases are applied. It’s of course different if it’s you who selects and biases the data set.

I am a programmer. The generative AI tools that are available publicly today require that you understand and verify the code completely. You simply can’t trust it to be correct. I can leave out these tools and be done faster. But of course, it requires me to be a programmer.


And I hire econometricians and programmers…and I would rather be in my position than yours in five years. But you do you.

You know, you don’t have to win every argument or prove you’re the smartest guy in the room on every post. No one likes a know-it-all.

This post has not reached the level needed to deserve my attention. Nobody has attacked you, and yet you feel attacked. Poor.


So how do I know if it works the first time? If I run it and it fails, then ask a second time or simply assume it will be wrong the first time and ask twice? Wouldn’t I be better off letting a stupid tool like Fusion write my G-code because I’ve never had to ask it to try again to get a working G-code file? Or are gen AIs only getting python wrong the first time?

I think you’re missing it…
Generative AI doesn’t necessarily perform as well as an industry expert, but it breaks down the barriers to allow significant cross-industry collaboration and polymath work.

I have zero career experience with programming, yet I can scrape, extract, transform and load any data I want. I can use all the Python natural language processing libraries to analyze and interpret the data, and I can use pandas, matplotlib and scikit-learn to manipulate and represent the data.
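A tiny example of what that kind of workflow looks like in practice; the sensor names and values below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical raw data standing in for whatever was scraped or extracted.
raw = pd.DataFrame({
    "sensor": ["a", "a", "b", "b", "b"],
    "reading": [1.0, None, 2.5, 3.5, 4.0],
})

# Transform: drop incomplete rows, then aggregate per sensor.
clean = raw.dropna(subset=["reading"])
summary = clean.groupby("sensor")["reading"].mean()
print(summary.to_dict())
```

Three lines of cleaning and aggregation like this are exactly the sort of glue code a chatbot tends to get right, since thousands of near-identical examples exist in its training data.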

I can use computer vision libraries to make facial recognition systems and hand gesture recognition systems.

I can also make Swift iOS apps, program in Rust, and do front- and back-end work for web applications.

Gen AI will 100% uproot programming… it is immediately verifiable. If a library wasn’t part of the training data, you can provide the model with the developer documentation and continue onward. It automates docstring and code-comment creation. The list goes on and on. It’s like going from manual math to using a calculator… it’s coding at speed.

Not only can anyone make great song lyrics, poems, stories, etc., you can also make artistic images, music, and applications.

Everyone has the capability to create whatever they want. It is really the dawn of the creation age.

Hey Cybermancer,

what you write shows there are two kinds of people: the ones who deeply understand how “AI” and machine learning work, and the others who don’t. The difference results mainly in radically different expectations for the future.

Of course, new technologies can be put to good use, and I think there always comes a time when a new technology is demystified and becomes part of everyday life. Then the exaggerated enthusiasm we can observe nowadays, concerning areas that the new technology will never fulfill, also ends.

Of course you can use new techniques sensibly. For example, I use machine learning by taking a huge series of external camera images as a database and teaching the box to recognize when a particular car is coming.

I program a lot and often have to understand code that someone else has written. I like to read code from someone who has an excellent and sophisticated programming style. I admire that. I don’t want to study programs written by something that will never understand what it’s writing, that’s dumb as a slice of bread and has stolen everything from somewhere.

I like reading books. I have no desire to read something written by something that neither knows nor understands what it is writing, and never will; that is as stupid as a slice of bread and has stolen everything from somewhere.

When I deliver code, or a book, I am responsible for every single letter. That’s why, if I take code that comes from somewhere, be it from a copilot algorithm or from another programmer, I ALWAYS have to understand and be able to reproduce every single letter of it. So either I write it myself from the beginning and am done faster, or I trust someone who I know understands what they are writing. And the latter is a condition that can never, never, never be fulfilled by an “AI”. That’s the difference when you know how AI works: the realistic future expectations.


I haven’t coded since college. Really put that comp sci degree to good use.

I like how I can get in the ballpark with ChatGPT or similar and just fix minor issues from there. Now, for me, that’s typically simple tasks like Arduino jobs and home-automation scripting. For something more elegant, critical, or needing to be perfectly written and structured, you need a pro. For a script to automate some doodad or upload this or that at intervals, ChatGPT saves me from the previous hell of searching Stack Overflow for the closest possible project and shoving it into place.

Hey Echd, hey all,

maybe the difference from today’s youngsters is that I learned coding in 1981, and after BASIC it was directly Z80 & Intel 808x assembler, then K&R C, and then various other languages, including more assembly languages. I have not stopped coding since. We had to learn to write elegant and sophisticated code. We had to know how a compiler works internally. We had to write not only code that works, but also code that someone else could easily follow and understand. A software company can depend on that. I know many examples of companies that really got into trouble when their main programmer left, as nobody actually knew the code, and they had not thought of writing it in a way that would enable someone else to follow and understand it easily. Code has to be beautiful, both for the human and for the processor. Today they write banana code (delivered green, ripens at the customer’s).


Wait, so which one are you and which one am I in this scenario?
I like getting accurate bank withdrawals… that doesn’t mean I’m going to bypass the ATM every time and force the bank teller to manually count my cash and change for me…

I know exactly how word embeddings, cosine similarity, n-gram analysis, bag-of-words, sentiment analysis, and k-means clustering algorithms work. I also know how object detection works with YOLO, and I can use the COCO dataset to detect a car, a license plate and an image of the driver, and can use huggingface/transformers to fine-tune on my own dataset as needed. I also know how to use PyTesseract OCR to extract the characters and can save it all to an Elasticsearch database with your facial embeddings. I can also run Selenium and PyAutoGUI to automate capturing additional relevant data, etc. At the current stage, the AI makes you worse at the things you’re excellent at, but makes you great at a lot of things if you are willing to expand your perspective.
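For readers who haven’t met these terms: bag-of-words plus cosine similarity fits in a few lines of plain Python. The sentences below are toy examples, and this skips real tokenization and weighting such as TF-IDF:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words: map each word to its count."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse count vectors (1.0 = same direction)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

v1 = bow("generative ai writes code")
v2 = bow("ai writes working code")
print(cosine_similarity(v1, v2))  # 3 shared words out of 4 each -> 0.75
```

Learned embeddings replace these raw counts with dense vectors, but the similarity measure is the same.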

Besides, you don’t need to know the deep physics behind every equation to get a valuable result… if you know the equation, its application, and the variables, you are going to get something that may be interesting. I know the value for gravity is 9.8 m/s²… I don’t have to understand the solar system’s influence on the earth’s gravitational value to use the number properly in equations.
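In that spirit, plugging g = 9.8 m/s² into the constant-acceleration formulas gives useful answers with no deeper theory required:

```python
g = 9.8   # m/s^2, standard surface gravity
t = 2.0   # seconds of free fall from rest

d = 0.5 * g * t ** 2  # distance fallen: d = ½·g·t²
v = g * t             # speed reached:  v = g·t
print(f"fell {d:.1f} m, moving at {v:.1f} m/s")
```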

If you think it’s as stupid as a piece of bread, you clearly don’t understand word embeddings and the value of the dataset, and you are naive to think ideas are original… this isn’t a vacuum… every idea is a dilution of one’s experiences…

I’d recommend doing some research on BERT, retrieval-augmented generation and agentic systems… once you can comprehend that, you’ll have a sense of what the future is. It’s probabilistic math… it’s all about the data set. If you want the results to be less bland, tweak the temperature values of the model.
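Temperature is just a divisor applied to the model’s raw scores before the softmax. A minimal sketch with toy logits (no real model involved):

```python
import math

def softmax(logits, temperature=1.0):
    """Scores -> probabilities; lower temperature sharpens, higher flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # toy scores for three candidate tokens
sharp = softmax(logits, temperature=0.5)  # near-deterministic, "bland" output
bland = softmax(logits, temperature=2.0)  # flatter distribution, more varied sampling
print(max(sharp), max(bland))
```

Low temperature concentrates probability mass on the top token; high temperature spreads it out, which is why raising it makes sampled text less repetitive.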

I would love to have some samples of code from you and your colleagues… sounds like if I fine-tuned a model on that dataset, it would have great examples to replicate! Boost that bad boy with some relevant samples from LeetCode and Codewars and I’ll have a good tool to work with! I’ll throw in some unit tests for good measure/best practices :wink:

At the moment, the purpose of this discussion escapes me.

:triangular_flag_on_post: OT

I should have included that it would be beneficial to research chain-of-thought prompt engineering. I find it helps mitigate hallucinations, especially when the model is near its token capacity. I thought your interjections with the following opened the door and warranted a rebuttal, and I was explaining why I disagreed with your opinion. Your response to the OP indicated it was appropriate to express and communicate disagreement.


“I ask myself when will these useless ChatGPT posts stop”

“Massive quality deficiencies in ChatGPT as a programming aid”

“Gen AI is literally going to change the world and you seem to not realize that.”

“haha, you’re completely wrong here.”

“what you write shows there are two kinds of people”

Best of luck

“According to the KPMG survey, 77 percent of executives believe that generative AI will have a bigger impact on broader society in the next three to five years than any other emerging technology.”

“When it comes to the impact of generative AI at the enterprise level, 78% of respondents say that it will have a high or extremely high impact on driving innovation.”