ChatGPT Limitations

Note: I wrote the following blog post in August and then ran into technical issues. For more details see the note at the end of this post.

I co-wrote the first draft of Prompt Engineering for Everyone using the free version of ChatGPT 3.5, and I used version 3.5 exclusively to write and edit the first edition of the book. This was a big challenge because of the model’s memory constraints: we could only write about a page at a time together. My prompts tend to be long, up to a page, so I was up against that limit as well.

But ChatGPT 4.0 has changed all of this. ChatGPT has a new feature called Code Interpreter, which actually includes several major enhancements that have nothing to do with coding. Code Interpreter can not only write code but also run it in a temporary sandbox, evaluate the results, and retry. It has other important features as well: it can work with graphics and manipulate images in a limited way, and its sandbox includes 150 MB of temporary storage, which means I can upload large files and work with them.

I’m sure there are limitations to what we can do within the sandbox, but those limits are far beyond what version 3.5 could do. There are still limits on the length of input or output the model can handle at one time. It’s still common for Chat to generate a lengthy answer and stop in the middle because it has reached its token limit. When that happens, simply say, “Continue,” and Chat will pick up where it left off.
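To get a feel for why a page-long prompt presses against a 4K-token limit, here is a rough back-of-the-envelope estimator. The four-characters-per-token ratio is a common rule of thumb for English text, not an official formula, and the numbers are illustrative only:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: English text averages about 4 characters per token.
    # This is an approximation, not the tokenizer ChatGPT actually uses.
    return max(1, len(text) // 4)

# A hypothetical one-page prompt of about 500 words.
page = "word " * 500
print(estimate_tokens(page))  # roughly 625 tokens for this sample
```

A page-long prompt plus a page-long answer, repeated over a conversation, eats through a 4K budget quickly, which matches the context loss described above.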

Here are some key differences between ChatGPT-3.5 and ChatGPT-4.0, and how Code Interpreter’s functionality addresses some of these limitations.

The ChatGPT 3.5 model contains 175 billion parameters. It was not able to fine-tune itself for specific tasks or execute code that it wrote. Its 4K token limit meant that it would often lose context in the middle of a conversation. In my experience, just when things started to get interesting, Chat would lose the conversation context and we’d have to start over again.

But no more.

ChatGPT 4.0 is more than an incremental upgrade, and Code Interpreter on top of GPT 4.0 is a quantum leap in ChatGPT’s usefulness. The ChatGPT 4.0 model is reported to contain over 6 trillion parameters. It can adapt itself to specialized tasks. It raises the token limit to 8K, and there’s also supposed to be a 32K model, which I haven’t seen yet.

But the recent release of ChatGPT 4.0 Code Interpreter removes the need for large memory models in some situations. It also has some rudimentary visual capabilities, and it can not only write code but also run it in a sandbox. Code Interpreter is also capable of recovering from mistakes and trying different strategies to accomplish a goal when one doesn’t work.

I can upload multiple files and file types into Code Interpreter, up to 150 MB! This is not permanent storage, but it lasts long enough to work with during a session.
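To make the file-upload workflow concrete, here is a sketch of the kind of script Code Interpreter might write and run in its sandbox against an uploaded file. The CSV contents and the `summarize` helper are hypothetical stand-ins, not anything Code Interpreter actually produced:

```python
import csv
import io

# Hypothetical sample standing in for a file uploaded to the sandbox,
# e.g. a per-chapter manifest for a book manuscript.
sample = "title,pages\nChapter 1,12\nChapter 2,9\nChapter 3,15\n"

def summarize(csv_text: str) -> dict:
    # Parse the CSV and report row count and total page count.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    total = sum(int(row["pages"]) for row in rows)
    return {"rows": len(rows), "total_pages": total}

print(summarize(sample))  # {'rows': 3, 'total_pages': 36}
```

The point is the loop Code Interpreter enables: it can generate a script like this, execute it against your uploaded data, inspect the output, and revise the code if something goes wrong.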

Here’s a side-by-side comparison of ChatGPT-3.5 and ChatGPT-4.0:

| Feature | ChatGPT-3.5 | ChatGPT-4.0 |
| --- | --- | --- |
| Model Size (Parameters) | 175 billion | 6 trillion |
| Computation Power | High | Optimized |
| Fine-Tuning | Limited | Improved |
| Code Interpreter | No | Yes |
| Context Length | Limited | Extended |
| Multimodal Capabilities | Text only | Text and basic visuals |

Note: Advanced Data Analysis now seems to have limits on how much it can read from an uploaded file. I am no longer able to get ChatGPT to read more than a few dozen pages. Hopefully, that will change soon.