Assigning roles to ChatGPT is an important part of prompt engineering. It focuses responses on specific areas so that the feedback is more relevant. When you ask ChatGPT to take a role by saying, “I want you to act as a marketing specialist,” or give it a directed task such as “Edit the following document for me,” you focus it on the appropriate areas, which can improve its responses.
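Here is a minimal sketch of what that looks like when calling ChatGPT programmatically, assuming the OpenAI Python SDK; the model name and the prompts are just placeholders for illustration:

```python
# Sketch: assigning a role with a system message before the user's request.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        # The system message is where the role goes.
        {"role": "system", "content": "I want you to act as a marketing specialist."},
        {"role": "user", "content": "Critique this tagline for a coffee brand: "
                                    "'Wake up to something better.'"},
    ],
)
print(response.choices[0].message.content)
```

In the chat interface you get the same effect by simply opening your message with the role statement.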
It can be useful to visualize how ChatGPT is organized to understand this more. Computers only process numbers, so words have to be turned into numbers, and those numbers are then organized. The model represents words as points in a high-dimensional space, and the relationships between words are captured by how close those points sit to one another. A simple example: “dog” and “wolf” are closer to each other than “dog” and “ant.”
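A toy sketch of the idea is below. The vectors are invented for illustration, not real ChatGPT embeddings, but the cosine-similarity comparison is the standard way proximity between word vectors is measured:

```python
# Toy illustration of word-embedding proximity. The three vectors are made up;
# real embeddings have hundreds or thousands of dimensions.
import numpy as np

embeddings = {
    "dog":  np.array([0.90, 0.80, 0.10]),
    "wolf": np.array([0.85, 0.75, 0.20]),
    "ant":  np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a, b):
    """Closer to 1.0 means the two words point in nearly the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["wolf"]))  # high: near neighbors
print(cosine_similarity(embeddings["dog"], embeddings["ant"]))   # lower: farther apart
```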
If you described this to me the way I just described it to you, I’d say it could never work. But apparently it does. There’s more to it than that. Much more. But that’s the idea.
I’ve spent a lot of time with ChatGPT. I’m not playing with the Turing test. I’ve been challenging it in extreme ways, asking it to do complex visualizations or use metaphors to see relationships. These activities are much more than generating the next most probable word.
And the thing that blows me away more than anything else is not that we can build chatbots to interact with us, but that humanity’s information, as embodied in our literature, documents, and so on, actually reflects patterns of our consciousness. For me this is a revelation because it points to another class of knowledge that we were previously unaware of.
When we converse with ChatGPT we are, in many ways, conversing with humanity itself, or rather the amalgamation of humanity’s knowledge as captured in our online documents. It is interesting to me because ChatGPT does have opinions about things. We have had amazing discussions on the physical sciences. We’ve traced the path of photons from a laser through a hologram, and we’ve discussed the bioelectric properties of cellular metabolism. ChatGPT is amazing at drawing correlations between metaphors or generating new theories from a set of assumptions. I love discussing science with ChatGPT.
However, ChatGPT is not as enthusiastic about other subjects. For example, in discussions about esoterica, ChatGPT can be downright cynical. Certainly, when working on controversial subjects you may have to offer ChatGPT a perspective to take. For example, I had a bit of a challenge getting ChatGPT to take my discussion of astrology seriously, but by coming from a Jungian perspective on the archetypes of the collective unconscious, I was able to go deeper into astrology.
Why does ChatGPT have these biases? Because we do, and they show up in the literature it trains on. So, in a sense, when we talk to ChatGPT we are talking to humanity. We can just ask questions.
But we have to be aware that ChatGPT operates a lot like we do. If you ask your brain a question that it couldn’t possibly answer, or doesn’t have enough information to answer, you’ll still get the best answer it can come up with. The same seems to be true of ChatGPT. If you ask it a question without enough context, it will still try to answer. This is why people get poor results from ChatGPT: they ask poor questions that don’t give enough context to get a good answer.
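To make the contrast concrete, here is a hypothetical pair of prompts for the same problem; the wording and the sample function are my own invention, but the second version supplies the role and context the first one lacks:

```python
# A vague question leaves the model guessing at what you mean.
vague_prompt = "Why is my code slow?"

# A context-rich question gives it a role, the situation, and the material to work on.
context_rich_prompt = (
    "I want you to act as a Python performance engineer. "
    "The function below reads a large CSV file line by line and takes far too long. "
    "Suggest specific optimizations.\n\n"
    "def load_rows(path):\n"
    "    rows = []\n"
    "    with open(path) as f:\n"
    "        for line in f:\n"
    "            rows.append(line.split(','))\n"
    "    return rows\n"
)
```

The second prompt tends to get back concrete, usable advice; the first gets back generalities, because generalities are all it has to work with.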
As far as I know, the current release of ChatGPT does not read minds. We have to tell it what we want to focus on, whether by giving it a role to act as, a perspective to take, or other context to work from. These general guidelines can be applied in many different ways and help in a range of situations. So, when you summon the genie, give it a role to work from and your results will be more focused.