Disagree. The short-term solution is for you to change your prompt, but it's definitely a shortcoming of the AI when the answer is strictly useless.
It’s like crime: it should be safe everywhere anytime because of police and laws, but since it’s not, you can’t go everywhere anytime. That’s not on you, but you have to deal with it.
Different language models handle prompts differently.
Other models will take your question, break it down internally, figure out what you're really asking, and then spit out an answer. It still might not be right, but it will be a better answer than that.
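For anyone curious, that "break it down internally" step is roughly a two-pass pipeline: rewrite the question first, then answer the rewrite. A minimal sketch, assuming a generic chat-completion call (`ask_model` here is a hypothetical placeholder, not any real product's API):

```python
def ask_model(prompt: str) -> str:
    """Placeholder for whatever LLM API you're actually using."""
    raise NotImplementedError

def answer_with_decomposition(question: str) -> str:
    # Pass 1: have the model restate what the user is really asking.
    clarified = ask_model(
        "Rewrite this question so the user's real intent is explicit:\n"
        + question
    )
    # Pass 2: answer the clarified version instead of the raw prompt.
    return ask_model("Answer this question:\n" + clarified)
```

The point being: the extra pass happens inside the better models, so the user never has to do it themselves.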
We're ragging on a wish.com version of a model and its response.