Vicuna 13B's performance is an order of magnitude below ChatGPT's for everything but gimmicky conversational stuff. Try giving both a fairly large, task-based prompt with explicit steps and compare the results.
> Vicuna 13B performance is an order of magnitude below ChatGPT for all but gimmicky conversational stuff.
Until you connect it to external resources, I tend to think of anything you do with a "brain-in-a-jar" isolated ChatGPT as gimmicky conversational stuff anyway.
Maybe I should have phrased that better! I didn't mean that Vicuna is comparable to ChatGPT, just that it's the best Llama-based point of comparison available, since it has at least been conversationally fine-tuned.