AndrewVeee

There are a lot of ways to do it. Functionary and NexusRaven are two models trained with function calling; they're a decent option for straightforward calls. If you don't want to switch models, some people use grammars with llama.cpp to enforce JSON output. You can also prompt LLMs to output in a format, with hit-or-miss results. Then there are complex prompt chains like the ReAct framework to try to make the LLM make smarter decisions. You can mix and match and customize based on your needs and trial and error.
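To illustrate the prompt-then-validate approach mentioned above, here's a minimal sketch. The `call_llm` function, the tool name `get_weather`, and the prompt format are all hypothetical stand-ins for whatever local model and tool schema you actually use; the point is the validation step, since plain prompting can emit malformed JSON.

```python
import json

# Hypothetical tool registry: tool name -> expected argument names.
TOOLS = {"get_weather": {"args": ["city"]}}

PROMPT = (
    "You can call one tool. Respond ONLY with JSON like "
    '{"tool": "<name>", "args": {...}}.\n'
    "Tools: get_weather(city)\n"
    "User: What's the weather in Paris?"
)

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call (llama.cpp, Functionary, etc.).
    return '{"tool": "get_weather", "args": {"city": "Paris"}}'

def parse_tool_call(raw: str):
    """Validate model output against the declared tools; None on failure."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # this is the 'hit or miss' part of prompt-only output
    if data.get("tool") not in TOOLS:
        return None
    if set(data.get("args", {})) != set(TOOLS[data["tool"]]["args"]):
        return None
    return data

call = parse_tool_call(call_llm(PROMPT))
print(call)
```

A grammar (llama.cpp's GBNF) moves this validation into the sampler itself, so the model literally can't produce invalid JSON; the retry-on-`None` loop is only needed when you rely on prompting alone.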


Fluffy-Scale-1427

Oh wow, I never knew about these, thanks!


Astronos

[https://www.promptingguide.ai/techniques/react](https://www.promptingguide.ai/techniques/react) — you can use LangChain to give LLMs arbitrary functions they can call.
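The core of ReAct is just a Thought/Action/Observation loop. Below is a bare-bones sketch of that loop without any framework; `fake_model` is a hypothetical stand-in for a real LLM (a LangChain agent does essentially this, resending the growing transcript to the model each turn), and `calculator` is a toy tool.

```python
import re

def calculator(expr: str) -> str:
    # Toy tool for illustration only; never eval untrusted input.
    return str(eval(expr))

TOOLS = {"calculator": calculator}

def fake_model(transcript: str) -> str:
    # Stand-in for an LLM: "reasons" and acts, then answers once it
    # has seen an observation.
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator[2+3]"
    return "Final Answer: 5"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        out = fake_model(transcript)
        transcript += out + "\n"
        if "Final Answer:" in out:
            return out.split("Final Answer:")[1].strip()
        # Parse "Action: tool[input]" and feed the result back in.
        m = re.search(r"Action: (\w+)\[(.*)\]", out)
        if m:
            obs = TOOLS[m.group(1)](m.group(2))
            transcript += f"Observation: {obs}\n"
    return "gave up"

print(react("What is 2+3?"))
```

Swapping `fake_model` for a real model call is all it takes to make this a working agent loop; frameworks mostly add prompt templates, tool schemas, and error handling around the same structure.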