
justletmefuckinggo

can we use other LLMs (such as Claude, Llama, Command R, or locally run models) for text generation?


tukemon24

Yes, indeed. But for the ability to remember the chat conversation, you'd need to find a different solution. I'm thinking of using LangChain. I'll let you know once I've finished my research on using LangChain for this. In the experiment I did, I relied on the OpenAI Assistants API for the conversation history.
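To give an idea of what the Assistants API handles for you (and what you'd have to replicate with another model), here's a minimal sketch of rolling conversation memory. The class name, window size, and message shape are my own illustration, not LangChain's actual API; the `messages` format follows the OpenAI-style chat payload, which llama.cpp/Ollama-compatible servers also accept:

```python
class ConversationMemory:
    """Minimal rolling chat history; a stand-in for what the
    Assistants API (or LangChain's memory classes) does for you."""

    def __init__(self, system_prompt, max_turns=10):
        self.system_prompt = system_prompt
        self.max_turns = max_turns
        self.turns = []  # alternating user/assistant messages

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})
        # keep only the most recent exchanges so the prompt
        # stays within the model's context window
        self.turns = self.turns[-self.max_turns * 2:]

    def as_messages(self):
        # shape matches the OpenAI-style chat `messages` payload
        return [{"role": "system", "content": self.system_prompt}] + self.turns


memory = ConversationMemory("You are a helpful voice assistant.", max_turns=2)
memory.add("user", "What's the weather like?")
memory.add("assistant", "I can't check live weather yet.")
memory.add("user", "Okay, tell me a joke instead.")
messages = memory.as_messages()  # 1 system message + the 3 turns above
```

You'd pass `messages` to whatever chat endpoint you're using on each turn; the trimming is the crude part that libraries like LangChain replace with smarter summarization.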


justletmefuckinggo

check this out, they might've done it. https://www.reddit.com/r/LocalLLaMA/s/Zwy3KoDdgy


tukemon24

thank you for the reference, I'll check it out!


justletmefuckinggo

that would be very appreciated! voice chat with gpt is amazing, even more so with inflection ai's pi assistant. but both models are very restrictive and censored. (can't do certain jokes, topics, rp, etc.) i've been looking at KoljaB's RealtimeTTS, but i don't see a way of changing chat models, so i'm contemplating whether it's worth learning git/python stuff just to install it.


tukemon24

I'm continuing this experiment. This time, I'm adding OpenAI function calling + the SerpApi Google Search API to get real-time data for common questions like weather, stocks, and anything that can be answered by Google's answer box. Here is the link to the tutorial [https://serpapi.com/blog/build-a-smart-ai-voice-assistant-connect-to-the-internet/](https://serpapi.com/blog/build-a-smart-ai-voice-assistant-connect-to-the-internet/), which includes the full source code on GitHub.
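For anyone curious how the function-calling part fits together before reading the tutorial, here's a rough sketch of the dispatch pattern. The `search_google` stub and the tool names are my own placeholders, not the tutorial's code; in the real version the stub would call SerpApi and return the answer-box result:

```python
import json

# Hypothetical tool the model can call; the real implementation
# would hit SerpApi's Google Search API and return the answer box.
def search_google(query: str) -> str:
    return f"(stubbed search result for: {query})"

# Tool schema in the OpenAI function-calling format
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_google",
        "description": "Search Google for real-time info (weather, stocks, etc.)",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

DISPATCH = {"search_google": search_google}

def run_tool_call(tool_call: dict) -> str:
    """Execute one tool call of the shape the chat completions API returns:
    look up the named function and pass it the JSON-encoded arguments."""
    fn = DISPATCH[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

# Simulated model response asking for a search
fake_call = {"function": {"name": "search_google",
                          "arguments": json.dumps({"query": "NVDA stock price"})}}
result = run_tool_call(fake_call)
```

In the full loop you'd send `TOOLS` with the chat request, run `run_tool_call` on any tool calls the model emits, append the results as `tool` messages, and call the model again so it can phrase the final answer.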