If you depend entirely on OpenAI’s API to power your AI apps, not only do you miss out on features that other platforms offer, but you also risk being priced out overnight if they decide to raise API costs (as Twitter did a few weeks ago). And that’s not all: you are also exposed to data-law and confidentiality risks. (Read how OpenAI’s new data policies are nothing but thin air: https://sttabot.io/openais-data-… )

When we launched Supervised (Sttabot v2.0), one user pointed out this issue and suggested we work on a technology called Local LLMs. Over time, we built exactly that. It is simple but revolutionary: you can build AI apps with more than what OpenAI or DeepMind currently offers.

In coming updates, we plan to introduce two major things in Local LLMs – ‘On-site deployment’ and ‘LLAMA integration’. Currently, the AI you build has to be deployed on services like PyScript.com (don’t worry, we have already put detailed installation instructions up there). In a coming update, however, you will be able to deploy your AI to the cloud directly from Sttabot. Second, we have yet to train the model to build AI apps using LLAMA. This will be a breakthrough because of LLAMA’s potential and open-source nature, and we will add it in a coming update as well.

So far, Local LLMs has been tested by 210 users at Sttabot and is rated a whopping 4.9/5.0. With this release, the feature goes public for all 10,000+ of our users, as well as for anyone trying Sttabot for the first time.