Build A Local LLM

Build a fully custom AI application using top machine learning libraries such as PyTorch, TensorFlow, Hugging Face, and OpenAI. This tool generates all the frontend and backend code for your AI. Follow the installation instructions to host the AI live on the web.

Your Code:

Deploy A Local LLM On The Web

Congrats on generating the LLM code! The next step is to deploy the LLM on the web. Follow the instructions below:

General Instructions:

Code generated for local LLMs is mostly written in HTML and Python. To deploy it on the web, we will therefore use PyScript, a framework that runs Python directly in the browser. This guide will walk you through the process.

Important URLs:

PyScript framework :
PyScript platform :

Simple Steps:

First, create a free account on the PyScript platform and create a new project.
The project comes with three files: index.html, main.py, and pyscript.toml.

Installing packages:

Every AI/LLM system requires you to install several libraries (also called packages) into the environment. Sttabot lists the required packages at the end of the generated code base. For example, if your LLM requires the following packages to be installed:

					pip install torch flask

You will need to open the pyscript.toml file and add the following line there:

					packages = ["flask", "torch"]
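For reference, a complete pyscript.toml might look like the sketch below. The name and description values are placeholders, and note one caveat: only packages that are pure Python, or that have been built for Pyodide (the Python runtime PyScript uses), can actually be loaded in the browser, so some heavy packages may not be available there.

```toml
# Sketch of a pyscript.toml for this project; name and description
# are placeholder values, replace them with your own.
name = "my-local-llm"
description = "Chatbot generated with Sttabot"

# Packages listed here are installed into the browser's Python
# environment. Only pure-Python packages, or packages built for
# Pyodide, can be loaded.
packages = ["flask", "torch"]
```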

Building the frontend:

You will then need to open the index.html page and paste all the HTML, CSS, and JavaScript code there. If you do not have a combined file, you can always add separate files for JavaScript and CSS, named script.js and style.css, and link them from index.html.
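As a sketch, an index.html for a PyScript project can be wired up as shown below. The PyScript release version in the URLs is only an example (check pyscript.net for the current release), and style.css and script.js are the optional separate files mentioned above.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>My Local LLM</title>
    <!-- PyScript core assets; the version below is an example,
         check pyscript.net for the latest release URL. -->
    <link rel="stylesheet" href="https://pyscript.net/releases/2024.1.1/core.css" />
    <script type="module" src="https://pyscript.net/releases/2024.1.1/core.js"></script>
    <!-- Your own stylesheet and JavaScript, if kept in separate files. -->
    <link rel="stylesheet" href="style.css" />
    <script defer src="script.js"></script>
  </head>
  <body>
    <h1>My chatbot</h1>
    <!-- Runs main.py in the browser using the pyscript.toml config. -->
    <script type="py" src="./main.py" config="./pyscript.toml"></script>
  </body>
</html>
```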

Building the backend:

Finally, copy the Python (backend) code generated by Sttabot and paste it into the main.py file.
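As an illustration, a minimal main.py might be structured like the sketch below. Here `generate_reply` is a hypothetical placeholder for the inference logic Sttabot generates for you, not part of any real API; in the browser, PyScript would connect it to your page's input and output elements.

```python
# Minimal sketch of a main.py backend for a PyScript project.
# generate_reply is a hypothetical stand-in for your model's
# inference code; replace its body with the logic Sttabot generated.

def generate_reply(prompt: str) -> str:
    """Return the chatbot's answer for a user prompt."""
    prompt = prompt.strip()
    if not prompt:
        return "Please type a message first."
    # A real LLM call would go here instead of a simple echo.
    return f"Echo: {prompt}"

# Outside the browser you can exercise the function directly:
if __name__ == "__main__":
    print(generate_reply("Hello, bot!"))
```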

Final Deployment:

Now click the Save & Run button and your chatbot is live. Copy the URL and share it with anyone.

Sign up at Sttabot for free.
