A no-BS guide for complete beginners
Chatbots are becoming more powerful and accessible than ever. In this tutorial, you’ll learn how to build a simple chatbot using Streamlit and OpenAI’s API in just a few minutes.
Prerequisites
Before we start coding, make sure you have the following:
- Python installed on your computer
- A code editor (I recommend Cursor, but you can use VS Code, PyCharm, etc.)
- An OpenAI API key (we’ll generate one shortly)
- A GitHub account (for deployment)
Step 1: Setting Up the Project
We’ll use Poetry for dependency management. It simplifies package installation and versioning.
Initialize the Project
Create a folder for the project (I'll call it streamlit-chatbot), open a terminal inside it, and run:
# Initialize a new Poetry project
poetry init
# Create a virtual environment and activate it
poetry shell
Install Dependencies
Next, install the required packages:
poetry add streamlit openai
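Once poetry add finishes, the dependencies are recorded in your pyproject.toml. The exact layout and version constraints depend on your Poetry version; the snippet below is only an illustration of what the relevant section might look like in a Poetry 1.x-style file:

[tool.poetry.dependencies]
python = "^3.11"      # illustrative; Poetry fills in your Python version
streamlit = "^1.31"   # illustrative version constraint
openai = "^1.12"      # illustrative version constraint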
Set Up OpenAI API Key
Go to the OpenAI platform and generate an API key. Then, create a .streamlit/secrets.toml file in your project folder and add:
OPENAI_API_KEY="your-openai-api-key"
Make sure to never expose this key in public repositories!
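One simple way to enforce this is to exclude the secrets file from version control before you commit anything (you will push this project to GitHub in Step 4). A minimal .gitignore in the project root is enough:

# .gitignore — keep local secrets out of the repository
.streamlit/secrets.toml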
Step 2: Creating the Chat Interface
Now, let's build our chatbot's UI. In the project folder, create a file called app.py with the following code:
import streamlit as st
from openai import OpenAI

# Access the API key from Streamlit secrets
api_key = st.secrets["OPENAI_API_KEY"]
client = OpenAI(api_key=api_key)

st.title("Simple Chatbot")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display previous chat messages
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Chat input
if prompt := st.chat_input("What's on your mind?"):
    st.session_state.messages.append(
        {"role": "user", "content": prompt}
    )
    with st.chat_message("user"):
        st.markdown(prompt)
This creates a simple UI where:
- The chatbot maintains a conversation history.
- Users can type their messages into an input field.
- Messages are displayed dynamically.
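At this point you can already try the interface locally. Assuming you are inside the Poetry project from Step 1, start Streamlit's development server:

# Run the app locally; Streamlit serves it at http://localhost:8501 by default
poetry run streamlit run app.py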
Step 3: Integrating OpenAI API
Now, let's add the AI response logic. Place it inside the if prompt := ... block from Step 2, indented one level, so the model is only called after the user sends a message:
    # Get assistant response
    with st.chat_message("assistant"):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": m["role"], "content": m["content"]}
                for m in st.session_state.messages
            ],
        )
        assistant_response = response.choices[0].message.content
        st.markdown(assistant_response)

    # Add assistant response to chat history
    st.session_state.messages.append(
        {"role": "assistant", "content": assistant_response}
    )
This code:
- Sends the conversation history to OpenAI’s GPT-3.5-Turbo model.
- Retrieves and displays the assistant’s response.
- Saves the response in the chat history.
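The call above waits for the full completion before rendering anything. If you would like the reply to appear token by token, one optional variation (not required for the tutorial, and assuming Streamlit 1.31 or newer for st.write_stream) is to request a streamed response and let Streamlit render the chunks as they arrive. It replaces the assistant block above and also lives inside the if prompt block:

# Optional variation: stream the reply token by token (requires Streamlit >= 1.31)
with st.chat_message("assistant"):
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": m["role"], "content": m["content"]}
            for m in st.session_state.messages
        ],
        stream=True,
    )
    # st.write_stream renders chunks as they arrive and returns the full text
    assistant_response = st.write_stream(
        chunk.choices[0].delta.content or "" for chunk in stream
    )

st.session_state.messages.append(
    {"role": "assistant", "content": assistant_response}
)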
Step 4: Deploying the Chatbot
Let’s make our chatbot accessible online by deploying it to Streamlit Cloud.
Initialize Git and Push to GitHub
Run these commands in your project folder:
git init
git add .
git commit -m "Initial commit"
Create a new repository on GitHub and do not initialize it with a README. Then, push your code:
git remote add origin https://github.com/your-username/your-repo.git
git push -u origin master   # or 'main', depending on your default branch name
Deploy on Streamlit Cloud
- Go to Streamlit Cloud.
- Click New app, connect your GitHub repository, and select app.py as the main file.
- In Advanced settings, add your OpenAI API key in Secrets (the format is shown after this list).
- Click Deploy and your chatbot will be live! 🚀
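The Secrets field on Streamlit Cloud expects the same TOML format as your local .streamlit/secrets.toml, so you can paste the same line there:

OPENAI_API_KEY = "your-openai-api-key"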
Conclusion
Congratulations, you’ve built and deployed a chatbot using Streamlit and OpenAI. This is just the beginning — here are some ideas to improve it:
- Add error handling for API failures.
- Use different GPT models for varied responses.
- Allow users to clear the chat history (a quick sketch follows this list).
- Integrate retrieval-augmented generation (RAG) so the bot can answer from your own documents.
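As a quick example of the third idea, clearing the history only takes a few lines, since the whole conversation lives in st.session_state. This is a minimal sketch; the sidebar placement and button label are my own choices:

# A sidebar button that wipes the stored conversation
if st.sidebar.button("Clear chat history"):
    st.session_state.messages = []
    st.rerun()  # rerun the script so the emptied history is shown immediately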
I hope you enjoyed this tutorial! If you found it helpful, feel free to share it.
The full code is available here.
Feel free to reach out to me if you would like to discuss further; it would be a pleasure (honestly).