How to create an AI chatbot using GPT-3

Introduction to GPT-3

One of the hot topics in a couple of years, and still relevant today, is the topic of chatbots. In short, a chatbot is software that interacts with computers and humans over the Internet, allowing them to communicate.

They are computer programs designed to simulate conversations with human users.

Chatbots have already become popular tools in different areas of businesses, including customer support centers. They are often used in online customer service to help customers resolve issues and answer questions.

GPT-3 is a general-purpose artificial intelligence platform that enables developers to train, publish, and monetize AI models.

GPT-3 is a highly advanced language model that is trained on a very large corpus of text. It is surprisingly simple to operate. You feed it some text, and the model generates more, following a similar style and structure.

Building a chatbot with GPT-3 and typescript is fun! In this post, we'll create an all-in-one chatbot using the GPT-3 library, Node.js, Next.js, and MongoDB.

Surprisingly, I never wrote a single word in that introduction that you just read. It was all generated by GPT-3.

If you want to see what GPT-3 is capable of, check out this interaction I had with the bot we're going to build in this post.

GPT-3 Chatbot implementation

What is required

- You need to have the latest version of Node.js installed.
- You also need OpenAI API keys. You can get them by signing up and copying them from the OpenAI dashboard.

Creating a NextJs app with typescript

On the terminal, run: $ npx create-next-app openAi-bot --typescript

After creating the Next.js app, open it in VS Code.

Configuration

To use OpenAI, you will need access keys. Please note that these keys should not be used on the client side, for security reasons. Also, store the keys in a .env file.

Open the .env.local file and paste the key like so: OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

After configuring the OpenAI keys, let us create an API route to test them. Create a file named generate.ts in the pages/api folder.

The wonderful team at OpenAI has created an amazing model that is surprisingly good at generating text. All you need to do is provide it with a prompt and a few examples of the kind of responses you would like to get. This acts as the "training" data needed to create your chatbot from GPT-3. The following is an example of a prompt that I used:

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Human: Hello, who are you?
AI: I'm good, thank you. How are you and how may I help you today?

We can then use the above prompt to train our chatbot and start generating messages. Let's now write a function that makes a GPT-3 query. The code below shows how to use the OpenAI API for generating text.
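Here is a minimal sketch of pages/api/generate.ts. It assumes the official openai Node.js package (the v3-style Configuration/OpenAIApi classes) and that the human's text arrives in a prompt field of the request body; adjust the names to match your own setup.

```ts
// pages/api/generate.ts
import type { NextApiRequest, NextApiResponse } from "next";
import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// The "training" prompt described above, used as a prefix for every request
const basePrompt = `The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Human: Hello, who are you?
AI: I'm good, thank you. How are you and how may I help you today?
`;

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { prompt } = req.body; // the human's latest message (assumed field name)

  const completion = await openai.createCompletion({
    model: "text-davinci-002",
    prompt: `${basePrompt}Human: ${prompt}\nAI:`,
    temperature: 0.9,
    max_tokens: 150,
    stop: ["Human:", "AI:"], // stop before the model writes the next turn itself
  });

  res.status(200).json({ text: completion.data.choices[0].text });
}
```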

The completions endpoint can be used for a wide variety of tasks; it provides a simple but powerful interface to any of OpenAI's models.

You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it.

We are going to use their text completion endpoint.

To configure it, we provide the model we want to use. Their documentation lists several available models; in our case, we will use the “text-davinci-002” model, which is the most capable of them all.

Another parameter is the temperature, which controls how much randomness is in the output. In general, the lower the temperature, the more likely GPT-3 will choose words with a higher probability of occurrence. This means that we are likely to get very similar responses for a given prompt. A higher temperature value means the model will take more risks and result in varying responses for a given prompt.

The stop parameter refers to up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

You can read about the other parameters from their well-written documentation.

Testing the route

After creating the generate route, we can start our app by running: npm run dev

We can then proceed to test our endpoint (localhost:3000/api/generate) using a tool such as Postman.

Database

To store our messages, we will use MongoDB.

The first step is to create a database by signing up for a MongoDB Atlas account, getting the MongoDB connection string, and saving it in the .env.local file. Remember to restart your development server every time you make changes to the .env files.
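For example, the Atlas connection string can be stored under a variable of your choosing (MONGODB_URI is simply the name the later snippets assume): MONGODB_URI=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<database-name>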

After that, we can install the mongoose library to help in creating schemas for our data.
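On the terminal, run: npm install mongoose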

From there, we can create a folder called config in the pages/api directory. Inside the config folder, we should create a file named dbConnect.ts and add the following code:
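Here is one possible version of dbConnect.ts, assuming the Atlas connection string is stored in the MONGODB_URI environment variable:

```ts
// pages/api/config/dbConnect.ts
import mongoose from "mongoose";

const MONGODB_URI = process.env.MONGODB_URI as string; // assumed variable name

async function dbConnect() {
  // Reuse the existing connection if one is already open
  if (mongoose.connection.readyState >= 1) {
    return;
  }
  return mongoose.connect(MONGODB_URI);
}

export default dbConnect;
```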

Creating Messages Schema and controllers

In the pages/api directory, we can create a folder named models. Inside it, let's create a file named messages.ts for our schema. The code below creates a schema for our messages.
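A minimal schema might look like the following; the field names (text for the message body and sender for the "Human"/"AI" label) are my own choice, so adapt them as you see fit:

```ts
// pages/api/models/messages.ts
import { Schema, model, models } from "mongoose";

const messageSchema = new Schema(
  {
    text: { type: String, required: true },
    // Who produced the message: the human user or the AI
    sender: { type: String, enum: ["Human", "AI"], required: true },
  },
  { timestamps: true }
);

// Reuse the compiled model if it already exists (Next.js hot reloading)
const Message = models.Message || model("Message", messageSchema);

export default Message;
```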

In the pages/api directory, we then create a folder named controllers. Inside it, let us create a file named messages.ts and add the following functions:
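As a sketch, two controller functions are enough for this app: one to fetch all stored messages and one to create a new message. The names below (getMessages, createMessage) are illustrative:

```ts
// pages/api/controllers/messages.ts
import type { NextApiRequest, NextApiResponse } from "next";
import Message from "../models/messages";

// Return every stored message, oldest first
export async function getMessages(req: NextApiRequest, res: NextApiResponse) {
  const messages = await Message.find({}).sort({ createdAt: 1 });
  res.status(200).json({ success: true, data: messages });
}

// Save a new message ({ text, sender }) taken from the request body
export async function createMessage(req: NextApiRequest, res: NextApiResponse) {
  const message = await Message.create(req.body);
  res.status(201).json({ success: true, data: message });
}
```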

Let's create the messages route by creating an index.ts file in the pages/api/messages directory:
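A minimal handler, wired to the controllers and the dbConnect helper sketched above, could look like this:

```ts
// pages/api/messages/index.ts
import type { NextApiRequest, NextApiResponse } from "next";
import dbConnect from "../config/dbConnect";
import { getMessages, createMessage } from "../controllers/messages";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  await dbConnect();

  switch (req.method) {
    case "GET":
      return getMessages(req, res);
    case "POST":
      // Saves a message typed by the human
      return createMessage(req, res);
    default:
      return res.status(405).json({ success: false, error: "Method not allowed" });
  }
}
```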

We will use requests with the “POST” method to save messages created by the human to the database. Finally, we can update our OpenAI message generation route to save the text generated by the AI.
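The change to the generate route is small; a sketch (reusing the dbConnect helper and Message model from above, which would need to be imported) might end like this:

```ts
// Inside the handler in pages/api/generate.ts, after the completion returns
const aiText = completion.data.choices[0].text?.trim() ?? "";

// Persist the AI's reply so it shows up in the conversation history
await dbConnect();
await Message.create({ text: aiText, sender: "AI" });

res.status(200).json({ text: aiText });
```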

When creating the front-end part of the app, we will send the prompt text as part of the request body to two routes (api/messages and api/generate ), using an async function. The first route will save the prompt as a message with the label “Human”, while the latter will generate an AI response and save the response as a message with the label “AI”. We can therefore use these labels to separate and display our messages on the user interface.
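For example, a client-side helper (called, say, when the user submits the chat form) might look like the following, assuming the field names used in the earlier snippets:

```ts
// A simplified client-side handler for submitting a chat message
async function sendPrompt(prompt: string) {
  // Save the human's message with the "Human" label
  await fetch("/api/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: prompt, sender: "Human" }),
  });

  // Ask GPT-3 for a reply; the generate route also saves it with the "AI" label
  const response = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  const { text } = await response.json();
  return text;
}
```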

In conclusion, this post has discussed the process of creating a chatbot using AI. The team at OpenAI has created a well-written API that developers can play with to potentially create useful tools that increase human productivity. I look forward to seeing all the cool tools that will be built with GPT-3 as well as the next iteration, GPT-4.