Post by joitarani333 on Apr 30, 2024 7:49:05 GMT 3
A bit of clarification. First, we provide an API key to configure the OpenAI instance:

import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

With this line we tell Next.js to use the edge runtime to handle API requests:

export const runtime = 'edge'

To extract the messages field from the body of the JSON request, we use the POST function. Next, we use the openai.chat.completions.create method to send these messages to the gpt-3.5-turbo model for processing. This method generates responses based on the messages entered. We set the stream parameter to true, because this allows us to transmit responses in real time. After receiving a response from OpenAI, the code transforms it into a convenient text-stream format using the OpenAIStream function. Finally, the function wraps the stream created by OpenAI in a StreamingTextResponse and sends it back:

export async function POST(req: Request) {
  const { messages } = await req.json()
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  })
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}

We now have an API route that accepts and processes user requests through the OpenAI API. Let's move on to creating the UI for our bot.

Developing a user interface

Before starting to work on the UI, let's define constants for the initial chat messages, which we will use as a custom command to direct the behavior of the bot being developed.
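As a sketch of what such initial-message constants might look like, here is one possible shape. The type, the ids, and the system prompt text below are placeholder assumptions, not taken from the post; a "system" message is the usual way to give the model a standing instruction that steers its behavior.

```typescript
// Minimal message shape matching what the chat API expects.
type ChatMessage = {
  id: string
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Hypothetical initial messages: the system prompt acts as a custom
// command directing the bot's behavior. The wording is a placeholder.
const initialMessages: ChatMessage[] = [
  {
    id: 'init-1',
    role: 'system',
    content: 'You are a helpful assistant. Answer briefly and politely.',
  },
]
```

These constants can later be passed to the chat UI so every conversation starts with the same steering instruction.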