Spit a Rap.

No one enjoys waiting on slow API responses. This chatbot lets you craft a rap song simply by entering a city and a name of your choice, and streaming makes the experience feel dramatically faster. Developers building applications on Large Language Models (LLMs) often face latency issues: generating a full response can take several seconds, and chaining multiple API calls compounds the delay, leading to a poor user experience. Streaming minimizes this perceived delay by serving output as it is generated, token by token, in real time. It doesn't reduce the total response time, but it improves perceived responsiveness by showing progress instantly, which matters especially in chat applications.
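The idea above can be sketched without any external API. Here is a minimal, self-contained simulation: `generate_rap` is a hypothetical stand-in for an LLM call that yields tokens one at a time (the way an SDK's streaming mode would), and the consumer renders each token the moment it arrives rather than buffering the full response.

```python
def generate_rap(city, name):
    """Stand-in for a streaming LLM call: yields the response one
    token at a time instead of returning it all at once."""
    lyric = f"Yo {name}, straight outta {city}, watch the words flow"
    for token in lyric.split(" "):
        yield token + " "

def stream_to_user(city, name):
    """Render each token as soon as it arrives. Total generation time
    is unchanged, but the first word appears almost immediately."""
    chunks = []
    for token in generate_rap(city, name):
        print(token, end="", flush=True)  # user sees progress in real time
        chunks.append(token)
    print()
    return "".join(chunks)

stream_to_user("Atlanta", "Ray")
```

With a real LLM SDK the structure is the same: the streaming response is an iterable of chunks, and the loop body prints each chunk as it lands.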