Welcome to the LLM Spark documentation. LLM Spark is a development platform for building production-ready LLM apps. It offers a range of powerful features, including multi-LLM prompt testing, connecting external data to LLMs, team collaboration, version control, observability tools, and a template directory for leveraging prebuilt templates. With LLM Spark, you have the tools you need to streamline your development process and bring your AI ideas to life with ease.
Create Your Workspace: The first step is to set up your workspace. This is where all of your work in LLM Spark lives.
Enter Your LLM Keys: Provide your LLM provider API keys to authenticate and access the language models. Make sure you have the necessary credentials in place before you begin.
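However you enter keys in LLM Spark's UI, a common pattern on the development side is to keep provider keys in environment variables rather than in source code. The sketch below is generic and not specific to LLM Spark; the variable names `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` follow each provider's own convention.

```python
import os

def load_llm_keys(required=("OPENAI_API_KEY",), optional=("ANTHROPIC_API_KEY",)):
    """Collect provider API keys from the environment.

    Raises KeyError for any missing required key, so a misconfigured
    environment fails fast instead of surfacing later as an auth error.
    """
    keys = {}
    for name in required:
        if name not in os.environ:
            raise KeyError(f"Missing required credential: {name}")
        keys[name] = os.environ[name]
    for name in optional:
        if name in os.environ:
            keys[name] = os.environ[name]
    return keys
```

Failing fast on missing credentials keeps authentication problems at startup, where they are easy to diagnose.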
Test Prompts on Multiple Scenarios: Start testing your prompts against different scenarios using various language models. This is the stage where you adjust your prompt and inputs until you get the desired outputs.
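LLM Spark handles multi-model comparison in its UI, but the underlying idea can be sketched in plain Python: render one prompt template over several input scenarios and collect each model's answer side by side. The `call_model` argument below is a stand-in, not LLM Spark's API; in practice it would wrap each provider's SDK.

```python
def run_scenarios(prompt_template, scenarios, models, call_model):
    """Render a prompt template against several input scenarios and
    collect each model's output for side-by-side comparison."""
    results = []
    for scenario in scenarios:
        prompt = prompt_template.format(**scenario)
        row = {"inputs": scenario}
        for model in models:
            row[model] = call_model(model, prompt)
        results.append(row)
    return results

# Hypothetical stub standing in for real provider SDK calls:
def echo_model(model, prompt):
    return f"[{model}] {prompt}"
```

Swapping `echo_model` for real provider calls turns the same loop into an actual comparison harness.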
Deploy to Production: Once you're satisfied with your prompt and it meets your requirements, deploy it for your production use case.
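A deployed prompt is typically invoked over HTTP. The endpoint path and payload fields below are assumptions for illustration, not LLM Spark's documented API; a helper that only assembles the request keeps the sketch runnable offline.

```python
import json

def build_invoke_request(base_url, deployment_id, api_key, variables):
    """Assemble the URL, headers, and JSON body for invoking a deployed
    prompt. The path and field names here are hypothetical."""
    return {
        "url": f"{base_url}/deployments/{deployment_id}/invoke",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"variables": variables}),
    }
```

The returned dict can be passed to any HTTP client; keeping request construction separate from sending also makes it easy to unit-test.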
Monitor Usage with Analytics: After deployment, monitor your prompt's usage and performance with analytics. This lets you track how well your deployed prompt is working and make adjustments as needed.
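Beyond the built-in dashboards, the kind of metrics worth watching can be illustrated with a small aggregation over call logs. The record schema (`latency_ms`, `tokens`) is illustrative, not LLM Spark's export format.

```python
def summarize_usage(calls):
    """Aggregate latency and token usage from a list of call records,
    each a dict with 'latency_ms' and 'tokens' fields (illustrative schema)."""
    if not calls:
        return {"count": 0}
    latencies = sorted(c["latency_ms"] for c in calls)
    total_tokens = sum(c["tokens"] for c in calls)
    # Nearest-rank style p95 over the sorted latencies:
    p95_index = min(len(latencies) - 1, int(0.95 * len(latencies)))
    return {
        "count": len(calls),
        "avg_latency_ms": sum(latencies) / len(latencies),
        "p95_latency_ms": latencies[p95_index],
        "total_tokens": total_tokens,
    }
```

Tracking tail latency and token totals alongside averages is usually what reveals a regression after a prompt change.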
Prompt Testing across multiple scenarios with Multi LLM