
Integrate with your application

When it comes to integrating Comet with your LLM-powered application, there are two common paths:

  1. Add the integration as close to the model as possible. This could be in your inference pipeline if you host your own models, or at an API gateway, for example.
  2. Add the integration in the LLM-powered application itself. This has the added benefit of being able to log human feedback to the Comet platform for further analysis.

We generally recommend integrating in the inference pipeline when this is an option, as it means that all requests are logged to a centralized location, giving you more visibility into how LLMs are used across your organization.

Integrating in the LLM-powered application, on the other hand, makes it straightforward to log user feedback alongside each prompt for further analysis.

Once you have decided on an integration path, you can implement it either with the comet-llm SDK or with log-forwarding.

Integrate with the comet-llm SDK

Integrating using the LLM SDK is the simplest approach, as it takes just a few lines of code to log a prompt / response pair:

import comet_llm

comet_llm.log_prompt(
    api_key="<Your API Key>",
    prompt="<Your prompt>",
    output="<Your response>",
    metadata={
        "model": "llama2"
    }
)
This code is all you need to log a prompt to Comet. You can learn more about logging chains or chat conversations here.
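
As a minimal sketch, a multi-step chain can be logged with the chain helpers in comet_llm (start_chain, Span and end_chain); the step names, inputs and outputs below are placeholders:

import comet_llm

# Start a chain for a single user request
comet_llm.start_chain(inputs={"user_question": "<Your question>"})

# First step: retrieve context (placeholder logic)
with comet_llm.Span(category="retrieval", inputs={"query": "<Your question>"}) as span:
    documents = ["<retrieved document>"]
    span.set_outputs(outputs={"documents": documents})

# Second step: call the model (placeholder logic)
with comet_llm.Span(category="llm-call", inputs={"prompt": "<Your prompt>"}) as span:
    answer = "<Your response>"
    span.set_outputs(outputs={"answer": answer})

# Close the chain and record its final output
comet_llm.end_chain(outputs={"answer": answer})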

You can use the metadata field to log human feedback; this information will then be available in the Comet platform for further analysis.
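
For example, if your application collects a thumbs-up / thumbs-down rating, you could attach it alongside the prompt as in the sketch below; the feedback field names are illustrative and entirely up to you:

import comet_llm

comet_llm.log_prompt(
    api_key="<Your API Key>",
    prompt="<Your prompt>",
    output="<Your response>",
    metadata={
        "model": "llama2",
        # Illustrative feedback fields, pick names that fit your application
        "user_feedback_score": 1,  # e.g. 1 = thumbs up, 0 = thumbs down
        "user_feedback_comment": "<Optional free-text comment>"
    }
)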

Integrate using log-forwarding

Log forwarding allows you to decouple your production inference pipeline or application from your logging mechanism. This approach works well if you are operating in a cloud environment where log forwarding is already a well-defined concept.

SageMaker

If you are using SageMaker to host an open source model, you can use its data capture functionality to write the request and response payloads to an S3 bucket. You can then implement a Python Lambda function with an S3 trigger that uses the comet-llm SDK described above to send the captured prompts to Comet.
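
For reference, enabling data capture when deploying the endpoint might look like the sketch below, assuming you deploy with the SageMaker Python SDK and model is an already-configured model object; the bucket name and instance type are placeholders:

from sagemaker.model_monitor import DataCaptureConfig

# Capture every request / response payload to S3; lower the sampling
# percentage if you only need a subset of traffic
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri="s3://<your-bucket>/llm-data-capture"
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    data_capture_config=data_capture_config
)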

The Lambda function first reads the captured data from S3 and then writes it to Comet:

import urllib.parse

import boto3
import comet_llm

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # The S3 trigger provides the bucket and key of the captured payload
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')

    # Read the captured request / response payload from S3
    response = s3.get_object(Bucket=bucket, Key=key)
    data = response["Body"].read()

    # Extract the prompt, output and metadata from the data object; this is
    # dependent on the format of the API endpoint
    prompt = "<placeholder>"
    output = "<placeholder>"
    metadata = {
        "<placeholder>": "<placeholder>"
    }

    # Log the prompt to Comet; the API key can be provided via the api_key
    # argument or Comet's configuration (e.g. the COMET_API_KEY environment
    # variable)
    comet_llm.log_prompt(
        prompt=prompt,
        output=output,
        metadata=metadata
    )

The full documentation for the comet_llm SDK is available here.

Other logging systems

If you are not using SageMaker, you can still apply a similar approach to the one described above: instead of writing the prompt / response pair to S3, write it to your logging system. From there, you can use the log-forwarding tooling available in most common logging tools to send prompts to Comet.
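
As an illustrative sketch, the receiving end of a log-forwarding rule could parse each forwarded record and relay it to Comet with the same SDK call; the record field names below are assumptions and will depend on what your logging tool forwards:

import json

import comet_llm


def handle_forwarded_record(raw_record: str) -> None:
    # Field names are assumptions; map them to the structure your logging
    # tool actually forwards
    record = json.loads(raw_record)

    # The Comet API key is read from the SDK configuration, e.g. the
    # COMET_API_KEY environment variable
    comet_llm.log_prompt(
        prompt=record["prompt"],
        output=record["response"],
        metadata={
            "model": record.get("model", "unknown"),
            "latency_ms": record.get("latency_ms")
        }
    )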
