Monday, April 8, 2024

GENERATIVE API USING BEDROCK PART 1 (TEXT)

 

    1.- COHERE GENERATIVE MODEL

 

URL:

https://3glika507i.execute-api.us-east-1.amazonaws.com/default/Bedrock

 

AUTHENTICATION:

Basic Authentication

Username: Administrator

Password: ********

 

BODY:

{
    "prompt": "How many states does Mexico have?",
    "text": "",
    "max_tokens": 300,
    "top_k": 0,
    "temperature": 0.9,
    "top_p": 0.75
}


 

Where:

prompt = The command or instruction given to the generative model.

text = The information the generative model will work on. If you are only asking a question, leave this field empty.

max_tokens = The maximum number of tokens to generate in the response.

top_k = The number of most-likely token options the model samples from when generating the next token. Default=0, Minimum=0, Maximum=500.

temperature = Controls randomness; use a lower value to make the response more deterministic. Default=0.9, Minimum=0, Maximum=5.

top_p = Use a lower value to ignore less likely options; set it to 0 to disable it. Default=0.75, Minimum=0, Maximum=1.
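The request above can be sent with the Python standard library alone. A minimal sketch (the Basic Auth password is redacted in this post, so it is taken as an argument here):

```python
# Minimal sketch of calling the Cohere endpoint described above using only
# the Python standard library. The password is redacted in the post.
import base64
import json
import urllib.request

URL = "https://3glika507i.execute-api.us-east-1.amazonaws.com/default/Bedrock"

def build_request(prompt, password, text="", max_tokens=300,
                  top_k=0, temperature=0.9, top_p=0.75):
    """Assemble a POST request with the body and Basic Auth header above."""
    body = json.dumps({
        "prompt": prompt,
        "text": text,
        "max_tokens": max_tokens,
        "top_k": top_k,
        "temperature": temperature,
        "top_p": top_p,
    }).encode("utf-8")
    credentials = base64.b64encode(f"Administrator:{password}".encode()).decode()
    return urllib.request.Request(
        URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {credentials}",
        },
    )

if __name__ == "__main__":
    req = build_request("How many states does Mexico have?", "<password>")
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(resp.read().decode("utf-8"))
```

The defaults in the function signature mirror the example body, so a call only needs the prompt and the password.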








2.- ANTHROPIC GENERATIVE MODEL


URL:

https://3glika507i.execute-api.us-east-1.amazonaws.com/default/Bedrock2

 

AUTHENTICATION:

Basic Authentication

Username: Administrator

Password: ********

 

BODY:

{
    "prompt": "How many states does Mexico have?",
    "text": "",
    "max_tokens": 300,
    "top_k": 250,
    "temperature": 0.5,
    "top_p": 1
}

 

Where:

prompt = The command or instruction given to the generative model. Supports multiple languages (Spanish, English, etc.).

text = The information the generative model will work on. If you are only asking a question, leave this field empty. Also multilingual.

max_tokens = The maximum number of tokens to generate in the response.

top_k = Samples only from the top K token options, cutting off long-tailed low-probability responses. Default=250, Minimum=0, Maximum=500.

temperature = The amount of randomness injected into the response. Default=0.5, Minimum=0, Maximum=1.

top_p = Nucleus sampling: computes the cumulative probability distribution over token options in decreasing order of probability and cuts it off once it reaches the value of top_p. Default=1, Minimum=0, Maximum=1.
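Note that the two endpoints accept the same body shape but different parameter ranges (for example, temperature goes up to 5 for Cohere but only up to 1 for Anthropic). A small sketch, using the ranges listed above, that clamps values into the valid range before sending:

```python
# Valid parameter ranges per model, taken from the descriptions above.
RANGES = {
    # model: {parameter: (minimum, maximum)}
    "cohere": {"top_k": (0, 500), "temperature": (0, 5), "top_p": (0, 1)},
    "anthropic": {"top_k": (0, 500), "temperature": (0, 1), "top_p": (0, 1)},
}

def clamp_params(model, params):
    """Return a copy of params with each value clamped to the model's range."""
    limits = RANGES[model]
    out = dict(params)
    for name, (low, high) in limits.items():
        if name in out:
            out[name] = min(max(out[name], low), high)
    return out
```

For example, a temperature of 2.0 passes through unchanged for "cohere" but is clamped to 1 for "anthropic".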







PRICING

MODEL PROVIDER   MODEL                 PER 1000 INPUT TOKENS   PER 1000 OUTPUT TOKENS
Cohere           Command               0.0015 USD              0.0020 USD
Anthropic        Claude (v2.0, v2.1)   0.0080 USD              0.0240 USD
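A rough per-request cost estimate can be computed from per-1000-token prices. A sketch, assuming the Claude output price is 0.0240 USD per 1000 tokens (the AWS list price for Claude 2):

```python
# Rough per-request cost estimate from per-1000-token prices.
PRICES_USD_PER_1K = {
    # model: (input price, output price); Claude output price assumed
    # to be 0.0240 USD per 1000 tokens (the AWS list price for Claude 2).
    "cohere-command": (0.0015, 0.0020),
    "anthropic-claude-v2": (0.0080, 0.0240),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Cost in USD of one request with the given token counts."""
    price_in, price_out = PRICES_USD_PER_1K[model]
    return (input_tokens / 1000) * price_in + (output_tokens / 1000) * price_out

# Example: a 200-token prompt with a 300-token answer on Cohere Command
# costs 0.2 * 0.0015 + 0.3 * 0.0020 = 0.0009 USD.
```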
