Creates a new fine-tuning job in an existing workspace.

Example request:

curl --request POST \
  --url https://api.example.com/api/v1/organizations/{organization_id}/jobs \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "base_model_id": "<string>",
  "dataset": {
    "answer_column": "<string>",
    "id": "<string>",
    "prompt_column": "<string>"
  },
  "name": "<string>",
  "dry_run": false,
  "hyperparameters": {
    "batch_size": 8,
    "best_checkpoints": true,
    "learning_rate": 0.00001,
    "lora": {
      "alpha": 8,
      "dropout": 0,
      "enabled": false,
      "r": 8,
      "trainable_modules": [
        "<string>"
      ]
    },
    "mask_prompt_labels": false,
    "n_epochs": 1,
    "n_evals": 1,
    "warmup_ratio": 0,
    "weight_decay": 0.01
  }
}
'

Example response:

{
"dry_run": true,
"estimated_usage": {
"cost": 123,
"tokens": 123
},
"job": {
"base_model": "<string>",
"base_model_id": "<string>",
"base_model_name": "<string>",
"completed_at": "2023-11-07T05:31:56Z",
"created_at": "<string>",
"dataset": {
"answer_column": "<string>",
"id": "<string>",
"prompt_column": "<string>"
},
"hyperparameters": {
"batch_size": 123,
"best_checkpoints": true,
"learning_rate": 123,
"lora": {
"alpha": 123,
"dropout": 123,
"enabled": true,
"r": 123,
"trainable_modules": [
"<string>"
]
},
"mask_prompt_labels": true,
"n_epochs": 123,
"n_evals": 123,
"warmup_ratio": 123,
"weight_decay": 123
},
"id": "<string>",
"name": "<string>",
"started_at": "2023-11-07T05:31:56Z",
"status": "queued"
}
}Creates a new fine-tuning job in an existing workspace.
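The same call can be made from Python. The snippet below is a minimal sketch, assuming only the endpoint, headers, and payload shape shown above; the requests library, the placeholder values, and the variable names are illustrative, not part of the API.

import requests

API_BASE = "https://api.example.com/api/v1"  # base URL from the example above
ORG_ID = "<organization_id>"                 # placeholder organization ID
API_TOKEN = "<token>"                        # placeholder Bearer access token

payload = {
    "base_model_id": "<string>",
    "dataset": {
        "id": "<string>",
        "prompt_column": "<string>",
        "answer_column": "<string>",
    },
    "name": "<string>",
    "dry_run": False,
    "hyperparameters": {
        "batch_size": 8,
        "learning_rate": 1e-5,
        "n_epochs": 1,
    },
}

response = requests.post(
    f"{API_BASE}/organizations/{ORG_ID}/jobs",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
body = response.json()
print(body["job"]["id"], body["job"]["status"])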
Authorization

Bearer HTTP authentication. Pass the access token in the Authorization header: Authorization: Bearer <access_token>
Body parameters

base_model_id: The ID of the base model.
name: The name of the job.
dry_run: If true, the estimated usage of the job will be returned, but the job will not be created.
hyperparameters: The hyperparameters used to configure the job.
hyperparameters.batch_size: The number of examples used in one iteration of training. Range: x >= 1.
hyperparameters.best_checkpoints: Whether to save the best model weights during training based on validation performance.
hyperparameters.learning_rate: The learning rate used to update the model weights during training.
hyperparameters.lora: The LoRA (Low-Rank Adaptation) hyperparameters used to configure the job. LoRA provides an efficient way to adapt pre-trained models by introducing low-rank parameter updates.
hyperparameters.lora.alpha: The scaling factor for the LoRA updates. This controls the strength of the adaptation. Range: x >= 1.
hyperparameters.lora.dropout: The dropout probability used in LoRA layers. Range: 0 <= x <= 0.5.
hyperparameters.lora.enabled: Whether to use LoRA for fine-tuning.
hyperparameters.lora.r: The rank of the LoRA matrices. This determines the dimensionality of the low-rank updates. Range: x >= 1.
hyperparameters.lora.trainable_modules: The names of the modules within the model that LoRA should be applied to. Only the specified modules will be fine-tuned.
hyperparameters.mask_prompt_labels: Whether to mask the prompt labels during training.
hyperparameters.n_epochs: The number of epochs to train for. Range: x >= 1.
hyperparameters.n_evals: The number of evaluations to run within the total number of epochs, used to compute the evaluation percentage (n_epochs / n_evals). Range: x >= 0.
hyperparameters.warmup_ratio: The proportion of total training epochs to use for warm-up, used to compute the number of warm-up epochs (n_epochs * warmup_ratio); see the sketch after this list. Range: 0 <= x <= 5.
hyperparameters.weight_decay: The weight decay used to prevent overfitting.
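As a rough sketch of how the epoch-related settings interact, the snippet below builds a hyperparameters object with LoRA enabled and computes the derived quantities described above (warm-up epochs and the evaluation percentage). The values are illustrative only, not API defaults.

# Illustrative hyperparameter values, not API defaults.
hyperparameters = {
    "batch_size": 8,
    "learning_rate": 1e-5,
    "n_epochs": 4,
    "n_evals": 2,
    "warmup_ratio": 0.1,
    "weight_decay": 0.01,
    "mask_prompt_labels": True,
    "best_checkpoints": True,
    "lora": {
        "enabled": True,
        "r": 8,             # rank of the low-rank update matrices
        "alpha": 8,         # scaling factor for the LoRA updates
        "dropout": 0.05,    # dropout probability in LoRA layers (0 <= x <= 0.5)
        "trainable_modules": ["<string>"],  # placeholder module names
    },
}

# Derived quantities, per the field descriptions above.
warmup_epochs = hyperparameters["n_epochs"] * hyperparameters["warmup_ratio"]  # 4 * 0.1 = 0.4
eval_percentage = hyperparameters["n_epochs"] / hyperparameters["n_evals"]     # 4 / 2 = 2.0
print(f"warm-up epochs: {warmup_epochs}, evaluation percentage: {eval_percentage}")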
Response fields

Successful response.

dry_run: If true, the estimated usage of the job will be returned, but the job will not be created.
job: This is an object representing a fine-tuning job.
job.base_model: The name of the base model used for fine-tuning. This field is deprecated and will be removed in a future version. Please use base_model_name instead.
job.base_model_id: The ID of the base model used for fine-tuning.
job.base_model_name: The name of the base model used for fine-tuning.
job.completed_at: Time at which the job completed.
job.created_at: Time at which the object was created.
job.dataset: This is an object representing the dataset used for a job.
job.hyperparameters: The hyperparameters used to configure the job.
job.hyperparameters.batch_size: The number of examples used in one iteration of training.
job.hyperparameters.best_checkpoints: Whether to save the best model weights during training based on validation performance.
job.hyperparameters.learning_rate: The learning rate used to update the model weights during training.
job.hyperparameters.lora: The LoRA (Low-Rank Adaptation) hyperparameters used to configure the job.
job.hyperparameters.lora.alpha: The scaling factor for the LoRA updates.
job.hyperparameters.lora.dropout: The dropout probability used in LoRA layers.
job.hyperparameters.lora.enabled: Whether LoRA is enabled for fine-tuning.
job.hyperparameters.lora.r: The rank of the LoRA matrices.
job.hyperparameters.lora.trainable_modules: The names of the modules within the model that LoRA should be applied to.
job.hyperparameters.mask_prompt_labels: Whether to mask the prompt labels during training.
job.hyperparameters.n_epochs: The number of epochs to train for.
job.hyperparameters.n_evals: The number of evaluations to run within the total number of epochs, used to compute the evaluation percentage (n_epochs / n_evals).
job.hyperparameters.warmup_ratio: The proportion of total training epochs to use for warm-up, used to compute the number of warm-up epochs (n_epochs * warmup_ratio).
job.hyperparameters.weight_decay: The weight decay used to prevent overfitting.
job.id: Unique identifier for the object.
job.name: The name of the job.
job.started_at: Time at which the job started.
job.status: The status of the job. One of: queued, starting, running, completed, failed, cancelled.
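A short sketch of handling the response, assuming the shape shown in the example response above: when dry_run is true the estimated usage is read and no job is treated as created, otherwise the job object and its status (one of the values listed above) are inspected. The function and variable names are illustrative, not part of the API.

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def handle_job_response(body: dict) -> None:
    # Illustrative helper, assuming the documented response shape.
    if body.get("dry_run"):
        usage = body.get("estimated_usage", {})
        print(f"Dry run: ~{usage.get('tokens')} tokens, estimated cost {usage.get('cost')}")
        return  # the job was not actually created

    job = body["job"]
    print(f"Job {job['id']} ({job['name']}) is {job['status']}")
    if job["status"] in TERMINAL_STATUSES:
        print("started_at:", job.get("started_at"), "completed_at:", job.get("completed_at"))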