{
"version":"2.0",
"metadata":{
"uid":"machinelearning-2014-12-12",
"apiVersion":"2014-12-12",
"endpointPrefix":"machinelearning",
"jsonVersion":"1.1",
"serviceFullName":"Amazon Machine Learning",
"signatureVersion":"v4",
"targetPrefix":"AmazonML_20141212",
"protocol":"json"
},
"documentation":"Definition of the public APIs exposed by Amazon Machine Learning",
"operations":{
"AddTags":{
"name":"AddTags",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"AddTagsInput"},
"output":{
"shape":"AddTagsOutput",
"documentation":"<p>Amazon ML returns the following elements. </p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InvalidTagException",
"exception":true
},
{
"shape":"TagLimitExceededException",
"exception":true
},
{
"shape":"ResourceNotFoundException",
"error":{"httpStatusCode":404},
"exception":true,
"documentation":"<p>A specified resource cannot be located.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
}
],
"documentation":"<p>Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you add a tag using a key that is already associated with the ML object, <code>AddTags</code> updates the tag's value.</p>"
},
"CreateBatchPrediction":{
"name":"CreateBatchPrediction",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateBatchPredictionInput"},
"output":{
"shape":"CreateBatchPredictionOutput",
"documentation":"<p> Represents the output of a <code>CreateBatchPrediction</code> operation, and is an acknowledgement that Amazon ML received the request.</p> <p>The <code>CreateBatchPrediction</code> operation is asynchronous. You can poll for status updates by using the <code>GetBatchPrediction</code> operation and checking the <code>Status</code> parameter of the result. </p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
},
{
"shape":"IdempotentParameterMismatchException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.</p>"
}
],
"documentation":"<p>Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a <code>DataSource</code>. This operation creates a new <code>BatchPrediction</code>, and uses an <code>MLModel</code> and the data files referenced by the <code>DataSource</code> as information sources. </p> <p><code>CreateBatchPrediction</code> is an asynchronous operation. In response to <code>CreateBatchPrediction</code>, Amazon Machine Learning (Amazon ML) immediately returns and sets the <code>BatchPrediction</code> status to <code>PENDING</code>. After the <code>BatchPrediction</code> completes, Amazon ML sets the status to <code>COMPLETED</code>. </p> <p>You can poll for status updates by using the <a>GetBatchPrediction</a> operation and checking the <code>Status</code> parameter of the result. After the <code>COMPLETED</code> status appears, the results are available in the location specified by the <code>OutputUri</code> parameter.</p>"
},
"CreateDataSourceFromRDS":{
"name":"CreateDataSourceFromRDS",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateDataSourceFromRDSInput"},
"output":{
"shape":"CreateDataSourceFromRDSOutput",
"documentation":"<p> Represents the output of a <code>CreateDataSourceFromRDS</code> operation, and is an acknowledgement that Amazon ML received the request.</p> <p>The <code>CreateDataSourceFromRDS</code> operation is asynchronous. You can poll for updates by using the <code>GetDataSource</code> operation and checking the <code>Status</code> parameter. You can inspect the <code>Message</code> attribute when <code>Status</code> shows up as <code>FAILED</code>. You can also check the progress of the copy operation by going to the <code>DataPipeline</code> console and looking up the pipeline using the <code>pipelineId</code> from the describe call.</p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
},
{
"shape":"IdempotentParameterMismatchException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.</p>"
}
],
"documentation":"<p>Creates a <code>DataSource</code> object from a database hosted on <a href=\"http://aws.amazon.com/rds/\">Amazon Relational Database Service</a> (Amazon RDS). A <code>DataSource</code> references data that can be used to perform <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code> operations.</p> <p><code>CreateDataSourceFromRDS</code> is an asynchronous operation. In response to <code>CreateDataSourceFromRDS</code>, Amazon Machine Learning (Amazon ML) immediately returns and sets the <code>DataSource</code> status to <code>PENDING</code>. After the <code>DataSource</code> is created and ready for use, Amazon ML sets the <code>Status</code> parameter to <code>COMPLETED</code>. A <code>DataSource</code> in the <code>COMPLETED</code> or <code>PENDING</code> state can be used only to perform <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code> operations. </p> <p> If Amazon ML cannot accept the input source, it sets the <code>Status</code> parameter to <code>FAILED</code> and includes an error message in the <code>Message</code> attribute of the <code>GetDataSource</code> operation response. </p>"
},
"CreateDataSourceFromRedshift":{
"name":"CreateDataSourceFromRedshift",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateDataSourceFromRedshiftInput"},
"output":{
"shape":"CreateDataSourceFromRedshiftOutput",
"documentation":"<p> Represents the output of a <code>CreateDataSourceFromRedshift</code> operation, and is an acknowledgement that Amazon ML received the request.</p> <p>The <code>CreateDataSourceFromRedshift</code> operation is asynchronous. You can poll for updates by using the <code>GetDataSource</code> operation and checking the <code>Status</code> parameter. </p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
},
{
"shape":"IdempotentParameterMismatchException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.</p>"
}
],
"documentation":"<p>Creates a <code>DataSource</code> from a database hosted on an Amazon Redshift cluster. A <code>DataSource</code> references data that can be used to perform either <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code> operations.</p> <p><code>CreateDataSourceFromRedshift</code> is an asynchronous operation. In response to <code>CreateDataSourceFromRedshift</code>, Amazon Machine Learning (Amazon ML) immediately returns and sets the <code>DataSource</code> status to <code>PENDING</code>. After the <code>DataSource</code> is created and ready for use, Amazon ML sets the <code>Status</code> parameter to <code>COMPLETED</code>. <code>DataSource</code> in <code>COMPLETED</code> or <code>PENDING</code> states can be used to perform only <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code> operations. </p> <p> If Amazon ML can't accept the input source, it sets the <code>Status</code> parameter to <code>FAILED</code> and includes an error message in the <code>Message</code> attribute of the <code>GetDataSource</code> operation response. </p> <p>The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a <code>SelectSqlQuery</code> query. Amazon ML executes an <code>Unload</code> command in Amazon Redshift to transfer the result set of the <code>SelectSqlQuery</code> query to <code>S3StagingLocation</code>.</p> <p>After the <code>DataSource</code> has been created, it's ready for use in evaluations and batch predictions. If you plan to use the <code>DataSource</code> to train an <code>MLModel</code>, the <code>DataSource</code> also requires a recipe. A recipe describes how each input variable will be used in training an <code>MLModel</code>. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.</p> <p>You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call <code>GetDataSource</code> for an existing datasource and copy the values to a <code>CreateDataSource</code> call. Change the settings that you want to change and make sure that all required fields have the appropriate values.</p>"
},
"CreateDataSourceFromS3":{
"name":"CreateDataSourceFromS3",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateDataSourceFromS3Input"},
"output":{
"shape":"CreateDataSourceFromS3Output",
"documentation":"<p> Represents the output of a <code>CreateDataSourceFromS3</code> operation, and is an acknowledgement that Amazon ML received the request.</p> <p>The <code>CreateDataSourceFromS3</code> operation is asynchronous. You can poll for updates by using the <code>GetDataSource</code> operation and checking the <code>Status</code> parameter. </p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
},
{
"shape":"IdempotentParameterMismatchException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.</p>"
}
],
"documentation":"<p>Creates a <code>DataSource</code> object. A <code>DataSource</code> references data that can be used to perform <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code> operations.</p> <p><code>CreateDataSourceFromS3</code> is an asynchronous operation. In response to <code>CreateDataSourceFromS3</code>, Amazon Machine Learning (Amazon ML) immediately returns and sets the <code>DataSource</code> status to <code>PENDING</code>. After the <code>DataSource</code> has been created and is ready for use, Amazon ML sets the <code>Status</code> parameter to <code>COMPLETED</code>. <code>DataSource</code> in the <code>COMPLETED</code> or <code>PENDING</code> state can be used to perform only <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code> operations. </p> <p> If Amazon ML can't accept the input source, it sets the <code>Status</code> parameter to <code>FAILED</code> and includes an error message in the <code>Message</code> attribute of the <code>GetDataSource</code> operation response. </p> <p>The observation data used in a <code>DataSource</code> should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the <code>DataSource</code>. </p> <p>After the <code>DataSource</code> has been created, it's ready to use in evaluations and batch predictions. If you plan to use the <code>DataSource</code> to train an <code>MLModel</code>, the <code>DataSource</code> also needs a recipe. A recipe describes how each input variable will be used in training an <code>MLModel</code>. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.</p>"
},
"CreateEvaluation":{
"name":"CreateEvaluation",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateEvaluationInput"},
"output":{
"shape":"CreateEvaluationOutput",
"documentation":"<p> Represents the output of a <code>CreateEvaluation</code> operation, and is an acknowledgement that Amazon ML received the request.</p> <p>The <code>CreateEvaluation</code> operation is asynchronous. You can poll for status updates by using the <code>GetEvaluation</code> operation and checking the <code>Status</code> parameter. </p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
},
{
"shape":"IdempotentParameterMismatchException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.</p>"
}
],
"documentation":"<p>Creates a new <code>Evaluation</code> of an <code>MLModel</code>. An <code>MLModel</code> is evaluated on a set of observations associated with a <code>DataSource</code>. Like a <code>DataSource</code> for an <code>MLModel</code>, the <code>DataSource</code> for an <code>Evaluation</code> contains values for the <code>Target Variable</code>. The <code>Evaluation</code> compares the predicted result for each observation to the actual outcome and provides a summary so that you know how well the <code>MLModel</code> performs on the test data. The evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding <code>MLModelType</code>: <code>BINARY</code>, <code>REGRESSION</code>, or <code>MULTICLASS</code>. </p> <p><code>CreateEvaluation</code> is an asynchronous operation. In response to <code>CreateEvaluation</code>, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to <code>PENDING</code>. After the <code>Evaluation</code> is created and ready for use, Amazon ML sets the status to <code>COMPLETED</code>. </p> <p>You can use the <code>GetEvaluation</code> operation to check the progress of the evaluation during the creation operation.</p>"
},
"CreateMLModel":{
"name":"CreateMLModel",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateMLModelInput"},
"output":{
"shape":"CreateMLModelOutput",
"documentation":"<p> Represents the output of a <code>CreateMLModel</code> operation, and is an acknowledgement that Amazon ML received the request.</p> <p>The <code>CreateMLModel</code> operation is asynchronous. You can poll for status updates by using the <code>GetMLModel</code> operation and checking the <code>Status</code> parameter. </p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
},
{
"shape":"IdempotentParameterMismatchException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.</p>"
}
],
"documentation":"<p>Creates a new <code>MLModel</code> using the <code>DataSource</code> and the recipe as information sources. </p> <p>An <code>MLModel</code> is nearly immutable. Users can update only the <code>MLModelName</code> and the <code>ScoreThreshold</code> in an <code>MLModel</code> without creating a new <code>MLModel</code>. </p> <p><code>CreateMLModel</code> is an asynchronous operation. In response to <code>CreateMLModel</code>, Amazon Machine Learning (Amazon ML) immediately returns and sets the <code>MLModel</code> status to <code>PENDING</code>. After the <code>MLModel</code> has been created and is ready for use, Amazon ML sets the status to <code>COMPLETED</code>. </p> <p>You can use the <code>GetMLModel</code> operation to check the progress of the <code>MLModel</code> during the creation operation.</p> <p> <code>CreateMLModel</code> requires a <code>DataSource</code> with computed statistics, which can be created by setting <code>ComputeStatistics</code> to <code>true</code> in the <code>CreateDataSourceFromRDS</code>, <code>CreateDataSourceFromS3</code>, or <code>CreateDataSourceFromRedshift</code> operations. </p>"
},
"CreateRealtimeEndpoint":{
"name":"CreateRealtimeEndpoint",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"CreateRealtimeEndpointInput"},
"output":{
"shape":"CreateRealtimeEndpointOutput",
"documentation":"<p>Represents the output of a <code>CreateRealtimeEndpoint</code> operation.</p> <p>The result contains the <code>MLModelId</code> and the endpoint information for the <code>MLModel</code>.</p> <note> <p>The endpoint information includes the URI of the <code>MLModel</code>; that is, the location to send online prediction requests for the specified <code>MLModel</code>.</p> </note>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"ResourceNotFoundException",
"error":{"httpStatusCode":404},
"exception":true,
"documentation":"<p>A specified resource cannot be located.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
}
],
"documentation":"<p>Creates a real-time endpoint for the <code>MLModel</code>. The endpoint contains the URI of the <code>MLModel</code>; that is, the location to send real-time prediction requests for the specified <code>MLModel</code>.</p>"
},
"DeleteBatchPrediction":{
"name":"DeleteBatchPrediction",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteBatchPredictionInput"},
"output":{
"shape":"DeleteBatchPredictionOutput",
"documentation":"<p> Represents the output of a <code>DeleteBatchPrediction</code> operation.</p> <p>You can use the <code>GetBatchPrediction</code> operation and check the value of the <code>Status</code> parameter to see whether a <code>BatchPrediction</code> is marked as <code>DELETED</code>.</p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"
},
{
"shape":"ResourceNotFoundException",
"error":{"httpStatusCode":404},
"exception":true,
"documentation":"<p>A specified resource cannot be located.</p>"
},
{
"shape":"InternalServerException",
"error":{"httpStatusCode":500},
"exception":true,
"fault":true,
"documentation":"<p>An error on the server occurred when trying to process a request.</p>"
}
],
"documentation":"<p>Assigns the DELETED status to a <code>BatchPrediction</code>, rendering it unusable.</p> <p>After using the <code>DeleteBatchPrediction</code> operation, you can use the <a>GetBatchPrediction</a> operation to verify that the status of the <code>BatchPrediction</code> changed to DELETED.</p> <p><b>Caution:</b> The result of the <code>DeleteBatchPrediction</code> operation is irreversible.</p>"
},
"DeleteDataSource":{
"name":"DeleteDataSource",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"DeleteDataSourceInput"},
"output":{
"shape":"DeleteDataSourceOutput",
"documentation":"<p> Represents the output of a <code>DeleteDataSource</code> operation.</p>"
},
"errors":[
{
"shape":"InvalidInputException",
"error":{"httpStatusCode":400},
"exception":true,
"documentation":"<p>An error on the client occurred. Typically, the cause is an invalid input value.</p>"