class Aws::S3::Types::UploadPartCopyRequest
@see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopyRequest AWS API Documentation
@!attribute [rw] expected_source_bucket_owner
The account ID of the expected source bucket owner. If the account
ID that you provide does not match the actual owner of the source
bucket, the request fails with the HTTP status code `403 Forbidden`
(access denied).
@return [String]
@!attribute [rw] expected_bucket_owner
The account ID of the expected destination bucket owner. If the
account ID that you provide does not match the actual owner of the
destination bucket, the request fails with the HTTP status code `403
Forbidden` (access denied).
@return [String]
@!attribute [rw] request_payer
Confirms that the requester knows that they will be charged for the
request. Bucket owners need not specify this parameter in their
requests. If either the source or destination S3 bucket has
Requester Pays enabled, the requester will pay for corresponding
charges to copy the object. For information about downloading
objects from Requester Pays buckets, see [Downloading Objects in
Requester Pays Buckets][1] in the *Amazon S3 User Guide*.

<note markdown="1"> This functionality is not supported for directory buckets.
</note>

[1]: https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html
@return [String]
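
A minimal sketch of how the ownership checks and the Requester Pays acknowledgement above appear in a call to `Aws::S3::Client#upload_part_copy`; the region, bucket names, key, upload ID, and account IDs are placeholders:

    require "aws-sdk-s3"

    s3 = Aws::S3::Client.new(region: "us-west-2")

    # Placeholder identifiers; substitute your own bucket names, key,
    # upload ID, and the 12-digit account IDs you expect to own each bucket.
    s3.upload_part_copy(
      bucket: "amzn-s3-demo-destination-bucket",
      key: "large-object",
      copy_source: "amzn-s3-demo-source-bucket/large-object",
      part_number: 1,
      upload_id: "EXAMPLE-UPLOAD-ID",
      expected_bucket_owner: "111122223333",        # owner of the destination bucket
      expected_source_bucket_owner: "444455556666", # owner of the source bucket
      request_payer: "requester"                    # acknowledge Requester Pays charges
    )
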
@!attribute [rw] copy_source_sse_customer_key_md5
Specifies the 128-bit MD5 digest of the encryption key according to
RFC 1321. Amazon S3 uses this header for a message integrity check
to ensure that the encryption key was transmitted without error.

<note markdown="1"> This functionality is not supported when the source object is in a
directory bucket.
</note>
@return [String]
@!attribute [rw] copy_source_sse_customer_key
Specifies the customer-provided encryption key for Amazon S3 to use
to decrypt the source object. The encryption key provided in this
header must be one that was used when the source object was created.

<note markdown="1"> This functionality is not supported when the source object is in a
directory bucket.
</note>
@return [String]
@!attribute [rw] copy_source_sse_customer_algorithm
Specifies the algorithm to use when decrypting the source object
(for example, `AES256`).

<note markdown="1"> This functionality is not supported when the source object is in a
directory bucket.
</note>
@return [String]
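
If the source object was stored with SSE-C, the same key material has to be supplied so Amazon S3 can decrypt it during the copy. A sketch of preparing these three values; the key itself is a placeholder loaded from the environment:

    require "digest"
    require "base64"

    # Placeholder: the same 256-bit key that was used when the source object was written.
    source_key = ENV.fetch("SOURCE_SSEC_KEY") # 32 raw bytes

    source_sse_params = {
      copy_source_sse_customer_algorithm: "AES256",
      copy_source_sse_customer_key: source_key,
      # Base64-encoded MD5 digest of the key, used by S3 as an integrity check.
      copy_source_sse_customer_key_md5: Base64.strict_encode64(Digest::MD5.digest(source_key))
    }
    # Merge these into the upload_part_copy parameters alongside bucket, key,
    # upload_id, part_number, and copy_source.
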
@!attribute [rw] sse_customer_key_md5
Specifies the 128-bit MD5 digest of the encryption key according to
RFC 1321. Amazon S3 uses this header for a message integrity check
to ensure that the encryption key was transmitted without error.

<note markdown="1"> This functionality is not supported when the destination bucket is a
directory bucket.
</note>
@return [String]
@!attribute [rw] sse_customer_key
Specifies the customer-provided encryption key for Amazon S3 to use
in encrypting data. This value is used to store the object and then
it is discarded; Amazon S3 does not store the encryption key. The
key must be appropriate for use with the algorithm specified in the
`x-amz-server-side-encryption-customer-algorithm` header. This must
be the same encryption key specified in the initiate multipart
upload request.

<note markdown="1"> This functionality is not supported when the destination bucket is a
directory bucket.
</note>
@return [String]
@!attribute [rw] sse_customer_algorithm
Specifies the algorithm to use when encrypting the object (for
example, AES256).

<note markdown="1"> This functionality is not supported when the destination bucket is a
directory bucket.
</note>
@return [String]
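
Because the part copy must use the same customer key that was given when the multipart upload was initiated, one way to structure this (a sketch only; the key handling and names are placeholders) is to build the SSE-C options once and reuse them for both calls:

    require "aws-sdk-s3"
    require "digest"
    require "base64"

    s3 = Aws::S3::Client.new(region: "us-west-2")

    dest_key = ENV.fetch("DEST_SSEC_KEY") # placeholder: 32 raw bytes of key material
    sse_opts = {
      sse_customer_algorithm: "AES256",
      sse_customer_key: dest_key,
      sse_customer_key_md5: Base64.strict_encode64(Digest::MD5.digest(dest_key))
    }

    # The same options are sent when the upload is initiated...
    mpu = s3.create_multipart_upload(
      { bucket: "amzn-s3-demo-destination-bucket", key: "big-object" }.merge(sse_opts)
    )

    # ...and again on every part copy, so S3 writes each part with the same key.
    s3.upload_part_copy({
      bucket: "amzn-s3-demo-destination-bucket",
      key: "big-object",
      copy_source: "amzn-s3-demo-source-bucket/big-object",
      part_number: 1,
      upload_id: mpu.upload_id,
      copy_source_range: "bytes=0-5242879"
    }.merge(sse_opts))
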
@!attribute [rw] upload_id
Upload ID identifying the multipart upload whose part is being
copied.
@return [String]
@!attribute [rw] part_number
Part number of the part being copied. This is a positive integer
between 1 and 10,000.
@return [Integer]
@!attribute [rw] key
Object key for which the multipart upload was initiated.
@return [String]
@!attribute [rw] copy_source_range
The range of bytes to copy from the source object. The range value
must use the form `bytes=first-last`, where the first and last are the
zero-based byte offsets to copy. For example, `bytes=0-9` indicates
that you want to copy the first 10 bytes of the source. You can copy
a range only if the source object is greater than 5 MB.
@return [String]
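
To show how the part number, upload ID, and `bytes=first-last` range fit together, here is a sketch that copies a source object in fixed-size parts. The bucket names and key are placeholders, and the 100 MB part size is an illustrative choice, not a requirement beyond the multipart limits noted above:

    require "aws-sdk-s3"

    s3 = Aws::S3::Client.new(region: "us-west-2")

    src_bucket = "amzn-s3-demo-source-bucket"       # placeholder names
    dst_bucket = "amzn-s3-demo-destination-bucket"
    key        = "big-object"

    object_size = s3.head_object(bucket: src_bucket, key: key).content_length
    part_size   = 100 * 1024 * 1024 # every part except the last must be at least 5 MB

    mpu = s3.create_multipart_upload(bucket: dst_bucket, key: key)

    parts = []
    part_number = 1
    offset = 0
    while offset < object_size
      last = [offset + part_size, object_size].min - 1 # inclusive, zero-based offset
      resp = s3.upload_part_copy(
        bucket: dst_bucket,
        key: key,
        copy_source: "#{src_bucket}/#{key}",
        copy_source_range: "bytes=#{offset}-#{last}",
        part_number: part_number,
        upload_id: mpu.upload_id
      )
      parts << { etag: resp.copy_part_result.etag, part_number: part_number }
      part_number += 1
      offset = last + 1
    end

    s3.complete_multipart_upload(
      bucket: dst_bucket,
      key: key,
      upload_id: mpu.upload_id,
      multipart_upload: { parts: parts }
    )
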
@!attribute [rw] copy_source_if_unmodified_since
Copies the object if it hasn't been modified since the specified
time.

If both of the `x-amz-copy-source-if-match` and
`x-amz-copy-source-if-unmodified-since` headers are present in the
request as follows:

`x-amz-copy-source-if-match` condition evaluates to `true`, and;

`x-amz-copy-source-if-unmodified-since` condition evaluates to
`false`;

Amazon S3 returns `200 OK` and copies the data.
@return [Time]
@!attribute [rw] copy_source_if_none_match
Copies the object if its entity tag (ETag) is different than the
specified ETag.

If both of the `x-amz-copy-source-if-none-match` and
`x-amz-copy-source-if-modified-since` headers are present in the
request as follows:

`x-amz-copy-source-if-none-match` condition evaluates to `false`,
and;

`x-amz-copy-source-if-modified-since` condition evaluates to `true`;

Amazon S3 returns the `412 Precondition Failed` response code.
@return [String]
@!attribute [rw] copy_source_if_modified_since
Copies the object if it has been modified since the specified time.

If both of the `x-amz-copy-source-if-none-match` and
`x-amz-copy-source-if-modified-since` headers are present in the
request as follows:

`x-amz-copy-source-if-none-match` condition evaluates to `false`,
and;

`x-amz-copy-source-if-modified-since` condition evaluates to `true`;

Amazon S3 returns the `412 Precondition Failed` response code.
@return [Time]
@!attribute [rw] copy_source_if_match
Copies the object if its entity tag (ETag) matches the specified
tag.

If both of the `x-amz-copy-source-if-match` and
`x-amz-copy-source-if-unmodified-since` headers are present in the
request as follows:

`x-amz-copy-source-if-match` condition evaluates to `true`, and;

`x-amz-copy-source-if-unmodified-since` condition evaluates to
`false`;

Amazon S3 returns `200 OK` and copies the data.
@return [String]
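
As a sketch of how the conditional headers above combine, this copies a part only when the source still has a known ETag and has not changed since a given time; the ETag, timestamp, names, and upload ID are placeholders:

    require "aws-sdk-s3"
    require "time"

    s3 = Aws::S3::Client.new(region: "us-west-2")

    s3.upload_part_copy(
      bucket: "amzn-s3-demo-destination-bucket",
      key: "big-object",
      copy_source: "amzn-s3-demo-source-bucket/big-object",
      part_number: 1,
      upload_id: "EXAMPLE-UPLOAD-ID",                            # placeholder
      copy_source_if_match: '"9b2cf535f27731c974343645a3985328"', # ETag observed earlier (placeholder)
      copy_source_if_unmodified_since: Time.parse("2025-01-01T00:00:00Z")
    )
    # When the copy_source_if_match condition holds as described above, S3
    # returns 200 OK and copies the part; otherwise the precondition fails.
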
@!attribute [rw] copy_source
Specifies the source object for the copy operation. You specify the
value in one of two formats, depending on whether you want to access
the source object through an [access point][1]:

* For objects not accessed through an access point, specify the name
  of the source bucket and key of the source object, separated by a
  slash (/). For example, to copy the object `reports/january.pdf`
  from the bucket `awsexamplebucket`, use
  `awsexamplebucket/reports/january.pdf`. The value must be
  URL-encoded.

* For objects accessed through access points, specify the Amazon
  Resource Name (ARN) of the object as accessed through the access
  point, in the format
  `arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>`.
  For example, to copy the object `reports/january.pdf` through
  access point `my-access-point` owned by account `123456789012` in
  Region `us-west-2`, use the URL encoding of
  `arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf`.
  The value must be URL-encoded.

  <note markdown="1"> * Amazon S3 supports copy operations using Access points only when
  the source and destination buckets are in the same Amazon Web
  Services Region.

  * Access points are not supported by directory buckets.

  </note>

  Alternatively, for objects accessed through Amazon S3 on Outposts,
  specify the ARN of the object as accessed in the format
  `arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>`.
  For example, to copy the object `reports/january.pdf` through
  outpost `my-outpost` owned by account `123456789012` in Region
  `us-west-2`, use the URL encoding of
  `arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf`.
  The value must be URL-encoded.

If your bucket has versioning enabled, you could have multiple
versions of the same object. By default, `x-amz-copy-source`
identifies the current version of the source object to copy. To copy
a specific version of the source object, append
`?versionId=<version-id>` to the `x-amz-copy-source` request header
(for example, `x-amz-copy-source:
/awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893`).

If the current version is a delete marker and you don't specify a
versionId in the `x-amz-copy-source` request header, Amazon S3
returns a `404 Not Found` error, because the object does not exist.
If you specify versionId in the `x-amz-copy-source` and the
versionId is a delete marker, Amazon S3 returns an HTTP `400 Bad
Request` error, because you are not allowed to specify a delete
marker as a version for the `x-amz-copy-source`.

<note markdown="1"> **Directory buckets** - S3 Versioning isn't enabled and supported
for directory buckets.

</note>

[1]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points.html
@return [String]
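
A sketch of assembling `copy_source` values for the formats above. `ERB::Util.url_encode` is just one way to URL-encode, the bucket, key, ARN, and access point names are placeholders, and how much encoding the client applies for you can vary, so treat this as an illustration of the header format rather than a canonical helper:

    require "erb"

    # Bucket/key form: URL-encode each key segment, keeping the "/" separators.
    bucket = "amzn-s3-demo-source-bucket"   # placeholder
    key    = "reports/q1 summary.pdf"       # placeholder
    encoded_key = key.split("/").map { |s| ERB::Util.url_encode(s) }.join("/")
    copy_source = "#{bucket}/#{encoded_key}"
    # => "amzn-s3-demo-source-bucket/reports/q1%20summary.pdf"

    # Copy a specific version of the source object.
    copy_source_versioned = "#{copy_source}?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893"

    # Access point ARN form described above (placeholder account ID and access point name).
    copy_source_via_access_point =
      "arn:aws:s3:us-west-2:111122223333:accesspoint/my-access-point/object/reports/january.pdf"
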
@!attribute [rw] bucket
The bucket name.

**Directory buckets** - When you use this operation with a directory
bucket, you must use virtual-hosted-style requests in the format
`Bucket-name.s3express-zone-id.region-code.amazonaws.com`. Path-style
requests are not supported. Directory bucket names must be unique in
the chosen Zone (Availability Zone or Local Zone). Bucket names must
follow the format `bucket-base-name--zone-id--x-s3` (for example,
`amzn-s3-demo-bucket--usw2-az1--x-s3`). For information about bucket
naming restrictions, see [Directory bucket naming rules][1] in the
*Amazon S3 User Guide*.

<note markdown="1"> Copying objects across different Amazon Web Services Regions isn't
supported when the source or destination bucket is in Amazon Web
Services Local Zones. The source and destination buckets must have
the same parent Amazon Web Services Region. Otherwise, you get an
HTTP `400 Bad Request` error with the error code `InvalidRequest`.

</note>

**Access points** - When you use this action with an access point,
you must provide the alias of the access point in place of the
bucket name or specify the access point ARN. When using the access
point ARN, you must direct requests to the access point hostname.
The access point hostname takes the form
AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
When using this action with an access point through the Amazon Web
Services SDKs, you provide the access point ARN in place of the
bucket name. For more information about access point ARNs, see
[Using access points][2] in the *Amazon S3 User Guide*.

<note markdown="1"> Access points and Object Lambda access points are not supported by
directory buckets.

</note>

**S3 on Outposts** - When you use this action with S3 on Outposts,
you must direct requests to the S3 on Outposts hostname. The S3 on
Outposts hostname takes the form
`AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com`.
When you use this action with S3 on Outposts, the destination bucket
must be the Outposts access point ARN or the access point alias. For
more information about S3 on Outposts, see [What is S3 on
Outposts?][3] in the *Amazon S3 User Guide*.

[1]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-bucket-naming-rules.html
[2]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html
[3]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html
@return [String]
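
To tie the naming cases above together, here is a rough illustration of the kinds of values the `bucket` member can take; every name, account ID, outpost ID, and ARN shown is a placeholder:

    # General purpose bucket, addressed by name.
    bucket = "amzn-s3-demo-destination-bucket"

    # Directory bucket (virtual-hosted-style requests only).
    bucket = "amzn-s3-demo-bucket--usw2-az1--x-s3"

    # Access point ARN (not supported for directory buckets).
    bucket = "arn:aws:s3:us-west-2:111122223333:accesspoint/my-access-point"

    # S3 on Outposts access point ARN.
    bucket = "arn:aws:s3-outposts:us-west-2:111122223333:outpost/op-01234567890123456/accesspoint/my-access-point"
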