
How to restore folders (or entire buckets) to Amazon S3 from Glacier?

I changed the lifecycle for a bunch of my buckets on Amazon S3 so their storage class was set to Glacier. I did this using the online AWS Console. I now need those files again.

I know how to restore them back to S3 per file. But my buckets have thousands of files. I wanted to see if there was a way to restore the entire bucket back to S3, just like there was a way to send the entire bucket to Glacier?

I'm guessing there's a way to program a solution. But I wanted to see if there was a way to do it in the Console. Or with another program? Or something else I might be missing?


Nate Fox

If you use s3cmd you can use it to restore recursively pretty easily:

s3cmd restore --recursive s3://mybucketname/ 

I've also used it to restore just folders as well:

s3cmd restore --recursive s3://mybucketname/folder/

For Mac OS X users: simply download s3cmd, unzip it, and run "sudo python setup.py install". To include your IAM (AWS) keys in the command, run s3cmd restore --recursive --access_key={your access key here} --secret_key={your secret key here} s3://ms4pro/
Which version of s3cmd has the restore option?
-D NUM, --restore-days=NUM Number of days to keep restored file available (only for 'restore' command).
You can also specify archive retrieval options (expedited, standard, bulk) for the 'restore' command by adding --restore-priority=bulk, as described here.
Sample: s3cmd restore --recursive s3:///folder/ --restore-days=10 --restore-priority=standard
Tomasz

If you're using the AWS CLI tool (it's nice, you should), you can do it like this:

aws s3 ls s3://<BUCKET_NAME> --recursive | awk '{print $4}' | xargs -L 1 aws s3api restore-object --restore-request '{"Days":<DAYS>,"GlacierJobParameters":{"Tier":"<TIER>"}}' --bucket <BUCKET_NAME> --key

Replace <BUCKET_NAME> with the bucket name you want and provide restore parameters <DAYS> and <TIER>.

<DAYS> is the number of days you want to restore the object for, and <TIER> controls the speed of the restore process; it has three levels: Bulk, Standard, or Expedited.


Thanks for this answer. I will add that this solution only works if the keys do not have spaces in them! In order to handle spaces you would need to replace your awk command with awk '{print substr($0, index($0, $4))}' Thanks to stackoverflow.com/questions/13446255/…
AND you need to use xargs -I %%% -L 1 aws s3api restore-object --restore-request Days= --bucket --key "%%%" so that you quote the string containing the spaces as part of the restore command.
@tomstratton The -L 1 flag conflicts with -I %%% and should be removed. Unrelated: the -t flag may come in handy to track the progress.
So, the final command that works with spaces well is: aws s3 ls s3://<BUCKET_NAME> --recursive | awk '{print substr($0, index($0, $4))}' | xargs -I %%% aws s3api restore-object --restore-request '{"Days":<DAYS>,"GlacierJobParameters":{"Tier":"<TIER>"}}' --bucket <BUCKET_NAME> --key "%%%"
David

The above answers didn't work well for me because my bucket contained a mix of objects on Glacier and objects that were not. The easiest thing for me was to create a list of all GLACIER objects in the bucket, then attempt to restore each one individually, ignoring any errors (like "already in progress", "not an object", etc.).

Get a listing of all GLACIER files (keys) in the bucket:

aws s3api list-objects-v2 --bucket <bucketName> --query "Contents[?StorageClass=='GLACIER']" --output text | awk '{print $2}' > glacier-restore.txt

Then create a shell script and run it, replacing <bucketName> with your bucket name:

#!/bin/sh

for x in `cat glacier-restore.txt`
do
  echo "Begin restoring $x"
  aws s3api restore-object --restore-request Days=7 --bucket <bucketName> --key "$x"
  echo "Done restoring $x"
done

Credit goes to Josh at http://capnjosh.com/blog/a-client-error-invalidobjectstate-occurred-when-calling-the-copyobject-operation-operation-is-not-valid-for-the-source-objects-storage-class/, a resource I found after trying some of the above solutions.


Try awk 'BEGIN {FS="\t"}; {print $2}' instead to deal with files with spaces in them
This is the best answer IMHO; it's also good to review the (potentially huge) list of objects before doing an expensive operation.
You need to use DEEP_ARCHIVE instead of GLACIER to catch files in Glacier Deep Archive.
To specify the restore priority, you can swap out the aws s3api call with s3cmd: s3cmd restore --restore-days=7 --restore-priority=bulk "s3://$bucket_name/$x"
@taltman, this works just fine without s3cmd: aws s3api restore-object --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}' --bucket mfx-prod --key "$x"
Michael - sqlbot

There isn't a built-in tool for this. "Folders" in S3 are an illusion for human convenience, based on forward slashes in the object key (path/filename), and every object that migrates to Glacier has to be restored individually, although...

Of course you could write a script to iterate through the hierarchy and send those restore requests using the SDKs or the REST API in your programming language of choice.
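
For illustration, here is a minimal boto3 (Python SDK) sketch of that approach. The bucket name, prefix, number of days, and tier below are placeholders, not values prescribed by this answer:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"      # placeholder bucket name
prefix = "some/prefix/"   # placeholder; use "" for the whole bucket

# Walk the bucket (or a prefix) and request a restore for every archived object.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass") != "GLACIER":
            continue  # only archived objects need a restore request
        try:
            s3.restore_object(
                Bucket=bucket,
                Key=obj["Key"],
                RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
            )
        except ClientError as err:
            # e.g. RestoreAlreadyInProgress; log it and keep going
            print("Skipping {}: {}".format(obj["Key"], err))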

Be sure you understand how restoring from glacier into S3 works, before you proceed. It is always only a temporary restoration, and you choose the number of days that each object will persist in S3 before reverting back to being only stored in glacier.

Also, you want to be certain that you understand the penalty charges for restoring too much glacier data in a short period of time, or you could be in for some unexpected expense. Depending on the urgency, you may want to spread the restore operation out over days or weeks.


Thank you for the comment to pay attention to cost - almost made a drastic mistake there.
While this approach works, if you have a directory structure with hundreds of thousands of files (archives) it could take days to send all those REST API requests.
@zyamys the operation can be optimized by using parallel processes, threads, or multiple concurrent requests in a non-blocking environment... and, of course, running the code in EC2 in the same region will minimize the round-trip time compared to running it externally. S3 should easily handle 100 req/sec, more so if you process the keys in non-lexical order, since that reduces the chance of hitting index hot spots.
AWS have revised Glacier restoration charges; now it's a simple per-Gigabyte restore cost (with three tiers based on urgency or lack thereof).
SR.

I recently needed to restore a whole bucket and all its files and folders. You will need s3cmd and aws cli tools configured with your credentials to run this.

I've found this pretty robust at handling errors for specific objects in the bucket that might already have had a restore request.

#!/bin/sh

# This will give you a nice list of all objects in the bucket with the bucket name stripped out
s3cmd ls -r s3://<your-bucket-name> | awk '{print $4}' | sed 's#s3://<your-bucket-name>/##' > glacier-restore.txt

for x in `cat glacier-restore.txt`
do
    echo "restoring $x"
    aws s3api restore-object --restore-request Days=7 --bucket <your-bucket-name> --profile <your-aws-credentials-profile> --key "$x"
done

alexandernst

Here is my version using the AWS CLI to restore data from Glacier. I modified some of the above examples to work when the keys of the files to be restored contain spaces.

# Parameters
BUCKET="my-bucket" # the bucket you want to restore, no s3:// no slashes
BPATH="path/in/bucket/" # the objects prefix you wish to restore (mind the `/`) 
DAYS=1 # For how many days you wish to restore the data.

# Restore the objects
aws s3 ls s3://${BUCKET}/${BPATH} --recursive | \
awk '{out=""; for(i=4;i<=NF;i++){out=out" "$i}; print out}'| \
xargs -I {} aws s3api restore-object --restore-request Days=${DAYS} \
--bucket ${BUCKET} --key "{}"

TylerW

It looks like S3 Browser can "restore from Glacier" at the folder level, but not at the bucket level. The only catch is that you have to buy the Pro version, so it's not the best solution.


The free and portable version can also initiate a restore from a folder. It then queues tasks to restore each individual file.
dannyman

A variation on Dustin's answer using the AWS CLI, but with recursion and piping to sh to skip errors (e.g. if some objects have already had a restore requested...)

BUCKET=my-bucket
BPATH=/path/in/bucket
DAYS=1
aws s3 ls s3://$BUCKET$BPATH --recursive | awk '{print $4}' | xargs -L 1 \
 echo aws s3api restore-object --restore-request Days=$DAYS \
 --bucket $BUCKET --key | sh

The xargs echo bit generates a list of "aws s3api restore-object" commands and by piping that to sh, you can continue on error.

NOTE: the Ubuntu 14.04 aws-cli package is old. In order to use --recursive you'll need to install via GitHub.

POSTSCRIPT: Glacier restores can get unexpectedly pricey really quickly. Depending on your use case, you may find the Infrequent Access tier to be more appropriate. AWS have a nice explanation of the different tiers.


With the new pricing tiers, you can use the bulk retrieval method to keep costs under control: aws.amazon.com/glacier/pricing
Hey @AnaTodor, could you give an example retrieving a full folder in bulk mode with aws cli? Thanks a lot! :)
@marcostvz any of the solutions above will work. But besides the Days parameter you also need to specify GlacierJobParameters={Tier="Bulk"}. See the shorthand syntax here: docs.aws.amazon.com/cli/latest/reference/s3api/…
Nice @AnaTodor, and should I request the bulk tier file by file or can I provide a list of files or even a folder to restore? My main goal with this is to avoid making many requests and try to be only billed once. :)
@marcostvz Unfortunately, requests are made only per object/file. If you want to restore a whole bucket you have to recursively traverse the bucket and issue a request for each object, just as described above. To save more on cost, you are advised to merge/zip files before archiving them to Glacier. For example, bulk restoring 30 TB of data costs around 75 USD at the new prices, but if those TB consist of 60 million files, you will pay 1500 USD on top for the requests.
Jason Leach

This command worked for me:

aws s3api list-objects-v2 \
--bucket BUCKET_NAME \
--query "Contents[?StorageClass=='GLACIER']" \
--output text | \
awk -F $'\t' '{print $2}' | \
tr '\n' '\0' | \
xargs -L 1 -0 \
aws s3api restore-object \
--restore-request Days=7 \
--bucket BUCKET_NAME \
--key

ProTip

This command can take quite a while if you have lots of objects.

Don't CTRL-C / break the command, otherwise you'll have to wait for the processed objects to move out of the RestoreAlreadyInProgress state before you can re-run it. It can take a few hours for the state to transition. You'll see this error message if you need to wait: An error occurred (RestoreAlreadyInProgress) when calling the RestoreObject operation
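
If you do need to re-run, one way to skip objects whose restore was already requested is to check each object's Restore header first. A boto3 sketch, with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")
head = s3.head_object(Bucket="my-bucket", Key="path/to/object")  # placeholders
restore_header = head.get("Restore")  # e.g. 'ongoing-request="true"', or absent
if restore_header is not None and 'ongoing-request="true"' in restore_header:
    print("Restore already in progress; skip this key")
else:
    print("Safe to issue (or re-issue) a restore-object request for this key")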


bownie

I've been through this mill today and came up with the following, based on the answers above and having also tried s3cmd. s3cmd doesn't work for mixed buckets (Glacier and Standard). This will do what you need in two steps: first create a list of Glacier files, then fire off the s3 cli requests (even if they have already occurred). It also keeps track of which files have already been requested, so you can restart the script as necessary. Note that the listing below relies on TAB being cut's default field delimiter:

#!/bin/sh

bucket="$1"
glacier_file_list="glacier-restore-me-please.txt"
glacier_file_done="glacier-requested-restore-already.txt"

if [ "X${bucket}" = "X" ]
then
  echo "Please supply bucket name as first argument"
  exit 1
fi

aws s3api list-objects-v2 --bucket ${bucket} --query "Contents[?StorageClass=='GLACIER']" --output text | cut -f 2 > ${glacier_file_list}

if [ $? -ne 0 ]
then
  echo "Failed to fetch list of objects from bucket ${bucket}"
  exit 1
fi

echo "Got list of glacier files from bucket ${bucket}"

while read x
do
  echo "Begin restoring $x"
  aws s3api restore-object --restore-request Days=7 --bucket ${bucket} --key "$x"

  if [ $? -ne 0 ]
  then
    echo "Failed to restore \"$x\""
  else
    echo "Done requested restore of \"$x\""
  fi

  # Log those done
  #
  echo "$x" >> ${glacier_file_done}

done < ${glacier_file_list}

Kyle Bridenstine

I wrote a program in Python to recursively restore folders. The s3cmd command above didn't work for me, and neither did the awk command.

You can run it like python3 /home/ec2-user/recursive_restore.py -- restore, and to monitor the restore status use python3 /home/ec2-user/recursive_restore.py -- status

import argparse
import sys
from datetime import datetime

import boto3

__author__ = "kyle.bridenstine"


def reportStatuses(
    operation,
    type,
    successOperation,
    folders,
    restoreFinished,
    restoreInProgress,
    restoreNotRequestedYet,
    restoreStatusUnknown,
    skippedFolders,
):
    """
    reportStatuses gives a generic, aggregated report for all operations (Restore, Status, Download)
    """

    report = 'Status Report For "{}" Operation. Of the {} total {}, {} are finished being {}, {} have a restore in progress, {} have not been requested to be restored yet, {} reported an unknown restore status, and {} were asked to be skipped.'.format(
        operation,
        str(len(folders)),
        type,
        str(len(restoreFinished)),
        successOperation,
        str(len(restoreInProgress)),
        str(len(restoreNotRequestedYet)),
        str(len(restoreStatusUnknown)),
        str(len(skippedFolders)),
    )

    if (len(folders) - len(skippedFolders)) == len(restoreFinished):
        print(report)
        print("Success: All {} operations are complete".format(operation))
    else:
        if (len(folders) - len(skippedFolders)) == len(restoreNotRequestedYet):
            print(report)
            print("Attention: No {} operations have been requested".format(operation))
        else:
            print(report)
            print("Attention: Not all {} operations are complete yet".format(operation))


def status(foldersToRestore, restoreTTL):

    s3 = boto3.resource("s3")

    folders = []
    skippedFolders = []

    # Read the list of folders to process
    with open(foldersToRestore, "r") as f:

        for rawS3Path in f.read().splitlines():

            folders.append(rawS3Path)

            s3Bucket = "put-your-bucket-name-here"
            maxKeys = 1000
            # Remove the S3 Bucket Prefix to get just the S3 Path i.e., the S3 Objects prefix and key name
            s3Path = removeS3BucketPrefixFromPath(rawS3Path, s3Bucket)

            # Construct an S3 Paginator that returns pages of S3 Object Keys with the defined prefix
            client = boto3.client("s3")
            paginator = client.get_paginator("list_objects")
            operation_parameters = {"Bucket": s3Bucket, "Prefix": s3Path, "MaxKeys": maxKeys}
            page_iterator = paginator.paginate(**operation_parameters)

            pageCount = 0

            totalS3ObjectKeys = []
            totalS3ObjKeysRestoreFinished = []
            totalS3ObjKeysRestoreInProgress = []
            totalS3ObjKeysRestoreNotRequestedYet = []
            totalS3ObjKeysRestoreStatusUnknown = []

            # Iterate through the pages of S3 Object Keys
            for page in page_iterator:

                for s3Content in page["Contents"]:

                    s3ObjectKey = s3Content["Key"]

                    # Folders show up as Keys but they cannot be restored or downloaded so we just ignore them
                    if s3ObjectKey.endswith("/"):
                        continue

                    totalS3ObjectKeys.append(s3ObjectKey)

                    s3Object = s3.Object(s3Bucket, s3ObjectKey)

                    if s3Object.restore is None:
                        totalS3ObjKeysRestoreNotRequestedYet.append(s3ObjectKey)
                    elif "true" in s3Object.restore:
                        totalS3ObjKeysRestoreInProgress.append(s3ObjectKey)
                    elif "false" in s3Object.restore:
                        totalS3ObjKeysRestoreFinished.append(s3ObjectKey)
                    else:
                        totalS3ObjKeysRestoreStatusUnknown.append(s3ObjectKey)

                pageCount = pageCount + 1

            # Report the total statuses for the folders
            reportStatuses(
                "restore folder " + rawS3Path,
                "files",
                "restored",
                totalS3ObjectKeys,
                totalS3ObjKeysRestoreFinished,
                totalS3ObjKeysRestoreInProgress,
                totalS3ObjKeysRestoreNotRequestedYet,
                totalS3ObjKeysRestoreStatusUnknown,
                [],
            )


def removeS3BucketPrefixFromPath(path, bucket):
    """
    removeS3BucketPrefixFromPath removes "s3a://<bucket name>" or "s3://<bucket name>" from the Path
    """

    s3BucketPrefix1 = "s3a://" + bucket + "/"
    s3BucketPrefix2 = "s3://" + bucket + "/"

    if path.startswith(s3BucketPrefix1):
        # remove one instance of prefix
        return path.replace(s3BucketPrefix1, "", 1)
    elif path.startswith(s3BucketPrefix2):
        # remove one instance of prefix
        return path.replace(s3BucketPrefix2, "", 1)
    else:
        return path


def restore(foldersToRestore, restoreTTL):
    """
    restore initiates a restore request on one or more folders
    """

    print("Restore Operation")

    s3 = boto3.resource("s3")
    bucket = s3.Bucket("put-your-bucket-name-here")

    folders = []
    skippedFolders = []

    # Read the list of folders to process
    with open(foldersToRestore, "r") as f:

        for rawS3Path in f.read().splitlines():

            folders.append(rawS3Path)

            # Skip folders that are commented out of the file
            if "#" in rawS3Path:
                print("Skipping this folder {} since it's commented out with #".format(rawS3Path))
                skippedFolders.append(rawS3Path)
                continue
            else:
                print("Restoring folder {}".format(rawS3Path))

            s3Bucket = "put-your-bucket-name-here"
            maxKeys = 1000
            # Remove the S3 Bucket Prefix to get just the S3 Path i.e., the S3 Objects prefix and key name
            s3Path = removeS3BucketPrefixFromPath(rawS3Path, s3Bucket)

            print("s3Bucket={}, s3Path={}, maxKeys={}".format(s3Bucket, s3Path, maxKeys))

            # Construct an S3 Paginator that returns pages of S3 Object Keys with the defined prefix
            client = boto3.client("s3")
            paginator = client.get_paginator("list_objects")
            operation_parameters = {"Bucket": s3Bucket, "Prefix": s3Path, "MaxKeys": maxKeys}
            page_iterator = paginator.paginate(**operation_parameters)

            pageCount = 0

            totalS3ObjectKeys = []
            totalS3ObjKeysRestoreFinished = []
            totalS3ObjKeysRestoreInProgress = []
            totalS3ObjKeysRestoreNotRequestedYet = []
            totalS3ObjKeysRestoreStatusUnknown = []

            # Iterate through the pages of S3 Object Keys
            for page in page_iterator:

                print("Processing S3 Key Page {}".format(str(pageCount)))

                s3ObjectKeys = []
                s3ObjKeysRestoreFinished = []
                s3ObjKeysRestoreInProgress = []
                s3ObjKeysRestoreNotRequestedYet = []
                s3ObjKeysRestoreStatusUnknown = []

                for s3Content in page["Contents"]:

                    print("Processing S3 Object Key {}".format(s3Content["Key"]))

                    s3ObjectKey = s3Content["Key"]

                    # Folders show up as Keys but they cannot be restored or downloaded so we just ignore them
                    if s3ObjectKey.endswith("/"):
                        print("Skipping this S3 Object Key because it's a folder {}".format(s3ObjectKey))
                        continue

                    s3ObjectKeys.append(s3ObjectKey)
                    totalS3ObjectKeys.append(s3ObjectKey)

                    s3Object = s3.Object(s3Bucket, s3ObjectKey)

                    print("{} - {} - {}".format(s3Object.key, s3Object.storage_class, s3Object.restore))

                    # Ensure this folder was not already processed for a restore
                    if s3Object.restore is None:

                        restore_response = bucket.meta.client.restore_object(
                            Bucket=s3Object.bucket_name, Key=s3Object.key, RestoreRequest={"Days": restoreTTL}
                        )

                        print("Restore Response: {}".format(str(restore_response)))

                        # Refresh object and check that the restore request was successfully processed
                        s3Object = s3.Object(s3Bucket, s3ObjectKey)

                        print("{} - {} - {}".format(s3Object.key, s3Object.storage_class, s3Object.restore))

                        if s3Object.restore is None:
                            s3ObjKeysRestoreNotRequestedYet.append(s3ObjectKey)
                            totalS3ObjKeysRestoreNotRequestedYet.append(s3ObjectKey)
                            print("%s restore request failed" % s3Object.key)
                            # Instead of failing the entire job continue restoring the rest of the log tree(s)
                            # raise Exception("%s restore request failed" % s3Object.key)
                        elif "true" in s3Object.restore:
                            print(
                                "The request to restore this file has been successfully received and is being processed: {}".format(
                                    s3Object.key
                                )
                            )
                            s3ObjKeysRestoreInProgress.append(s3ObjectKey)
                            totalS3ObjKeysRestoreInProgress.append(s3ObjectKey)
                        elif "false" in s3Object.restore:
                            print("This file has successfully been restored: {}".format(s3Object.key))
                            s3ObjKeysRestoreFinished.append(s3ObjectKey)
                            totalS3ObjKeysRestoreFinished.append(s3ObjectKey)
                        else:
                            print(
                                "Unknown restore status ({}) for file: {}".format(s3Object.restore, s3Object.key)
                            )
                            s3ObjKeysRestoreStatusUnknown.append(s3ObjectKey)
                            totalS3ObjKeysRestoreStatusUnknown.append(s3ObjectKey)

                    elif "true" in s3Object.restore:
                        print("Restore request already received for {}".format(s3Object.key))
                        s3ObjKeysRestoreInProgress.append(s3ObjectKey)
                        totalS3ObjKeysRestoreInProgress.append(s3ObjectKey)
                    elif "false" in s3Object.restore:
                        print("This file has successfully been restored: {}".format(s3Object.key))
                        s3ObjKeysRestoreFinished.append(s3ObjectKey)
                        totalS3ObjKeysRestoreFinished.append(s3ObjectKey)
                    else:
                        print(
                            "Unknown restore status ({}) for file: {}".format(s3Object.restore, s3Object.key)
                        )
                        s3ObjKeysRestoreStatusUnknown.append(s3ObjectKey)
                        totalS3ObjKeysRestoreStatusUnknown.append(s3ObjectKey)

                # Report the statuses per S3 Key Page
                reportStatuses(
                    "folder-" + rawS3Path + "-page-" + str(pageCount),
                    "files in this page",
                    "restored",
                    s3ObjectKeys,
                    s3ObjKeysRestoreFinished,
                    s3ObjKeysRestoreInProgress,
                    s3ObjKeysRestoreNotRequestedYet,
                    s3ObjKeysRestoreStatusUnknown,
                    [],
                )

                pageCount = pageCount + 1

            if pageCount > 1:
                # Report the total statuses for the files
                reportStatuses(
                    "restore-folder-" + rawS3Path,
                    "files",
                    "restored",
                    totalS3ObjectKeys,
                    totalS3ObjKeysRestoreFinished,
                    totalS3ObjKeysRestoreInProgress,
                    totalS3ObjKeysRestoreNotRequestedYet,
                    totalS3ObjKeysRestoreStatusUnknown,
                    [],
                )


def displayError(operation, exc):
    """
    displayError displays a generic error message for all failed operation's returned exceptions
    """

    print(
        'Error! Restore{} failed. Please ensure that you ran the following command "./tools/infra auth refresh" before executing this program. Error: {}'.format(
            operation, exc
        )
    )


def main(operation, foldersToRestore, restoreTTL):
    """
    main The starting point of the code that directs the operation to its appropriate workflow
    """

    print(
        "{} Starting log_migration_restore.py with operation={} foldersToRestore={} restoreTTL={} Day(s)".format(
            str(datetime.now().strftime("%d/%m/%Y %H:%M:%S")), operation, foldersToRestore, str(restoreTTL)
        )
    )

    if operation == "restore":
        try:
            restore(foldersToRestore, restoreTTL)
        except Exception as exc:
            displayError("", exc)
    elif operation == "status":
        try:
            status(foldersToRestore, restoreTTL)
        except Exception as exc:
            displayError("-Status-Check", exc)
    else:
        raise Exception("%s is an invalid operation. Please choose either 'restore' or 'status'" % operation)


def check_operation(operation):
    """
    check_operation validates the runtime input arguments
    """

    if operation is None or (
        str(operation) != "restore" and str(operation) != "status" and str(operation) != "download"
    ):
        raise argparse.ArgumentTypeError(
            "%s is an invalid operation. Please choose either 'restore' or 'status' or 'download'" % operation
        )
    return str(operation)


# To run use sudo python3 /home/ec2-user/recursive_restore.py -- restore
# -l /home/ec2-user/folders_to_restore.csv
if __name__ == "__main__":

    # Form the argument parser.
    parser = argparse.ArgumentParser(
        description="Restore s3 folders from archival using 'restore' or check on the restore status using 'status'"
    )

    parser.add_argument(
        "operation",
        type=check_operation,
        help="Please choose either 'restore' to restore the list of s3 folders or 'status' to see the status of a restore on the list of s3 folders",
    )

    parser.add_argument(
        "-l",
        "--foldersToRestore",
        type=str,
        default="/home/ec2-user/folders_to_restore.csv",
        required=False,
        help="The location of the file containing the list of folders to restore. Put one folder on each line.",
    )

    parser.add_argument(
        "-t",
        "--restoreTTL",
        type=int,
        default=30,
        required=False,
        help="The number of days you want the files to remain restored/unarchived. After this period the logs will automatically be re-archived.",
    )

    args = parser.parse_args()
    sys.exit(main(args.operation, args.foldersToRestore, args.restoreTTL))

