Node.js and AWS S3: A Step-by-Step Guide to Uploading Images to AWS S3 with Node.js




Introduction:

Cloud storage removes the need to provision and maintain your own disks: files are stored durably off-site, capacity scales on demand, and data is accessible from anywhere. This makes it a strong alternative to traditional local or on-premises storage.

AWS S3 (Simple Storage Service) is a scalable and secure cloud storage solution provided by Amazon Web Services (AWS).

Node.js, a popular JavaScript runtime for server-side development, integrates smoothly with AWS services through the official AWS SDK, which makes it a natural fit for handling uploads to S3.

Setting up the Environment:

Note that the AWS SDK for JavaScript v3 is the latest and recommended version, generally available since December 2020, and AWS offers experimental migration scripts in aws-sdk-js-codemod to move an application from v2 to v3. This guide, however, uses the v2 package (aws-sdk), which is still widely used in existing codebases.

First, install the aws-sdk package in your project:


npm install aws-sdk


.env

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_REGION=
AWS_S3_BUCKET_OPEN=


These environment variables hold the credentials and configuration needed to access AWS resources, in particular the S3 bucket. Here is what each one does:

 AWS_ACCESS_KEY_ID: This is the access key ID associated with your AWS account, which is used to authenticate and authorize access to AWS services. 

 AWS_SECRET_ACCESS_KEY: This is the secret access key paired with the access key ID. It serves as the password for accessing AWS resources and should be kept confidential. 

 AWS_REGION: This parameter specifies the AWS region where your resources are located. Each AWS region represents a separate geographic area and may have different availability zones and services. 

AWS_S3_BUCKET_OPEN: This is the name of the S3 bucket, a globally unique identifier for a storage container in AWS S3. The "OPEN" part is simply the variable name chosen for this project; it suggests the bucket is intended for publicly readable content.
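Before wiring these variables into the SDK, it helps to verify at startup that all of them are present. The sketch below uses a hypothetical loadAwsConfig helper (a name invented here, not part of the AWS SDK) that throws early when any variable is missing:

```javascript
// Sketch: fail fast at startup if a required AWS variable is missing.
// loadAwsConfig is a hypothetical helper, not part of the AWS SDK.
function loadAwsConfig(env = process.env) {
  const required = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_REGION",
    "AWS_S3_BUCKET_OPEN",
  ];
  // Collect every variable that is absent or empty.
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error("Missing environment variables: " + missing.join(", "));
  }
  return {
    accessKeyId: env.AWS_ACCESS_KEY_ID,
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
    region: env.AWS_REGION,
    bucket: env.AWS_S3_BUCKET_OPEN,
  };
}
```

Failing fast like this turns a missing .env entry into a clear startup error instead of a confusing authentication failure on the first upload.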


Create a file awsS3.js:



const AWS = require("aws-sdk");

// Configure the SDK before creating the S3 client so the client
// picks up the credentials and region from the environment.
AWS.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION,
});

const s3 = new AWS.S3();

exports.awsS3BucketUpload = async (params) => {
  try {
    // Short random string so files with the same original
    // name do not overwrite each other.
    const r = (Math.random() + 1).toString(36).substring(7);
    const s3KeyId =
      "storage/" + params.folder + "/" + r + params.file.originalname;
    const paramsData = {
      Bucket: process.env.AWS_S3_BUCKET_OPEN,
      Key: s3KeyId,
      Body: params.file.buffer,
      // ACL: "public-read", // uncomment to make the object publicly readable
      ContentType: params.file.mimetype,
    };
    const upload = await s3.upload(paramsData).promise();
    let { Location: location, Key: key } = upload;
    // Strip the "storage/<folder>/" prefix so only the file name is returned.
    if (key.includes("/")) {
      key = key.split("/").pop();
    }
    return { status: 200, location, key, data: upload };
  } catch (err) {
    return { status: 400, message: err.message };
  }
};


The code imports the AWS SDK, configures it with the necessary credentials (access key ID, secret access key) and region from environment variables, and then creates an S3 client instance. The exported awsS3BucketUpload function takes a params object as input. Within the function, a short random string is generated to make the key of the uploaded file unique. The paramsData object specifies the bucket name, key, file data (buffer), and content type. The upload is performed with the s3.upload() method, and the returned request is awaited via .promise(). On a successful upload, the location and key of the uploaded file are extracted from the response; the key is trimmed down to just the file name and returned along with HTTP status code 200. If an error occurs during the upload, it is caught and an object with status code 400 and the error message is returned.
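The key-generation step can be isolated and inspected on its own. The sketch below extracts it into a hypothetical buildS3Key helper (a name invented here, mirroring the logic inside awsS3BucketUpload) so you can see how the random suffix and folder path combine:

```javascript
// Hypothetical helper mirroring the key construction in awsS3BucketUpload.
function buildS3Key(folder, originalname) {
  // Math.random() + 1 is in [1, 2), so toString(36) yields "1.xxxxx…";
  // substring(7) keeps a short alphanumeric tail as a collision-avoiding prefix.
  const r = (Math.random() + 1).toString(36).substring(7);
  return "storage/" + folder + "/" + r + originalname;
}

const key = buildS3Key("collections", "logo.png");
console.log(key); // e.g. "storage/collections/q3k9logo.png"
```

Two uploads of a file named logo.png therefore land under different keys, at the cost of keys that are not reproducible; a timestamp or UUID would be an alternative if stronger uniqueness guarantees are needed.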


After creating this file, you can call the function like this:


const { awsS3BucketUpload } = require("./awsS3");

const upload = await awsS3BucketUpload({
  folder: "collections",
  file: params.image,
  resource_type: "image",
});

console.log(upload);



awsS3BucketUpload is the function defined in awsS3.js above. It handles the upload of files to an AWS S3 bucket and takes an object params as an argument, with properties such as folder, file, and resource_type.

In this specific call:

The folder property is set to "collections". This likely indicates the destination folder within the S3 bucket where the file will be stored.

The file property is set to params.image. This suggests that the file to be uploaded is contained within the params object, possibly received from an external source.

The resource_type property is set to "image". Note that awsS3BucketUpload as written does not read this property; it is extra metadata describing the kind of resource being uploaded, which a fuller implementation could use for routing or validation.

The await keyword is used before the function call, indicating that the execution should wait for the awsS3BucketUpload function to complete before proceeding. This implies that awsS3BucketUpload returns a promise.

Upon completion of the upload operation, the result is stored in the upload variable.

Finally, the upload object is logged to the console using console.log(upload), allowing you to inspect the result of the upload operation. This object contains the status code and, on success, the location and key of the uploaded file along with the full S3 response in data.

In summary, this code snippet initiates the upload of a file to an AWS S3 bucket by calling the awsS3BucketUpload function with specific parameters, awaits its completion, and logs the result to the console for further inspection.
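Because awsS3BucketUpload signals failure through the status field rather than by throwing, callers need to branch on it. A minimal sketch (handleUploadResult is a hypothetical name, not part of the code above) of that caller-side check:

```javascript
// Hypothetical caller-side check for the { status, location, key }
// shape returned by awsS3BucketUpload.
function handleUploadResult(result) {
  if (result.status === 200) {
    // Success: expose the public URL of the uploaded object.
    return { ok: true, url: result.location, key: result.key };
  }
  // Failure: surface the error message captured in the catch block.
  return { ok: false, error: result.message };
}
```

In an Express route, for example, the failure branch could map to an HTTP 400 response while the success branch stores result.key in the database.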
