How to upload files to AWS S3 from the client-side using React.JS and Node.JS

Raz Levy
8 min read · Jun 22, 2022


Since I published my last tutorial on How to upload images to AWS S3 using React.JS and a Node.JS express server, many readers have asked whether there's an option to upload the image from the client-side and avoid sending the whole file object to the server-side.

The answer is YES!

In this tutorial, I'll explain the differences between the two techniques, and then show how to upload a file from the client-side to AWS S3 using React.JS on the frontend and a Node.JS express server on the backend. This time, though, the upload itself will take place in the frontend.

Prerequisites:

Client-side VS Server-side uploading differences:

There are a few differences to consider when choosing how to upload a file. Here are some of them:

  1. Uploading a file from the client-side means that the action is performed on the user’s device, while uploading from the server-side means that the action takes place on the web server.
  2. When uploading a large file to AWS S3, it can be better to upload it from the client-side: sending the whole file to the server and processing it there can be very heavy on bandwidth, and there are cost considerations as well. Inbound traffic to the server is free, but outbound traffic from the server accrues charges. Performance will also depend on the type and size of the chosen server instance.
  3. On the other hand, uploading from the client-side brings other challenges, such as security: when you give users the ability to upload a file from the client-side, you should do it using signed URLs. Performance will also be limited by the user’s internet connection.
  4. When you choose the client-side solution, you can use a serverless function to generate the signed URL instead of keeping a live server that handles the upload request. The main reason you won’t be able to go serverless with the server-side solution is that AWS API Gateway places a hard limit on the request payload size, which makes processing a request that carries the file itself much harder, and you’d have to stream it.
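To make the client-side flow concrete, here is a minimal sketch of the two-step handshake described above. The endpoint path and the `{ urls: { signedUrl, publicUrl } }` response shape are my assumptions (they mirror the server we build later in this tutorial), and the fetch implementation is injectable so the sketch can be tried without a live server.

```javascript
// Sketch of the client-side upload flow. The '/api/v1/files' endpoint and the
// response shape are assumptions matching the server built later in this post.
const uploadViaSignedUrl = async (file, fetchImpl = fetch) => {
  // Step 1: ask the backend (or a serverless function) for a short-lived signed URL.
  const res = await fetchImpl(`/api/v1/files?filename=${encodeURIComponent(file.name)}`);
  const { urls } = await res.json();

  // Step 2: PUT the file bytes directly to S3 -- they never pass through our server.
  await fetchImpl(urls.signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });

  return urls.publicUrl; // where the object is now publicly readable
};
```

The point of the injectable `fetchImpl` is only testability; in a real app you would call the global `fetch` (or axios, as we do later).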

Open CORS and permissions on the S3 bucket:

  1. Enter your S3 bucket and navigate to permissions on the top navigation bar.
  2. Scroll down until you see Bucket Policy, click on edit, and enter the following permissions (replace <bucket_name> with your bucket name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}

3. Scroll down to CORS, click on edit and add the following:

[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]

Generating a signed URL:

Alright, so once you've chosen the client-side solution, all you have to do on your backend is generate a signed URL using the aws-sdk npm package.

First, in case you haven't downloaded my boilerplate for Node.JS yet, you can do so by cloning the following repository.

Next, we have to install the aws-sdk npm library by entering the following in the terminal:

npm install --save aws-sdk

Define a service:

Next, create a new directory under the src folder and call it services; in this folder you can place all the services you'll use in the application.
In the services directory, create a new file called filesService.js; this file will contain all the code needed to generate a new signed URL.

In the file enter the following code:

const AWS = require('aws-sdk');

const BUCKET_NAME = process.env.BUCKET_NAME;
const s3 = new AWS.S3({ signatureVersion: 'v4' });

/**
 * @description Generates a signed URL and a public URL for a given file name
 * @param filename File name including file extension
 * @param bucketPath Folder path in S3
 * @returns {{publicUrl: string, signedUrl: string}}
 */
const generateUrl = async (filename, bucketPath) => {
  let signedUrl;
  const publicUrl = getPublicUrl(filename, bucketPath);
  const params = {
    Bucket: BUCKET_NAME,
    Key: `${bucketPath}/${filename}`,
    Expires: 60,
    ACL: 'public-read'
  };

  try {
    signedUrl = await s3.getSignedUrlPromise('putObject', params);
  } catch (err) {
    console.error(`Error generating pre-signed url: ${err.message}`);
    throw new Error('Error generating pre-signed url');
  }

  return { signedUrl, publicUrl };
};

/**
 * @description Generates a public URL for a given file name
 * @param filename File name including file extension
 * @param bucketPath Folder path to file
 * @returns {string}
 */
const getPublicUrl = (filename, bucketPath) => {
  const publicUrl = `https://s3.amazonaws.com/${BUCKET_NAME}/${bucketPath}/${filename}`;

  return publicUrl;
};

module.exports = { generateUrl };

Explanation:

  1. I’ve initialized new AWS and S3 instances.
  2. I’ve declared a constant called BUCKET_NAME which takes the bucket name from the BUCKET_NAME environment variable.
  3. I’ve defined a function called generateUrl which receives filename, a string with the name of the file you want to store in S3, and bucketPath, a string describing the folder tree to the file in S3.
    In that function, I’ve defined a constant named params which describes the parameters for the S3.getSignedUrlPromise function: the bucket name, the key of the requested file, the number of seconds the URL will remain valid (Expires), and the access control list (ACL).
  4. Another function I’ve defined is getPublicUrl, which is used by the previous function; it receives filename and bucketPath, both strings, and returns a public URL to the requested file.
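As a quick sanity check of the URL shape, the public-URL composition can be reproduced as a standalone snippet (here 'my-demo-bucket' is a made-up bucket name for illustration, not your real BUCKET_NAME):

```javascript
// Mirrors the getPublicUrl logic from the service above.
// 'my-demo-bucket' is a made-up bucket name used only for illustration.
const BUCKET_NAME = 'my-demo-bucket';

const getPublicUrl = (filename, bucketPath) =>
  `https://s3.amazonaws.com/${BUCKET_NAME}/${bucketPath}/${filename}`;

console.log(getPublicUrl('cat_1655900000000.png', 'tests/test1'));
// → https://s3.amazonaws.com/my-demo-bucket/tests/test1/cat_1655900000000.png
```

Note how the object key (`bucketPath/filename`) in the signed-URL params and the public URL must agree, otherwise the uploaded object and the link you display will point at different keys.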

Define a route:

The next step is to create a route that will handle the requests and will create the signed URL.
For that, create a new file called files.js under the src/routes folder and enter the following code into it:

const express = require('express');
const router = express.Router();
const filesService = require('../services/filesService');

router.get('/', async (req, res) => {
  const { filename, path } = req.query;
  const urls = await filesService.generateUrl(filename, path);

  res.send({ urls });
});

module.exports = router;

Explanation:

  1. I’ve initialized a new router, named router, that will handle the route declarations.
  2. I’ve imported the files service created earlier.
  3. I’ve defined a new GET route on the ‘/’ endpoint, extracted filename and path from the request’s query, generated the signed and public URLs using the imported files service, and returned them to the client.

Next, all we have to do is add the newly created route to our routes map. To do so, open the src/routes/index.js file, import the newly created router, add a new endpoint called ‘files’, and bind the router to it:

const express = require('express');
const router = express.Router();

const files = require('./files');

router.use('/files', files);

module.exports = router;

Now we’re done on the server-side.

Handling the client-side:

Now, after we’ve already finished with the server-side, we have to handle the client-side requests and processes.

For this, create a new React application using Create React App and enter its directory:

npx create-react-app signed_url_demo

Once the application has finished initializing, open the terminal and install the axios npm package, which will help us handle HTTP requests:

npm install --save axios 

Next, we have to display a file input field on the main page.
For that, open the App.js file and replace the HTML with the following:

<div className="header">
  <input type="file" id="file_input"/>
</div>

Next, we’ll have to handle the changes on this input.
To do so, let’s create a new function called onFileInput which will handle the onChange functionality of the input:

const onFileInput = async (e) => {
  const timestamp = new Date().getTime();
  const file = e.target.files[0];
  const filename = file.name.split('.')[0].replace(/[&\/\\#,+()$~%'":*?<>{}]/g, '').toLowerCase() + `_${timestamp}`;
  const fileExtension = file.name.split('.').pop();

  await uploadImage(`${filename}.${fileExtension}`, file);
}

Explanation:

  1. I want to make a unique file name every time to prevent overriding an existing file on S3. For this, I’ve defined a variable called timestamp that holds the current timestamp and will be concatenated to the file name.
  2. I’ve initialized a new variable called file which holds the chosen file.
  3. I’ve initialized a new variable called filename which holds the name of the file without its extension (pdf, png, csv, jpg, etc.), with any special characters removed.
  4. Next, I’ve initialized another variable called fileExtension to hold the extension of the file.
  5. The last thing is to call another function, uploadImage, which will actually perform the upload; it receives the new filename (with timestamp) and the actual file.
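The naming steps above can be pulled into one small helper and run outside React. The helper name makeUniqueFilename is mine, not part of the tutorial's code; the logic is the same as in onFileInput:

```javascript
// Hypothetical helper combining steps 1-4 above: strip special characters,
// lowercase the base name, append a timestamp, and keep the original extension.
const makeUniqueFilename = (originalName, timestamp) => {
  const base = originalName
    .split('.')[0]
    .replace(/[&\/\\#,+()$~%'":*?<>{}]/g, '')
    .toLowerCase();
  const extension = originalName.split('.').pop();
  return `${base}_${timestamp}.${extension}`;
};

console.log(makeUniqueFilename('My+Report(final).PDF', 1655900000000));
// → myreportfinal_1655900000000.PDF
```

Note that only the base name is lowercased, and the extension is kept as-is; if you also want lowercase extensions, apply .toLowerCase() to the extension too.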

Now, after we’ve created that function, we have to define the uploadImage function, which is actually going to upload the file using the backend server we created earlier.
For this, at the top of the App component, I’ve created a new state called imageUrl which is going to hold the public URL of the file, initialized to null since we don’t have an image URL when the page first renders.

Add the following to your code:

const [imageUrl, setImageUrl] = useState(null);

uploadImage:

const uploadImage = async (filename, file) => {
  const foldersPath = 'tests/test1';
  const options = { headers: { 'Content-Type': file.type } };

  try {
    const s3Urls = await axios.get(
      `http://localhost:3001/api/v1/files?filename=${filename}&path=${foldersPath}&contentType=${file.type}`
    ).then(response => response.data?.urls);

    if (!s3Urls.signedUrl) {
      throw new Error('S3 signed url is not defined');
    }

    await axios.put(s3Urls.signedUrl, file, options);

    setImageUrl(s3Urls.publicUrl);
  } catch (err) {
    console.error(`Error uploading image: ${err.message}`);
  }
}

Explanation:

  1. I’ve defined a new variable called foldersPath which holds the requested folder tree to the file on S3.
  2. I’ve defined a new variable called options which holds the Content-Type header with the type of the chosen file.
  3. Next, I’ve used axios to send a request to our API with the filename and the requested path, and stored the response in the s3Urls variable.
  4. Now comes a very important part: I’ve used the signedUrl returned from our backend server and sent a PUT request to it with the chosen file and the options variable.
  5. Next, I’ve set the publicUrl into the imageUrl state we defined above.

Next, all we have to do is wire the onFileInput function to the input field’s onChange event, and display the imageUrl below the input when it’s available. For that, replace the HTML in your app with the following:

<div className="header">
  <input type="file" id="file_input" onChange={onFileInput} />
  {imageUrl &&
    <div className="result">
      <a href={imageUrl} className="image-url" target="_blank">Uploaded Image</a>
    </div>
  }
</div>

Now, run both your backend server and your React application, and everything should work like a charm.

Result image
