NestJS. Uploading files to S3 storage (minio)

NestJS is a framework for building efficient, scalable server-side applications on the Node.js platform. You may come across the claim that NestJS is platform-independent, meaning that it can run on top of one of two HTTP frameworks of your choice: NestJS + Express or NestJS + Fastify. This is true, or almost true: the platform independence ends with the handling of Content-Type: multipart/form-data requests, which in practice you reach on about the second day of development. That is not a big problem if you use the NestJS + Express platform, since the documentation has an example of handling multipart/form-data. For NestJS + Fastify there is no such example, there are not many examples on the net, and some of those that exist take a needlessly complicated path.



Choosing between the NestJS + Fastify and NestJS + Express platforms, I chose NestJS + Fastify. Knowing the inclination of developers, in any unclear situation, to hang extra properties on the Express req object and use it as a channel between different parts of the application, I firmly decided that there would be no Express in my next project.



All that remained was to solve the technical issue with Content-Type: multipart/form-data. I also planned to save the files received in multipart/form-data requests to S3 storage. In this respect, the reference implementation of multipart/form-data handling on the NestJS + Express platform bothered me because it does not work with streams.



Launching S3 Local Storage



S3 is a data store (one might loosely call it a file store) accessible over the HTTP protocol. S3 was originally offered by AWS, but the S3 API is now supported by other cloud services as well. And not only by them: there are S3 server implementations that you can bring up locally for development, and possibly run your own S3 servers in production.



First, you need to understand the motivation for using S3 data storage. In some cases it can reduce costs: for example, you can take the slowest and cheapest S3 storage for backups. Fast storage with heavy traffic for reading data back out (traffic is charged separately) will probably end up costing about the same as SSD drives of the same size.



A more powerful motive is 1) scalability: you do not need to worry that disk space may run out, and 2) reliability: the servers work in a cluster, and you do not need to think about backups, since the required number of copies is always available.



To bring up an S3 server implementation, minio, locally, you only need docker and docker-compose installed on your machine. The corresponding docker-compose.yml file:



version: '3'
services:
  minio1:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - ./s3/data1-1:/data1
      - ./s3/data1-2:/data2
    ports:
      - '9001:9000'
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - ./s3/data2-1:/data1
      - ./s3/data2-2:/data2
    ports:
      - '9002:9000'
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - ./s3/data3-1:/data1
      - ./s3/data3-2:/data2
    ports:
      - '9003:9000'
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - ./s3/data4-1:/data1
      - ./s3/data4-2:/data2
    ports:
      - '9004:9000'
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
      interval: 30s
      timeout: 20s
      retries: 3


We start it with docker-compose up, and without any problems we get a cluster of four S3 servers.
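
The upload code later in the article writes to a bucket named test, and minio does not create buckets on the fly, so the bucket has to be created once. A minimal one-off sketch using aws-sdk, assuming the cluster from the docker-compose.yml above is already running (the credentials and endpoint match those used later in the service code):

// create-bucket.ts: a one-off helper script, not part of the NestJS application
import { S3 } from 'aws-sdk';

const s3 = new S3({
  accessKeyId: 'minio',
  secretAccessKey: 'minio123',
  endpoint: 'http://127.0.0.1:9001', // any node of the cluster will do
  s3ForcePathStyle: true, // minio expects path-style URLs
  signatureVersion: 'v4',
});

s3.createBucket({ Bucket: 'test' })
  .promise()
  .then(() => console.log('bucket "test" is ready'))
  .catch(err => {
    // If the bucket already exists, treat it as success
    if (err.code !== 'BucketAlreadyOwnedByYou') {
      throw err;
    }
  });

Alternatively, the bucket can be created by hand through the minio browser UI available on any of the mapped ports.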



NestJS + Fastify + S3



I will describe working with the NestJS server from the very first steps, although part of this material is well covered in the documentation. The NestJS CLI is installed:



npm install -g @nestjs/cli


A new NestJS project is created:



nest new s3-nestjs-tut


The necessary packages are installed (including those needed to work with S3):




npm install --save @nestjs/platform-fastify fastify-multipart aws-sdk sharp
npm install --save-dev @types/fastify-multipart @types/aws-sdk @types/sharp


By default, a new project uses the NestJS + Express platform. Switching to Fastify is described in the documentation at docs.nestjs.com/techniques/performance. In addition, we need to register the plugin that handles Content-Type: multipart/form-data requests, fastify-multipart:



import { NestFactory } from '@nestjs/core';
import {
  FastifyAdapter,
  NestFastifyApplication,
} from '@nestjs/platform-fastify';
import fastifyMultipart from 'fastify-multipart';
import { AppModule } from './app.module';

async function bootstrap() {
  const fastifyAdapter = new FastifyAdapter();
  fastifyAdapter.register(fastifyMultipart, {
    limits: {
      fieldNameSize: 1024, // Max field name size in bytes
      fieldSize: 128 * 1024 * 1024 * 1024, // Max field value size in bytes
      fields: 10, // Max number of non-file fields
      fileSize: 128 * 1024 * 1024 * 1024, // For multipart forms, the max file size
      files: 2, // Max number of file fields
      headerPairs: 2000, // Max number of header key=>value pairs
    },
  });
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    fastifyAdapter,
  );
  await app.listen(3000, '127.0.0.1');
}

bootstrap();


Now let's describe the service that uploads files to the S3 storage. The code for handling some types of errors has been trimmed here (the full text is in the article repository):



import { Injectable, HttpException, BadRequestException } from '@nestjs/common';
import { S3 } from 'aws-sdk';
import fastify = require('fastify');
import * as sharp from 'sharp';

@Injectable()
export class AppService {
  async uploadFile(req: fastify.FastifyRequest): Promise<any> {

    const promises = [];

    return new Promise((resolve, reject) => {

      const mp = req.multipart(handler, onEnd);

      function onEnd(err) {
        if (err) {
          reject(new HttpException(err, 500));
        } else {
          Promise.all(promises).then(
            data => {
              resolve({ result: 'OK' });
            },
            err => {
              reject(new HttpException(err, 500));
            },
          );
        }
      }

      function handler(field, file, filename, encoding, mimetype: string) {
        // One client is enough for both uploads
        const s3 = new S3({
          accessKeyId: 'minio',
          secretAccessKey: 'minio123',
          endpoint: 'http://127.0.0.1:9001',
          s3ForcePathStyle: true, // minio expects path-style URLs
          signatureVersion: 'v4',
        });
        if (mimetype && mimetype.match(/^image\/(.*)/)) {
          const imageType = mimetype.match(/^image\/(.*)/)[1];
          // For images, the input stream is additionally piped through sharp,
          // producing a 200x200 copy uploaded alongside the original
          const promise = s3
            .upload({
              Bucket: 'test',
              Key: `200x200_${filename}`,
              Body: file.pipe(
                sharp()
                  .resize(200, 200)
                  [imageType](),
              ),
            })
            .promise();
          promises.push(promise);
        }
        // The original file is uploaded as-is in every case
        const promise = s3
          .upload({ Bucket: 'test', Key: filename, Body: file })
          .promise();
        promises.push(promise);
      }
    });
  }
}
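
One of the trimmed checks is worth sketching, since the otherwise unused BadRequestException import hints at it: fastify-multipart decorates the request with an isMultipart() method, so a guard at the very beginning of uploadFile could look roughly like this (a sketch, not necessarily the exact code from the repository):

// At the top of uploadFile(), before calling req.multipart()
if (!req.isMultipart()) {
  throw new BadRequestException('The request should be multipart/form-data');
}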


Of the features, note that when an image is uploaded, we write the single input stream into two output streams, one of which shrinks the picture to 200x200. In all cases the streaming style is used. But in order to catch possible errors and return them to the controller, we call the .promise() method defined in the aws-sdk library, and accumulate the resulting promises in the promises array:



        const promise = s3Stream
          .upload({ Bucket: 'test', Key: filename, Body: file })
          .promise();
        promises.push(promise);


Then we wait for all of them to resolve with Promise.all(promises).



The controller code, in which the raw FastifyRequest still has to be forwarded to the service:



import { Controller, Post, Req } from '@nestjs/common';
import { AppService } from './app.service';
import { FastifyRequest } from 'fastify';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Post('/upload')
  async uploadFile(@Req() req: FastifyRequest): Promise<any> {
    const result = await this.appService.uploadFile(req);
    return result;
  }
}


The project is launched:



npm run start:dev
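
Now the upload endpoint can be tried, for example with curl (cat.png here is just a placeholder for any image file in the current directory):

curl -F 'file=@cat.png' http://127.0.0.1:3000/upload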


Article repository: github.com/apapacy/s3-nestjs-tut



apapacy@gmail.com

August 13, 2020


