SaltyCrane Blog — Notes on JavaScript and web development

Next.js Relay GraphQL Pokemon example

Here is a quick and dirty Pokemon TCG web UI using Next.js, Relay, and the TCGdex GraphQL API. Initially this was meant to be a proof of concept of the Next.js rewrites feature, but rewrites don't work with the Next.js static export used for the GitHub Pages deploy, so I removed that part.

Source code on GitHub here: https://github.com/saltycrane/next-relay-graphql-pokemon-example

Deployed to GitHub Pages here: https://saltycrane.github.io/next-relay-graphql-pokemon-example/

Uses

Doesn't use

  • Next.js App Router or React Server Components
  • Next.js Server Side Rendering
  • Next.js Image Optimization
  • React Transitions
  • Relay Fragments

Example Node.js Passport.js SAML app using OneLogin

I put an example Express.js + Passport.js SAML SSO authentication app using OneLogin on GitHub here: https://github.com/saltycrane/express-passport-saml-example. The setup is described below.

OneLogin configuration

  • create OneLogin developer account here: https://developers.onelogin.com/
  • for example, use the domain your-domain
  • at https://your-domain-dev.onelogin.com/admin2/apps select "Add App" > "SAML Custom Connector (Advanced)"
  • on "Configuration" tab, set the following 5 fields:
    • "Audience (EntityID)" [1]: your-example-app
    • "Recipient": your-example-app
    • "ACS (Consumer) URL Validator*": http://localhost:3000/login/sso/callback
    • "ACS (Consumer) URL*": http://localhost:3000/login/sso/callback
    • "SAML signature element" [1]: "Both"

[1] required as of node-saml v4.0.0

Set environment variables

  • copy .env.example to .env and change the following:
    • SSO_ENTRYPOINT: "SSO" tab > "SAML 2.0 Endpoint (HTTP)"
    • SSO_CERT: "SSO" tab > "X.509 Certificate" > "View Details" > "X.509 Certificate" with "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" and newlines removed
    • SSO_COOKIE_SESSION_SECRET: generate or make up a secret string

Note: SSO_ISSUER should be set to the "Recipient" value from the "Configuration" tab, and SSO_CALLBACK_URL should be set to the "ACS (Consumer) URL*" value from the same tab.
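To produce the SSO_CERT value, the BEGIN/END lines and newlines must be stripped from the X.509 certificate. A small helper could do this (hypothetical; not part of the example repo):

```javascript
// Hypothetical helper: strip the PEM armor and all whitespace from an
// X.509 certificate so the base64 body can be used as the SSO_CERT value
function pemToSsoCert(pem) {
  return pem
    .replace("-----BEGIN CERTIFICATE-----", "")
    .replace("-----END CERTIFICATE-----", "")
    .replace(/\s+/g, "");
}

const pem = [
  "-----BEGIN CERTIFICATE-----",
  "MIIC8DCCAdig",
  "AwIBAgIQbW8=",
  "-----END CERTIFICATE-----",
].join("\n");

console.log(pemToSsoCert(pem)); // "MIIC8DCCAdigAwIBAgIQbW8="
```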

Example .env

SSO_ENTRYPOINT='https://your-domain-dev.onelogin.com/trust/saml2/http-post/sso/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

SSO_ISSUER='your-example-app'

SSO_CALLBACK_URL='http://localhost:3000/login/sso/callback'

SSO_CERT='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX='

SSO_COOKIE_SESSION_SECRET='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
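These values are consumed by the passport-saml Strategy. Here is a minimal sketch of the wiring (assuming the standard entryPoint, issuer, callbackUrl, and cert options; this is not the exact code from the example repo):

```javascript
const passport = require("passport");
const { Strategy: SamlStrategy } = require("passport-saml");

passport.use(
  new SamlStrategy(
    {
      entryPoint: process.env.SSO_ENTRYPOINT,
      issuer: process.env.SSO_ISSUER,
      callbackUrl: process.env.SSO_CALLBACK_URL,
      cert: process.env.SSO_CERT,
    },
    // verify callback: the SAML profile becomes the logged-in user
    (profile, done) => done(null, profile)
  )
);
```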

Run app and test

Sequence of requests

  1. GET http://localhost:3000/login/sso
  2. GET https://your-domain-dev.onelogin.com/trust/saml2/http-post/sso/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  3. POST http://localhost:3000/login/sso/callback
  4. GET http://localhost:3000/ (with user session cookie)
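The request sequence above could be produced by Express routes along these lines (a sketch with illustrative names; assumes a passport-saml strategy registered as "saml" — see the repo for the actual code):

```javascript
const express = require("express");
const passport = require("passport");

const app = express();
app.use(passport.initialize());

// step 1: redirects the browser to the OneLogin SAML endpoint (step 2)
app.get("/login/sso", passport.authenticate("saml"));

// step 3: OneLogin POSTs the SAML response back here
app.post(
  "/login/sso/callback",
  passport.authenticate("saml", { failureRedirect: "/login" }),
  (req, res) => res.redirect("/") // step 4: home page with session cookie
);
```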

Troubleshooting

"Access Denied You do not have access to this application. Please contact your administrator."
  • ensure your user is added to the default role for the app in the OneLogin admin UI.
Error: Invalid signature
  • in the OneLogin admin UI, in the "Configuration" tab, ensure that "SAML signature element" is set to "Both"
  • alternatively, as a less secure option, add the following configuration to the passport-saml Strategy: wantAssertionsSigned: false.
  • node-saml changed in v4.0.0 to require all assertions be signed. See https://github.com/node-saml/node-saml/pull/177
Error: SAML assertion AudienceRestriction has no Audience value
  • in node-saml, audience defaults to the value of issuer
  • in the OneLogin admin UI, in the "Configuration" tab, ensure that "Audience (EntityID)" is the same as issuer. (In this example it is the value of "Recipient", your-example-app)

CSS Subgrid demo

This is a demo I did on CSS Subgrid.

It is a Next.js React project using CSS Modules. (As mentioned in the video, I wanted to use vanilla HTML and CSS for the demo, but this was just easier for me since I'm familiar with it.)

The code in the repo is slightly different from the code in the YouTube demo because I converted from PostgreSQL to SQLite so I could include the data in the repo.

Aphrodite to CSS Modules codemod

I wanted to convert our React project from Aphrodite to CSS Modules. The biggest impetus was that Aphrodite isn't supported by the new Next.js v13 app directory feature, which I'm excited to try. I like styled-components, but my co-worker likes CSS Modules, and it's hard to go wrong with CSS Modules: it has built-in support in Next.js and it ranks well in the State of CSS survey.

To ease the conversion, I wrote a jscodeshift codemod to automate most of the process. The codemod is on github here: aphrodite-to-css-modules-codemod. An example is below.

The codemod worked well for my 200 Aphrodite files. I did spend time manually converting JS constants into CSS variables. I also manually handled CSS precedence issues since Aphrodite handles precedence more nicely than CSS. But overall I was pretty happy with the results. (It was certainly more successful than my attempt at a reactstrap-to-react-bootstrap codemod which I never used.)

Before

./example/src/MyComponent.tsx:

import { css, StyleSheet } from "aphrodite";
import classNames from "classnames";
import React from "react";

import { colors } from "./constants";
import { hexToRgbA } from "./utils";

export default function MyComponent() {
  const isSomething = true;
  const isSomethingElse = false;
  return (
    <div
      className={css(
        isSomethingElse ? myStyles.containerGrid : myStyles.containerFlex,
      )}
      style={{}}
    >
      <div className={css(myStyles.header, myStyles.content)}>header</div>
      <div className={classNames(css(myStyles.content), "another-class")}>
        <div>Lorem ipsum</div>
      </div>
      <span className={css(isSomething && myStyles.warning)}></span>
    </div>
  );
}

// comment I
export const myStyles = StyleSheet.create({
  containerGrid: {
    backgroundColor: "white",
    // comment 1
    /* comment 2 */ display: "grid" /* comment 4 */, // comment 5
    gridTemplate: `
      "sourceselect .       reviewbutton" auto
      "pagination   filters filters     " auto
      "rowcount     filters filters     " 20px
      / 2fr         1fr     2fr
    `,
    width: 200,
  },
  containerFlex: {
    display: "flex",
  },
  content: {
    lineHeight: 1.5,
  },
  header: {
    backgroundColor: "#ccc",
    color: hexToRgbA(colors.danger, 0.8),
    display: "inline-block",
    ":hover": {
      color: colors.primary,
      borderColor: `${colors.info} !important`,
    },
  },
  // comment a
  warning: {
    fontWeight: 700,
    color: colors.warning,
    opacity: 0,
  } /* comment b */, // comment c
});

After

./example/src/MyComponent.tsx:

import myStyles from "./MyComponent.module.css";
import classNames from "classnames";
import React from "react";

import { colors } from "./constants";
import { hexToRgbA } from "./utils";

export default function MyComponent() {
  const isSomething = true;
  const isSomethingElse = false;
  return (
    <div
      className={
        isSomethingElse ? myStyles.containerGrid : myStyles.containerFlex
      }
      style={{}}
    >
      <div
        className={
          // TODO: check CSS precedence
          classNames(myStyles.header, myStyles.content)
        }
      >
        header
      </div>
      <div className={classNames(myStyles.content, "another-class")}>
        <div>Lorem ipsum</div>
      </div>
      <span className={classNames(isSomething && myStyles.warning)}></span>
    </div>
  );
}

export { myStyles };

./example/src/MyComponent.module.css:

/* comment I */
.containerGrid {
  background-color: white;
  /* comment 1 */
  /* comment 2 */
  display: grid; /* comment 4 */ /* comment 5 */
  grid-template: 
  "sourceselect .       reviewbutton" auto
  "pagination   filters filters     " auto
  "rowcount     filters filters     " 20px
  / 2fr         1fr     2fr
;
  width: 200px;
}

.containerFlex {
  display: flex;
}

.content {
  line-height: 1.5;
}

.header {
  background-color: #ccc;
  color: var(--bs-danger-alpha80);
  display: inline-block;
}

.header:hover {
  color: var(--bs-primary);
  border-color: var(--bs-info) !important;
}

/* comment a */
.warning {
  font-weight: 700;
  color: var(--bs-warning);
  opacity: 0;
} /* comment b */ /* comment c */

JS context file

The expressions in the styles object (e.g. colors.danger, hexToRgbA(colors.danger, 0.8), etc.) were evaluated using the following "context" file.

./context.example.js:

const colors = {
  danger: "var(--bs-danger)",
  info: "var(--bs-info)",
  primary: "var(--bs-primary)",
  warning: "var(--bs-warning)",
};

function hexToRgbA(hex, alpha) {
  return hex.replace(/\)$/, `-alpha${alpha * 100})`);
}
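For example, evaluating hexToRgbA(colors.danger, 0.8) in this context produces the CSS variable seen in the converted .module.css output above (the context code is reproduced here so the snippet is self-contained):

```javascript
const colors = {
  danger: "var(--bs-danger)",
};

// replaces the closing paren with an alpha suffix,
// e.g. "var(--bs-danger)" -> "var(--bs-danger-alpha80)"
function hexToRgbA(hex, alpha) {
  return hex.replace(/\)$/, `-alpha${alpha * 100})`);
}

console.log(hexToRgbA(colors.danger, 0.8)); // "var(--bs-danger-alpha80)"
```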

Simple codemod example with jscodeshift

jscodeshift codemods allow refactoring JavaScript or TypeScript code by manipulating the abstract syntax tree.

This is an example showing how to rename variables named foo to bar.

Install jscodeshift

npm install -g jscodeshift

Create an example file to modify

  • create a folder

    mkdir my-project
    cd my-project
    
  • create an example file, my-file-to-modify.js

    const foo = 1;
    console.log(foo);
    

Create a transform

create a file my-transform.js

module.exports = function transformer(fileInfo, api) {
  return api
    .jscodeshift(fileInfo.source)
    .find(api.jscodeshift.Identifier)
    .forEach(function (path) {
      if (path.value.name === "foo") {
        api.jscodeshift(path).replaceWith(api.jscodeshift.identifier("bar"));
      }
    })
    .toSource();
};

Run it

jscodeshift -t my-transform.js ./my-file-to-modify.js

The file my-file-to-modify.js now contains:

const bar = 1;
console.log(bar);

Another example

This example removes the React JSX element <MyHeader /> and removes the MyHeader import. I'm not sure why, but it added some extra parentheses. Prettier cleaned this up for me, but if you have an improvement, let me know.

// removeMyHeader.js
module.exports = function transformer(file, api) {
  const jscodeshift = api.jscodeshift;

  const withoutElement = jscodeshift(file.source)
    .find(jscodeshift.JSXElement)
    .forEach(function (path) {
      if (path.value.openingElement.name.name === "MyHeader") {
        path.prune();
      }
    })
    .toSource();

  const withoutImport = jscodeshift(withoutElement)
    .find(jscodeshift.ImportDefaultSpecifier)
    .forEach(function (path) {
      if (path.value.local.name === "MyHeader") {
        path.parentPath.parentPath.prune();
      }
    })
    .toSource();

  return withoutImport;
};

Here is a command to run it for a React TypeScript codebase:

jscodeshift --parser=tsx --extensions=tsx -t ./removeMyHeader.js ./src

AST Explorer

AST Explorer is a very helpful tool to experiment and learn the API with code completion. Go to https://astexplorer.net/ and select "jscodeshift" under the "Transform" menu.

lodash error

Error: Cannot find module 'lodash'

When running jscodeshift, I got the above error so I ran npm install -g lodash and this got rid of the error for me.

Buildtime vs runtime environment variables with Next.js and Docker

For a Next.js app, buildtime environment variables are variables that are used when the next build command runs. Runtime variables are variables used when the next start command runs.

Below are ways to set buildtime and runtime environment variables with Docker, and ways to use buildtime and runtime environment variables with Next.js. Note the Dockerfiles are written for simplicity to illustrate the examples. For a more optimized Next.js Docker build see my Docker multi-stage CI example.

Methods for setting environment variables with Docker

| Method | Available at buildtime | Available at runtime | Value passed to `docker build` | Value passed to `docker run` |
| --- | --- | --- | --- | --- |
| ARG | ✓ | | | |
| ENV | ✓ | ✓ | | |
| ARG + `docker build --build-arg` | ✓ | | ✓ | |
| ARG + ENV + `docker build --build-arg` | ✓ | ✓ | ✓ | |
| `docker run --env` | | ✓ | | ✓ |

Methods for using environment variables in Next.js

| Method | Set at | Available client-side (browser) | Available in SSR code | Available in Node.js | Notes |
| --- | --- | --- | --- | --- | --- |
| .env files | ? | | both | ? | process.env cannot be destructured or accessed with dynamic properties |
| NEXT_PUBLIC_ prefixed vars in .env files | buildtime | ✓ | ✓ | | process.env cannot be destructured or accessed with dynamic properties |
| env in next.config.js | buildtime | ✓ | ✓ | | process.env cannot be destructured or accessed with dynamic properties |
| publicRuntimeConfig | runtime | ✓ | ✓ | | Requires page uses SSR |
| serverRuntimeConfig | runtime | | ✓ | | |
| process.env | runtime | | ✓ | ✓ | |

Assume this package.json for the examples below

{
  "scripts": {
    "build": "next build",
    "dev": "next",
    "start": "next start"
  },
  "dependencies": {
    "next": "^10.0.9",
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  }
}

Setting static environment variables for buildtime and runtime

Environment variables can be specified with the ENV instruction in a Dockerfile. Below MY_VAR will be available to both next build and next start. For more information see https://docs.docker.com/engine/reference/builder/#env

Dockerfile

FROM node:14-alpine

ENV MY_VAR=cake

WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]

Docker build

docker build -t mytag .

Docker run

docker run mytag

Setting dynamic buildtime environment variables

Dynamic environment variables can be passed to the docker build command using --build-arg and used in the Dockerfile with the ARG statement. Below MY_VAR is an environment variable available to next build.

Note that MY_VAR is not available to next start. ARG statements act like ENV statements in that they are treated like environment variables during docker build, but they are not persisted in the image. To make them available during docker run (and next start) set the value using ENV (see the next example).

For more information see https://docs.docker.com/engine/reference/builder/#arg

Dockerfile

FROM node:14-alpine

ARG MY_VAR

WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]

Docker build

docker build --build-arg MY_VAR=cake -t mytag .

Docker run

docker run mytag

Setting dynamic buildtime environment variables that are available at runtime also

The variable in the previous example, set using ARG, is not persisted in the Docker image so it is not available at runtime. To make it available at runtime, copy the value from ARG to ENV.

Dockerfile

FROM node:14-alpine

ARG MY_VAR
ENV MY_VAR=$MY_VAR

WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]

Docker build

docker build --build-arg MY_VAR=cake -t mytag .

Docker run

docker run mytag

Setting dynamic runtime environment variables

Dynamic environment variables can be passed to docker run using the --env flag. These will not be available to next build but they will be available to next start. For more information see https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file

Dockerfile

FROM node:14-alpine
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]

Docker build

docker build -t mytag .

Docker run

docker run --env MY_VAR=cake mytag

Using buildtime environment variables

To use buildtime environment variables in Next.js code, set them using env in next.config.js, then access them via process.env in your app code. NOTE: process.env cannot be destructured or used with dynamic property access because Next.js does a string substitution at build time using the webpack DefinePlugin. For more information see https://nextjs.org/docs/api-reference/next.config.js/environment-variables

next.config.js

module.exports = {
  env: {
    MY_VAR: process.env.MY_VAR
  }
}

my-app-file.js

console.log(process.env.MY_VAR)
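A rough simulation (for illustration only; DefinePlugin operates on the parsed module, not with a regex) of the buildtime substitution, showing why destructuring doesn't work:

```javascript
// Simulates the buildtime replacement of the literal text
// `process.env.MY_VAR` with its value, as the DefinePlugin does
function substituteMyVar(source, value) {
  return source.replace(/process\.env\.MY_VAR/g, JSON.stringify(value));
}

// the literal `process.env.MY_VAR` is found and replaced
console.log(substituteMyVar("console.log(process.env.MY_VAR)", "cake"));
// -> console.log("cake")

// destructuring contains no literal `process.env.MY_VAR`, so nothing is
// replaced and MY_VAR ends up undefined in the browser bundle
console.log(substituteMyVar("const { MY_VAR } = process.env;", "cake"));
// -> const { MY_VAR } = process.env;
```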

Using runtime environment variables (client-side or server-side)

To use runtime environment variables (client-side or server-side), set them using publicRuntimeConfig in next.config.js. Then access them using getConfig from next/config. NOTE: this only works for Next.js pages where server-side rendering (SSR) is used. i.e. the page must use getServerSideProps or getInitialProps. For more information see https://nextjs.org/docs/api-reference/next.config.js/runtime-configuration

next.config.js

module.exports = {
  publicRuntimeConfig: {
    MY_VAR: process.env.MY_VAR
  }
}

my-app-file.js

import getConfig from "next/config";
const { publicRuntimeConfig } = getConfig();
console.log(publicRuntimeConfig.MY_VAR)

Using runtime environment variables (server-side only)

To use runtime environment variables (server-side only), set them using serverRuntimeConfig in next.config.js. Then access them using getConfig from next/config. For more information see https://nextjs.org/docs/api-reference/next.config.js/runtime-configuration

NOTE: this applies to files that Next.js "builds". Server-run files not processed by Next.js can use process.env to access environment variables. See below.

next.config.js

module.exports = {
  serverRuntimeConfig: {
    MY_VAR: process.env.MY_VAR
  }
}

my-app-file.js

import getConfig from "next/config";
const { serverRuntimeConfig } = getConfig();
console.log(serverRuntimeConfig.MY_VAR)

Using runtime environment variables server-side (not processed by Next.js)

For files not processed by Next.js (next build) (e.g. a server.js file run by node), runtime environment variables can be accessed on the server via process.env. NOTE: "runtime" here means when the Node.js process runs. For more information see https://nodejs.org/docs/latest-v14.x/api/process.html#process_process_env

server.js

console.log(process.env.MY_VAR)

Next.js assetPrefix

If the Next.js assetPrefix is set in next.config.js using an environment variable, the environment variable should be set at buildtime for Next.js static pages but set at runtime for server rendered pages.

next.config.js

module.exports = {
  assetPrefix: process.env.MY_ASSET_PREFIX
}

Next.js GitLab CI/CD Docker multi-stage example

This describes an example Next.js project with a GitLab CI/CD pipeline that does the following:

  • installs npm packages and builds static assets
  • runs ESLint, TypeScript, and Cypress
  • builds a Docker image for deployment
  • pushes the Docker image to the GitLab Container Registry

This example prepares a Docker image for deployment but doesn't actually deploy it. See an example CI/CD pipeline that deploys to Amazon ECS.

To increase speed and reduce image size, it uses Docker multi-stage builds.

Dockerfile

The Dockerfile defines 3 stages:

  • the "builder" stage installs npm packages and builds static assets. It produces artifacts (/app and /root/.cache) that are used by the cypress and deploy stages. It is also used to build an image used to run ESLint and TypeScript.
  • the "cypress" stage uses a different base image from the "builder" stage and is used to run cypress tests
  • the final deploy stage copies the /app directory from the "builder" stage and sets NODE_ENV to "production" and exposes port 3000
ARG BASE_IMAGE=node:14.16-alpine

# ================================================================
# builder stage
# ================================================================
FROM $BASE_IMAGE as builder
ENV NODE_ENV=test
ENV NEXT_TELEMETRY_DISABLED=1
RUN apk add --no-cache bash git
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN CI=true npm ci
COPY . ./
RUN NODE_ENV=production npm run build

# ================================================================
# cypress stage
# ================================================================
FROM cypress/base:14.16.0 as cypress
WORKDIR /app
# copy cypress from the builder image
COPY --from=builder /root/.cache /root/.cache/
COPY --from=builder /app ./
ENV NODE_ENV=test
ENV NEXT_TELEMETRY_DISABLED=1

# ================================================================
# final deploy stage
# ================================================================
FROM $BASE_IMAGE
WORKDIR /app
COPY --from=builder /app ./
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
EXPOSE 3000
CMD ["npm", "start"]

.gitlab-ci.yml

  • 3 images are built: test, cypress, and deploy. The test image is used for running ESLint and TypeScript and is needed for cypress and deploy. The cypress image is used for running Cypress.
  • it uses Docker BuildKit to make caching easier. (With BuildKit, cached layers will be automatically pulled when needed. Without BuildKit, images used for caching need to be explicitly pulled.) For comparison, see this diff adding BuildKit. Note DOCKER_BUILDKIT is set to 1 to enable BuildKit.
variables:
  # enable docker buildkit. Used with `BUILDKIT_INLINE_CACHE=1` below
  DOCKER_BUILDKIT: 1
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_TEST: $CI_REGISTRY_IMAGE/test:latest
  IMAGE_CYPRESS: $CI_REGISTRY_IMAGE/cypress:latest
  IMAGE_DEPLOY: $CI_REGISTRY_IMAGE/deploy:latest

stages:
  - build
  - misc
  - deploy

.base:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker --version
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"

build:builder:
  extends: .base
  stage: build
  script:
    - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from "$IMAGE_TEST" --target builder -t "$IMAGE_TEST" .
    - docker push "$IMAGE_TEST"

build:deployimage:
  extends: .base
  stage: misc
  needs: ["build:builder"]
  script:
    - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from "$IMAGE_DEPLOY" --cache-from "$IMAGE_TEST" --cache-from "$IMAGE_CYPRESS" -t "$IMAGE_DEPLOY" .
    - docker push "$IMAGE_DEPLOY"

test:cypress:
  extends: .base
  stage: misc
  needs: ["build:builder"]
  script:
    - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from "$IMAGE_CYPRESS" --cache-from "$IMAGE_TEST" --target cypress -t "$IMAGE_CYPRESS" .
    - docker push "$IMAGE_CYPRESS"
    - docker run "$IMAGE_CYPRESS" npm run cy:citest

test:eslint:
  extends: .base
  stage: misc
  needs: ["build:builder"]
  script:
    - docker run "$IMAGE_TEST" npm run eslint

test:typescript:
  extends: .base
  stage: misc
  needs: ["build:builder"]
  script:
    - docker run "$IMAGE_TEST" npm run tsc

deploy:
  stage: deploy
  needs: ["build:deployimage", "test:cypress", "test:eslint", "test:typescript"]
  script:
    - echo "deploy here"

.dockerignore

Adding the .git directory to .dockerignore prevented cache invalidation for the COPY . ./ command in the Dockerfile.

.git


How to run Docker in Docker on Mac

Docker in Docker can be used in GitLab CI/CD to build Docker images. This is how to run Docker in Docker on Mac.

  • create directory

    mkdir /tmp/my-project
    cd /tmp/my-project
    
  • create docker-compose.yml file:

    version: "3"
    services:
      docker-daemon:
        container_name: "my-docker-daemon"
        environment:
          DOCKER_TLS_CERTDIR: ""
        image: "docker:dind"
        networks:
          "my-network":
            aliases:
              - "docker"
        privileged: true
      docker-client:
        command: sh -c 'while [ 1 ]; do sleep 1000; done'
        container_name: "my-docker-client"
        depends_on:
          - "docker-daemon"
        environment:
          DOCKER_HOST: "tcp://docker:2375"
        image: "docker:latest"
        networks:
          "my-network": {}
    
    networks:
      "my-network":
        name: "my-network"
    
  • run the docker daemon and client containers

    docker-compose up -d
    
  • run a shell in the client container

    docker exec -it my-docker-client sh
    
  • run a docker command in the docker client container

    / # docker ps
    CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
    


Next.js Cypress GitLab CI example

This is an example Next.js project that runs a Cypress test in Docker using a GitLab CI pipeline. It also uses the GitLab Container Registry for caching purposes.

.gitlab-ci.yml

variables:
  DOCKER_TLS_CERTDIR: "/certs"

stages:
  - test

test-cypress:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:latest
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG || true
    - docker build --cache-from $IMAGE_TAG -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
    - docker run $IMAGE_TAG npm run cy:citest

Dockerfile

This uses the official Cypress Docker image (Dockerfile).

FROM cypress/base:14.16.0

WORKDIR /app
# run npm install before adding app code for better Docker caching
# https://semaphoreci.com/docs/docker/docker-layer-caching.html
COPY ./package.json /app
COPY ./package-lock.json /app
# CI=true suppresses Cypress progress log spam
RUN CI=true npm ci
COPY . /app
RUN npm run build

package.json

{
  "scripts": {
    "build": "next build",
    "cy:citest": "start-server-and-test start http://localhost:3000 cy:run",
    "cy:run": "cypress run",
    "dev": "next",
    "start": "next start"
  },
  "dependencies": {
    "next": "^10.0.9",
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  },
  "devDependencies": {
    "cypress": "^6.8.0",
    "start-server-and-test": "^1.12.1"
  }
}

cypress/integration/index_spec.js

describe("index page", () => {
  it("loads successfully", () => {
    cy.visit("http://localhost:3000");
    cy.contains("Index");
  });
});


Example Next.js GitLab CI/CD Amazon ECR and ECS deploy pipeline

I've created an example Next.js project with a GitLab CI/CD pipeline that builds a Docker image, pushes it to Amazon ECR, deploys it to an Amazon ECS Fargate cluster, and uploads static assets (JS, CSS, etc.) to Amazon S3. The example GitLab repo is here: https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example

Interesting files

Here are the interesting parts of some of the files. See the full source code in the GitLab repo.

  • .gitlab-ci.yml (view at gitlab)

    • the variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and ECR_HOST are set in the GitLab UI under "Settings" > "CI/CD" > "Variables"
    • this uses the saltycrane/aws-cli-and-docker Docker image which provides the aws v2 command line tools and docker in a single image. It is based on amazon/aws-cli and installs bc, curl, docker, jq, and tar. This idea is from Valentin's tutorial.
    variables:
      DOCKER_HOST: tcp://docker:2375
      DOCKER_TLS_CERTDIR: ""
      AWS_DEFAULT_REGION: "us-east-1"
      CI_APPLICATION_REPOSITORY: "$ECR_HOST/next-aws-ecr-ecs-gitlab-ci-cd-example"
      CI_APPLICATION_TAG: "$CI_PIPELINE_IID"
      CI_AWS_S3_BUCKET: "next-aws-ecr-ecs-gitlab-ci-cd-example"
      CI_AWS_ECS_CLUSTER: "next-aws-ecr-ecs-gitlab-ci-cd-example"
      CI_AWS_ECS_SERVICE: "next-aws-ecr-ecs-gitlab-ci-cd-example"
      CI_AWS_ECS_TASK_DEFINITION: "next-aws-ecr-ecs-gitlab-ci-cd-example"
      NEXT_JS_ASSET_URL: "https://$CI_AWS_S3_BUCKET.s3.amazonaws.com"
    
    stages:
      - build
      - deploy
    
    build:
      stage: build
      image: saltycrane/aws-cli-and-docker
      services:
        - docker:dind
      script:
        - ./bin/build-and-push-image-to-ecr
        - ./bin/upload-assets-to-s3
    
    deploy:
      stage: deploy
      image: saltycrane/aws-cli-and-docker
      services:
        - docker:dind
      script:
        - ./bin/ecs update-task-definition
    
  • Dockerfile (view at gitlab)

    The value of NEXT_JS_ASSET_URL is passed in using the --build-arg option of the docker build command run in bin/build-and-push-image-to-ecr. It is used like an environment variable in the RUN npm run build command below. In this project it is assigned to assetPrefix in next.config.js.

    FROM node:14.16-alpine
    ARG NEXT_JS_ASSET_URL
    ENV NODE_ENV=production
    WORKDIR /app
    COPY ./package.json ./
    COPY ./package-lock.json ./
    RUN npm ci
    COPY . ./
    RUN npm run build
    EXPOSE 3000
    CMD ["npm", "start"]
    
  • bin/build-and-push-image-to-ecr (view at gitlab)

    # log in to the amazon ecr docker registry
    aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_HOST"
    
    # build docker image
    docker pull "$CI_APPLICATION_REPOSITORY:latest" || true
    docker build --build-arg "NEXT_JS_ASSET_URL=$NEXT_JS_ASSET_URL" --cache-from "$CI_APPLICATION_REPOSITORY:latest" -t "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"     -t "$CI_APPLICATION_REPOSITORY:latest" .
    
    # push image to amazon ecr
    docker push "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"
    docker push "$CI_APPLICATION_REPOSITORY:latest"
    
  • bin/upload-assets-to-s3 (view at gitlab)

    LOCAL_ASSET_PATH=/tmp/upload-assets
    
    mkdir $LOCAL_ASSET_PATH
    
    # copy the generated assets out of the docker image
    docker run --rm --entrypoint tar "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" cf - .next | tar xf - -C $LOCAL_ASSET_PATH
    
    # rename .next to _next
    mv "$LOCAL_ASSET_PATH/.next" "$LOCAL_ASSET_PATH/_next"
    
    # remove directories that should not be uploaded to S3
    rm -rf "$LOCAL_ASSET_PATH/_next/cache"
    rm -rf "$LOCAL_ASSET_PATH/_next/server"
    
    # gzip files
    find $LOCAL_ASSET_PATH -regex ".*\.\(css\|svg\|js\)$" -exec gzip {} \;
    
    # strip .gz extension off of gzipped files
    find $LOCAL_ASSET_PATH -name "*.gz" -exec sh -c 'mv $1 `echo $1 | sed "s/.gz$//"`' - {} \;
    
    # upload gzipped js, css, and svg assets
    aws s3 sync --no-progress $LOCAL_ASSET_PATH "s3://$CI_AWS_S3_BUCKET" --cache-control max-age=31536000 --content-encoding gzip --exclude "*" --include "*.js"     --include "*.css" --include "*.svg"
    
    # upload non-gzipped assets
    aws s3 sync --no-progress $LOCAL_ASSET_PATH "s3://$CI_AWS_S3_BUCKET" --cache-control max-age=31536000 --exclude "*.js" --exclude "*.css" --exclude "*.svg" --exclude "*.map"
    
  • bin/ecs (view full file) (This file was copied from the gitlab-org repo)

    #!/bin/bash -e
    
    update_task_definition() {
      local -A register_task_def_args=( \
        ['task-role-arn']='taskRoleArn' \
        ['execution-role-arn']='executionRoleArn' \
        ['network-mode']='networkMode' \
        ['cpu']='cpu' \
        ['memory']='memory' \
        ['pid-mode']='pidMode' \
        ['ipc-mode']='ipcMode' \
        ['proxy-configuration']='proxyConfiguration' \
        ['volumes']='volumes' \
        ['placement-constraints']='placementConstraints' \
        ['requires-compatibilities']='requiresCompatibilities' \
        ['inference-accelerators']='inferenceAccelerators' \
      )
    
      image_repository=$CI_APPLICATION_REPOSITORY
      image_tag=$CI_APPLICATION_TAG
      new_image_name="${image_repository}:${image_tag}"
    
      register_task_definition_from_remote
    
      new_task_definition=$(aws ecs register-task-definition "${args[@]}")
      new_task_revision=$(read_task "$new_task_definition" 'revision')
      new_task_definition_family=$(read_task "$new_task_definition" 'family')
    
      # Make sure there is at least one running task (even if desiredCount gets updated again with the new task definition below)
      service_task_count=$(aws ecs describe-services --cluster "$CI_AWS_ECS_CLUSTER" --services "$CI_AWS_ECS_SERVICE" --query "services[0].desiredCount")
    
      if [[ $service_task_count == 0 ]]; then
        aws ecs update-service --cluster "$CI_AWS_ECS_CLUSTER" --service "$CI_AWS_ECS_SERVICE" --desired-count 1
      fi
    
      # Update the ECS service with the newly created task definition revision.
      aws ecs update-service \
                --cluster "$CI_AWS_ECS_CLUSTER" \
                --service "$CI_AWS_ECS_SERVICE" \
                --task-definition "$new_task_definition_family":"$new_task_revision"
    
      return 0
    }
    
    read_task() {
      val=$(echo "$1" | jq -r ".taskDefinition.$2")
      if [ "$val" == "null" ];then
        val=$(echo "$1" | jq -r ".$2")
      fi
      if [ "$val" != "null" ];then
        echo -n "${val}"
      fi
    }
    
    register_task_definition_from_remote() {
      task=$(aws ecs describe-task-definition --task-definition "$CI_AWS_ECS_TASK_DEFINITION")
      current_container_definitions=$(read_task "$task" 'containerDefinitions')
      new_container_definitions=$(echo "$current_container_definitions" | jq --arg val "$new_image_name" '.[0].image = $val')
      args+=("--family" "${CI_AWS_ECS_TASK_DEFINITION}")
      args+=("--container-definitions" "${new_container_definitions}")
      for option in "${!register_task_def_args[@]}"; do
        value=$(read_task "$task" "${register_task_def_args[$option]}")
        if [ -n "$value" ];then
          args+=("--${option}" "${value}")
        fi
      done
    }
    
    update_task_definition
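One pattern worth noting in bin/ecs: `register_task_definition_from_remote` maps CLI flag names to task-definition JSON keys and appends a flag to the `args` array only when the remote task definition has a value for it. A standalone sketch of that pattern (requires bash; `lookup` is a made-up stand-in for `read_task` with hypothetical values):

```shell
# Sketch of the flag-building pattern from bin/ecs (requires bash for
# associative arrays). lookup stands in for read_task; here cpu and memory
# are "present" in the remote task definition and networkMode is "missing".
declare -A flag_to_key=(
  ['cpu']='cpu'
  ['memory']='memory'
  ['network-mode']='networkMode'
)
lookup() {
  case "$1" in
    cpu)    printf '256' ;;
    memory) printf '512' ;;
    *)      ;;   # missing keys yield an empty string and are skipped below
  esac
}
args=()
for flag in "${!flag_to_key[@]}"; do
  value=$(lookup "${flag_to_key[$flag]}")
  if [ -n "$value" ]; then
    args+=("--${flag}" "${value}")
  fi
done
# args ends up holding --cpu 256 --memory 512; --network-mode is omitted
```

Building the command as an array (rather than a string) keeps values with spaces intact when the script finally runs `aws ecs register-task-definition "${args[@]}"`.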
    

Usage - set up AWS resources

Below are the minimum steps I needed to create the required AWS services for my example. I use the AWS region "us-east-1". For info about creating some of these services via the command line, see my Amazon ECS notes.

Create an ECR repository

Create an ECS Fargate cluster

Create an ECS task definition

  • click "Create new Task Definition" here: https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions
  • select "FARGATE" and click "Next step"
  • configure task
    • for "Task Definition Name" enter "next-aws-ecr-ecs-gitlab-ci-cd-example"
    • for "Task Role" select "None"
    • for "Task execution role" select "Create new role"
    • for "Task memory" select "0.5GB"
    • for "Task CPU" select "0.25 vCPU"
    • click "Add container"
      • for "Container Name" enter "next-aws-ecr-ecs-gitlab-ci-cd-example"
      • for "Image" enter "asdf" (this placeholder will be replaced by the GitLab CI/CD job)
      • leave "Private repository authentication" unchecked
      • for "Port mappings" enter "3000"
      • click "Add"
    • click "Create"
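The console steps above produce a task definition roughly like the following JSON. This is a hedged sketch: the "image" value is the placeholder the CI/CD job overwrites, and the execution role ARN depends on your account.

```json
{
  "family": "next-aws-ecr-ecs-gitlab-ci-cd-example",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::XXXXXXXXXXXX:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "next-aws-ecr-ecs-gitlab-ci-cd-example",
      "image": "asdf",
      "portMappings": [{ "containerPort": 3000 }]
    }
  ]
}
```

Note that Fargate requires the "awsvpc" network mode, and "256" CPU / "512" memory correspond to the 0.25 vCPU / 0.5GB selections above.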

Create an ECS service

  • click "Create" here: https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/services
  • configure service
    • for "Launch type" select "FARGATE"
    • for "Task Definition" enter "next-aws-ecr-ecs-gitlab-ci-cd-example"
    • for "Cluster" select "next-aws-ecr-ecs-gitlab-ci-cd-example"
    • for "Service name" enter "next-aws-ecr-ecs-gitlab-ci-cd-example"
    • for "Number of tasks" enter 1
    • for "Deployment type" select "Rolling update"
    • click "Next step"
  • configure network
    • select the appropriate "Cluster VPC" and two "Subnets"
    • click "Next step"
  • set Auto Scaling
    • click "Next step"
  • review
    • click "Create Service"

Open port 3000

Create a S3 bucket

  • click "Create bucket" here: https://s3.console.aws.amazon.com/s3/home?region=us-east-1
  • for "Bucket name" enter "next-aws-ecr-ecs-gitlab-ci-cd-example"
  • uncheck "Block all public access"
  • check the "I acknowledge that the current settings might result in this bucket and the objects within becoming public" checkbox
  • click "Create bucket"

Update permissions for S3 bucket
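For the uploaded assets to be publicly readable, the bucket needs a public-read bucket policy along these lines (a sketch assuming the bucket name used in this example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::next-aws-ecr-ecs-gitlab-ci-cd-example/*"
    }
  ]
}
```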

Create an IAM user

  • create an IAM user. The user must have at least ECR, ECS, and S3 permissions.
  • take note of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Usage - run the CI/CD pipeline

Fork the example gitlab repo and configure CI/CD variables

  • fork https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example
  • go to "Settings" > "CI/CD" > "Variables" and add the following variables. You can choose to "protect" and "mask" all of them.
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • ECR_HOST (This is the part of the ECR repository URI before the /. It looks something like XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com)
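In other words, ECR_HOST is everything before the "/" in the ECR repository URI, which can be extracted with plain shell parameter expansion (the account ID below is a made-up example):

```shell
# ECR_HOST is the registry host portion of the ECR repository URI,
# i.e. everything before the first "/".
repository_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/next-aws-ecr-ecs-gitlab-ci-cd-example"
ECR_HOST=${repository_uri%%/*}
```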

Edit variables in .gitlab-ci.yml

If you used names other than "next-aws-ecr-ecs-gitlab-ci-cd-example", edit the variables in .gitlab-ci.yml.

Test it

References (build/push)

References (deploy)