SaltyCrane Blog — Notes on JavaScript and web development

How to run PostgreSQL in Docker on Mac (for local development)

These are my notes for running Postgres in a Docker container for use with a local Django or Rails development server running on the host machine (not in Docker). Running in Docker keeps my database environment isolated from the rest of my system and allows running multiple versions and instances. (I previously had a problem where Homebrew upgraded Postgres when I didn't expect it to, and my existing database became incompatible. Admittedly, I didn't know Homebrew well, but it was frustrating.) The disadvantage of Docker is that it's another layer of abstraction to learn and interact with. We use Docker extensively at work, so from a mental overhead point of view, it's something I wanted to learn anyway. Currently I use the Homebrew Postgres for work, and Postgres in Docker for personal projects. I also wrote some notes on Postgres and Homebrew here.

Install Docker

Install Docker for Mac: https://docs.docker.com/docker-for-mac/install/.

Alternatively, you can install Docker using Homebrew: brew install homebrew/cask/docker

OPTION 1: Run Postgres using a single Docker command

Run a postgres container
$ docker run -d --name my_postgres -v my_dbdata:/var/lib/postgresql/data -p 54320:5432 -e POSTGRES_PASSWORD=my_password postgres:13

OPTION 2: Run Postgres using Docker Compose

Create a docker-compose.yml file
$ mkdir /tmp/myproject
$ cd /tmp/myproject

Create a new file docker-compose.yml:

version: "3"
services:
  db:
    image: "postgres:13"
    container_name: "my_postgres"
    environment:
      POSTGRES_PASSWORD: "my_password"
    ports:
      - "54320:5432"
    volumes:
      - my_dbdata:/var/lib/postgresql/data
volumes:
  my_dbdata:
Start Postgres

Pull the postgres image from hub.docker.com, create a container named "my_postgres", and start it in the background:

$ docker-compose up -d

See that it's working

See the logs:

$ docker logs -f my_postgres

Try running psql:

$ docker exec -it my_postgres psql -U postgres

Hit CTRL+D to exit.

For other commands such as starting, stopping, listing or deleting, see my Docker cheat sheet.

Create a database

$ docker exec -it my_postgres psql -U postgres -c "create database my_database"

Connect using Python and psycopg2

$ python3 -m venv myenv
$ source myenv/bin/activate
$ pip install psycopg2-binary

Create a new file named myscript.py

import psycopg2

conn = psycopg2.connect(
    host='localhost',
    port=54320,
    dbname='my_database',
    user='postgres',
    password='my_password',
)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS test (id serial PRIMARY KEY, num integer, data varchar);")
cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (100, "abcdef"))
cur.execute("SELECT * FROM test;")
result = cur.fetchone()
print(result)
conn.commit()
cur.close()
conn.close()

Run it

$ python myscript.py
(1, 100, 'abcdef')
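The same connection parameters can also be written as a single URL, which is handy for tools like dj-database-url (Django) or Rails' DATABASE_URL. As a quick sketch (assuming Node.js on the host, and the my_database created above), the built-in WHATWG URL class can split such a URL back into its parts:

```javascript
// Connection info for the Docker container above, as a single URL.
// Format: postgres://USER:PASSWORD@HOST:PORT/DBNAME
const DATABASE_URL = "postgres://postgres:my_password@localhost:54320/my_database";

// Node's built-in WHATWG URL class parses the authority and path
const url = new URL(DATABASE_URL);
console.log(url.username);          // "postgres"
console.log(url.hostname);          // "localhost"
console.log(url.port);              // "54320"
console.log(url.pathname.slice(1)); // "my_database"
```

Note that the host-side port is 54320 (not the default 5432) because of the -p 54320:5432 mapping used when starting the container.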

Errors

  • docker: Error response from daemon: Conflict. The container name "/my_postgres" is already in use by container "b27594a414db369ec4876a07021c9ea738a55b3bc0a3ad5117158367131b99a2". You have to remove (or rename) that container to be able to reuse that name.

    If you get the above error, you can remove the container by running:

    $ docker rm my_postgres
    
  • Error response from daemon: You cannot remove a running container 7e94d205b6f4ef40ff885987f11e825e94eddbcd061481e591e07c87ed7cf86e. Stop the container before attempting removal or force remove

    If you get the above error, you can stop the container by running:

    $ docker stop my_postgres
    

See also

Magit in Spacemacs (evil-magit) notes

Magit and Org are two killer apps for Emacs. Here are my Magit notes using (the also excellent) Spacemacs (which uses evil-magit).

Contents

Show git status

  • SPC g s show Magit status view

Show help

  • SPC g s show Magit status view
  • ? get help

Show git log

  • SPC g s show Magit status view
  • l l show log view

Show all commits for the current file 1

  • SPC g f l show git log for the current file

Diff a range of commits

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor on a commit
  • V to select the line
  • use j and k to position the cursor on another commit
  • d r to show a diff of the range of commits

Checkout a local branch

  • SPC g s show Magit status view
  • b b checkout a branch
  • select or enter the branch name and hit ENTER

Checkout a commit

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor on a commit
  • b b ENTER to checkout that commit

Checkout a different revision of a file

  • SPC g s show Magit status view
  • l l show log view
  • move point to the commit you want to checkout (using j and k)
  • O (capital letter O) f reset a file
  • hit ENTER to select the default revision selected above. (it will look something like master~4)
  • select a file
  • q to close the log view and see the file at the selected revision is staged

Open a different revision of a file 8

  • SPC g s show Magit status view
  • l l show log view
  • move point to the commit you want to checkout (using j and k)
  • SPC g f F (magit-find-file) to open a file at a revision
  • ENTER to use the selected commit
  • select the name of the file to open

Create a local branch from a remote branch

  • SPC g s show Magit status view
  • b c create a branch
  • select or enter the remote branch and hit ENTER
  • hit ENTER to use the same name or enter a new name and hit ENTER

Pull from upstream

  • SPC g s show Magit status view
  • F u pull from upstream

Push to upstream

  • SPC g s show Magit status view
  • P u push to upstream

Stage files and commit

  • SPC g s show Magit status view
  • use j and k to position the cursor on a file
  • TAB to show and hide the diff for the file
  • s to stage a file (u to unstage a file and x to discard changes to a file)
  • c c to commit
  • write a commit message and save with SPC f s
  • , c to finish the commit message

Stage specific hunks 2

  • SPC g s show Magit status view
  • M-n / M-p to move to the "Unstaged changes" section
  • j / k to move to the desired file
  • TAB to expand the hunks in the file
  • M-n / M-p to move to different hunks
  • s / u to stage or unstage hunks
  • x to discard a hunk
  • c c to commit
  • Enter a commit message and save with SPC f s
  • , c to finish the commit

Merge master into the current branch

  • SPC g s show Magit status view
  • m m merge
  • select or enter master and hit ENTER

Rebase the current branch onto master

  • SPC g s show Magit status view
  • r e rebase
  • select or enter master and hit ENTER

Use interactive rebase to squash commits

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor on a commit
  • r i to start the interactive rebase
  • use j and k to position the cursor on a commit to squash
  • s to mark the commit as to be squashed. (use s multiple times to squash multiple commits.)
  • , c to make it happen
  • edit the new squashed commit message and save with SPC f s
  • , c to finish

Use interactive rebase to reorder commits

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor on a commit
  • r i to start the interactive rebase
  • use j and k to position the cursor on a commit to reorder
  • use M-k or M-j to move the commit up or down
  • , c to make it happen

Revert a commit

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor on the commit you want to revert
  • O (capital letter O) to revert the commit
  • edit the commit message and save with SPC f s
  • , c to finish

(Soft) reset the last commit

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor one commit before the last one
  • O (capital letter O) s to soft reset
  • the selected commit should be e.g. master~1. Hit ENTER

Stash changes

  • SPC g s show Magit status view
  • z z stash changes
  • enter stash message and hit ENTER

Pop stash

  • SPC g s show Magit status view
  • z p pop from stash
  • select the stash to pop and hit ENTER

Copy git commit SHA 3

  • SPC g s show Magit status view
  • l l show log view
  • use j and k to position the cursor on a commit
  • y s copy the git commit SHA

Copy text from a Magit buffer 4

  • SPC g s show Magit status view
  • \ switch to text mode
  • copy text using normal vim keystrokes
  • \ switch back to Magit mode

Run a shell command 5

  • SPC g s show Magit status view
  • ! s run a shell command
  • enter a command to run and hit ENTER

List all branches 6

  • SPC g s show Magit status view
  • y r show refs

Jump to the next/prev section in the status view 7

  • SPC g s show Magit status view
  • g j jump to the next section
  • g k jump to the previous section

Create a worktree from an existing branch

  • SPC g s show Magit status view
  • ? % b create a worktree from an existing branch
  • select or enter the branch name and hit ENTER
  • enter the path for the worktree and hit ENTER

Switch to a worktree

  • SPC g s show Magit status view
  • ? % g visit worktree
  • select or enter the path of the worktree and hit ENTER

Delete a worktree

  • SPC g s show Magit status view
  • ? % k delete a worktree
  • select or enter the path of the worktree and hit ENTER

References

  1. https://twitter.com/iLemming/status/1058507342830923776
  2. https://github.com/emacs-evil/evil-magit
  3. https://twitter.com/a_simpson/status/749316494224265216
  4. https://twitter.com/iLemming/status/986074309234802688
  5. https://twitter.com/_wilfredh/status/689955624080248833
  6. https://emacs.stackexchange.com/a/27148
  7. https://www.youtube.com/watch?v=j-k-lkilbEs
  8. https://emacs.stackexchange.com/questions/7655/how-can-i-open-a-specific-revision-of-a-file-with-magit/7683#7683

Caching a filtered list of results w/ Redux, React Router, and redux-promise-memo

This post shows how to cache API data for a React + Redux application using ideas from my library, redux-promise-memo. The example app displays a filtered list of vehicles, a sidebar with make and model filters, and a detail page for each vehicle. Caching is used to prevent re-fetching API data when navigating between the detail pages and the list page.

In addition to React and Redux, this example uses React Router and redux-promise-middleware, though alternatives like redux-pack or Gluestick's promise middleware can also be used.

The example is broken into 3 sections: 1. Basic features (no caching), 2. Caching (manual setup), and 3. Caching with redux-promise-memo.

Basic features
  • filtering vehicles by make and model is done by the backend vehicles API
  • when a make is selected, the models API populates the models filter for the selected make
  • filter parameters (make and model) are stored in the URL query string to support deep linking to a page of filtered results and to support browser "back" and "forward" navigation
  • each vehicle detail page also has a unique route using the vehicle id in the route
Highlighted feature - caching
  • API responses are not re-fetched when moving back and forward between pages

Code for basic features (no caching)

The full example code is here on github. A demo is deployed here.

VehiclesFilters.js
class VehiclesFilters extends React.Component {
  componentDidMount() {
    this._fetchData();
  }

  componentDidUpdate(prevProps) {
    if (this.props.query !== prevProps.query) {
      this._fetchData();
    }
  }

  _fetchData() {
    let { dispatch, query } = this.props;
    dispatch(actions.fetchModels(query.make));
  }

  render() {
    let { changeQuery, models, query } = this.props;
    return (
      <Container>
        <Select
          label="Make"
          onChange={e =>
            changeQuery({ make: e.currentTarget.value, model: "" })
          }
          options={["All makes", "Acura", "BMW", "Cadillac", "..."]}
          value={query.make || ""}
        />
        {query.make && (
          <Select
            label="Model"
            onChange={e => changeQuery({ model: e.currentTarget.value })}
            options={["All models", ...models]}
            value={query.model || ""}
          />
        )}
      </Container>
    );
  }
}

export default compose(
  withVehiclesRouter,
  connect(state => ({ models: state.models }))
)(VehiclesFilters);

The VehiclesFilters component has <select> inputs for the "Make" and "Model" filters.

  1. when a user changes the make to "BMW", the changeQuery function adds the ?make=BMW query string to the URL
  2. when the query string is updated, componentDidUpdate calls fetchModels which calls the models API
  3. when the API responds, the models list is stored in Redux at state.models
  4. when the Redux state changes, the "Model" <select> is updated with the new list of models

Notes:

  • withVehiclesRouter is a higher-order component that adds the following props to VehiclesFilters:

    • query - the parsed query string
    • changeQuery - a function used to update the query string

    See the implementation here

  • the route is the single source of truth for the make and model parameters. They are stored only in the route and not in Redux.
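The implementation of withVehiclesRouter is linked above; purely as an illustration, a changeQuery-style helper might merge updates into the current query string like this (mergeQueryString is a hypothetical name, not from the post's code):

```javascript
// Hypothetical helper sketching the changeQuery behavior: merge updates
// into an existing query string, dropping any keys set to "".
function mergeQueryString(search, updates) {
  const params = new URLSearchParams(search);
  for (const [key, value] of Object.entries(updates)) {
    if (value === "") {
      params.delete(key); // e.g. clearing the model when the make changes
    } else {
      params.set(key, value);
    }
  }
  const qs = params.toString();
  return qs ? "?" + qs : "";
}

// selecting a new make clears the model, as in the <Select> onChange above
console.log(mergeQueryString("?make=BMW&model=M3", { make: "Acura", model: "" }));
// → "?make=Acura"
```

A real implementation would then push the resulting string onto react-router's history so the route stays the single source of truth.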

VehiclesList.js
class VehiclesList extends React.Component {
  componentDidMount() {
    this._fetchData();
  }

  componentDidUpdate(prevProps) {
    if (this.props.query !== prevProps.query) {
      this._fetchData();
    }
  }

  _fetchData() {
    let { dispatch, query } = this.props;
    dispatch(actions.fetchVehicles(query));
  }

  render() {
    let { isLoading, vehicles } = this.props;
    return (
      <Container>
        {isLoading ? (
          <Spinner />
        ) : (
          vehicles.map(vehicle => (
            <Link key={vehicle.id} to={`/vehicles/${vehicle.id}`}>
              <VehicleCard {...vehicle} />
            </Link>
          ))
        )}
      </Container>
    );
  }
}

export default compose(
  withVehiclesRouter,
  connect(state => ({
    isLoading: state.isLoading,
    vehicles: state.vehicles
  }))
)(VehiclesList);

The VehiclesList component gets data the same way the "Model" <select> input does.

  1. the previous VehiclesFilters component updates the route query string with a make or model
  2. when the route query string is updated, this component's componentDidUpdate calls fetchVehicles which calls the vehicles API
  3. when the API responds, the vehicle list is stored in Redux at state.vehicles
  4. when the Redux state changes, this component is updated with the new list of vehicles.

Each vehicle card is wrapped with a react-router <Link>. Clicking the vehicle card navigates to a new route, /vehicles/{vehicleId}.

VehicleDetail.js
class VehicleDetail extends React.Component {
  componentDidMount() {
    let { dispatch, vehicleId } = this.props;
    dispatch(actions.fetchVehicle(vehicleId));
  }

  render() {
    let { isLoading, vehicle } = this.props;
    return isLoading ? <Spinner /> : <VehicleCard {...vehicle} />;
  }
}

export default compose(
  withVehiclesRouter,
  connect(state => ({
    isLoading: state.isLoading,
    vehicle: state.vehicle
  }))
)(VehicleDetail);

VehicleDetail gets data in the same way as the "Model" filter and VehiclesList. One difference is that it doesn't need to use componentDidUpdate because the API input parameter (vehicleId) never changes.

  1. to display a vehicle detail page, a vehicle <Link> is clicked in the previous VehiclesList component which changes the route to /vehicles/{vehicleId}
  2. when the route changes, this component is rendered and passed the vehicleId prop.
  3. when this component is rendered, componentDidMount calls fetchVehicle which calls the vehicle detail API

The withVehiclesRouter higher-order component takes match.params.vehicleId from react-router and passes it to VehicleDetail as vehicleId.

App.js
let store = createStore(reducer, applyMiddleware(promiseMiddleware()));

let VehiclesPage = () => (
  <React.Fragment>
    <VehiclesFilters />
    <VehiclesList />
  </React.Fragment>
);

let App = () => (
  <Provider store={store}>
    <BrowserRouter>
      <Switch>
        <Route component={VehicleDetail} path="/vehicles/:vehicleId" />
        <Route component={VehiclesPage} path="/vehicles" />
      </Switch>
    </BrowserRouter>
  </Provider>
);

This shows react-router route configuration for the app and also the addition of redux-promise-middleware.

actions.js
export let fetchModels = make => ({
  type: "FETCH_MODELS",
  payload: fakeModelsApi(make)
});

export let fetchVehicle = vehicleId => ({
  type: "FETCH_VEHICLE",
  payload: fakeVehicleApi(vehicleId)
});

export let fetchVehicles = params => ({
  type: "FETCH_VEHICLES",
  payload: fakeVehiclesApi(params)
});
reducers.js
let isLoading = (state = false, action) => {
  switch (action.type) {
    case "FETCH_VEHICLE_PENDING":
    case "FETCH_VEHICLES_PENDING":
      return true;
    case "FETCH_VEHICLE_FULFILLED":
    case "FETCH_VEHICLES_FULFILLED":
      return false;
    default:
      return state;
  }
};

let models = (state = [], action) => {
  switch (action.type) {
    case "FETCH_MODELS_FULFILLED":
      return action.payload;
    default:
      return state;
  }
};

let vehicle = (state = [], action) => {
  switch (action.type) {
    case "FETCH_VEHICLE_FULFILLED":
      return action.payload;
    default:
      return state;
  }
};

let vehicles = (state = [], action) => {
  switch (action.type) {
    case "FETCH_VEHICLES_FULFILLED":
      return action.payload;
    default:
      return state;
  }
};

export default combineReducers({
  isLoading,
  models,
  vehicle,
  vehicles
});

These are the actions and reducers that are using redux-promise-middleware.

  • fakeModelsApi, fakeVehicleApi, and fakeVehiclesApi are meant to mimic an HTTP client like fetch or axios. They return a promise that resolves with some canned data after a 1 second delay.
  • the models, vehicle, and vehicles reducers store the API responses in the Redux state

Caching (manual setup)

In the above setup, API calls are made unnecessarily when navigating back and forth between the vehicle detail pages and the main vehicles list. The goal is to eliminate the unnecessary calls.

The vehicle data is already "cached" in Redux, but the app needs to be smarter about when to fetch new data. In some apps, a component can check if API data exists in Redux and skip the API call if it is present. In this case, however, doing this would not allow updating the results when the make or model filters change.

The approach I took in redux-promise-memo was to fetch only when API input parameters changed:

  • the API input parameters (e.g. make, model) are stored in Redux
  • when deciding whether to make a new API call, the current API input parameters are tested to see if they match the parameters previously stored in Redux
  • if they match, the API call is skipped, and the data already stored in Redux is used

Below are changes that can be made to implement this idea without using the library. The solution with the redux-promise-memo library is shown at the end.

The full example code is here on github. A demo is deployed here.

actions.js
export let fetchVehicles = params => ({
  type: "FETCH_VEHICLES",
  payload: fakeVehiclesApi(params),
  meta: { params } // <= NEW: add the API params to the action
});
reducers.js
// NEW: add this vehiclesCacheParams reducer
let vehiclesCacheParams = (state = null, action) => {
  switch (action.type) {
    case "FETCH_VEHICLES_FULFILLED":
      return action.meta.params;
    default:
      return state;
  }
};

The actions and reducers are updated to store the API parameters.

  • in the fetchVehicles action creator, the API params are added to the action
  • a new reducer, vehiclesCacheParams, is added which stores those params in Redux when the API succeeds
VehiclesList.js
class VehiclesList extends React.Component {
  _fetchData() {
    let { cacheParams, dispatch, query } = this.props;

    // NEW: add this "if" statement to check if API params have changed
    if (JSON.stringify(query) !== JSON.stringify(cacheParams)) {
      dispatch(actions.fetchVehicles(query));
    }
  }

  // ...the rest is the same as before
}

export default compose(
  withVehiclesRouter,
  connect(state => ({
    cacheParams: state.vehiclesCacheParams, // <= NEW line here
    isLoading: state.isLoading,
    vehicles: state.vehicles
  }))
)(VehiclesList);
  • in VehiclesList, the vehiclesCacheParams state is added to the connect call
  • then an "if" condition is added around the fetchVehicles action dispatch. JSON.stringify is used to compare arguments that are objects or arrays.
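As a standalone illustration of that "if" check (the function name here is mine, not from the example code), note that JSON.stringify gives a cheap deep-equality test for small plain objects, but it is sensitive to key order: {a: 1, b: 2} and {b: 2, a: 1} stringify differently even though they are deeply equal. For query-string params built the same way each time, that is usually acceptable.

```javascript
// Hypothetical helper: the params-comparison check from _fetchData,
// extracted into a function. Returns true when a new fetch is needed.
function paramsChanged(query, cacheParams) {
  return JSON.stringify(query) !== JSON.stringify(cacheParams);
}

console.log(paramsChanged({ make: "BMW" }, { make: "BMW" }));              // false: skip fetch
console.log(paramsChanged({ make: "BMW", model: "M3" }, { make: "BMW" })); // true: fetch
console.log(paramsChanged({ make: "BMW" }, null));                         // true: first fetch
```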

Caching using redux-promise-memo

To avoid adding this boilerplate for every API, I abstracted the above idea into the redux-promise-memo library.

  • it uses a reducer to store the API input parameters like the manual solution
  • it provides a memoize function to wrap promise-based action creators (like those created with redux-promise-middleware above)
  • the memoize wrapper checks if the API input arguments have changed. If the input arguments match what is stored in Redux, the API call is skipped.
  • it also stores the "loading" and "success" state of the API. It skips the API call if the previous API call is loading or successful, but re-fetches if there was an error.
  • it has an option to support multiple caches per API. If data is stored in different places in Redux per set of input arguments, this option can be used.
  • it provides support for "invalidating" the cache using any Redux actions.
  • it provides support for other libraries besides redux-promise-middleware. Custom matchers for other libraries can be written. Examples of matchers are here
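The core of the memoize idea can be sketched in a few lines. This is an illustration of the concept only, not the library's actual implementation, and the toy dispatch/getState below stands in for Redux with redux-thunk:

```javascript
// Sketch of the memoize idea: return a thunk that skips dispatching the
// real action when the arguments match what was last stored under `key`.
function memoize(actionCreator, key) {
  return (...args) => (dispatch, getState) => {
    const lastArgs = getState()._memo[key];
    if (JSON.stringify(lastArgs) === JSON.stringify(args)) {
      return; // same inputs as last time: skip the API call
    }
    dispatch({ type: "MEMOIZE", meta: { key, args } }); // remember the inputs
    return dispatch(actionCreator(...args));
  };
}

// --- toy store standing in for Redux + redux-thunk ---
const memoSlice = {};
let apiCallCount = 0;
const getState = () => ({ _memo: memoSlice });
function dispatch(action) {
  if (typeof action === "function") return action(dispatch, getState); // thunk
  if (action.type === "MEMOIZE") memoSlice[action.meta.key] = action.meta.args;
  if (action.type === "FETCH_VEHICLES") apiCallCount += 1; // pretend API call
  return action;
}

const fetchVehicles = params => ({ type: "FETCH_VEHICLES", payload: params });
const memoizedFetchVehicles = memoize(fetchVehicles, "FETCH_VEHICLES");

dispatch(memoizedFetchVehicles({ make: "BMW" }));   // fetches
dispatch(memoizedFetchVehicles({ make: "BMW" }));   // skipped: same params
dispatch(memoizedFetchVehicles({ make: "Acura" })); // fetches: params changed
console.log(apiCallCount); // 2
```

The real library additionally tracks the pending/fulfilled/rejected state of each call, as described in the bullets above.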

The full example code is here on github. A demo is deployed here.

Install redux-promise-memo

redux-promise-memo uses redux-thunk so it needs to be installed as well.

npm install redux-promise-memo redux-thunk
index.js
import thunk from "redux-thunk";

let store = createStore(reducer, applyMiddleware(thunk, promiseMiddleware()));

Add redux-thunk to redux middleware

reducers.js
// NEW: remove the vehiclesCacheParams reducer

// NEW: add the _memo reducer
import {
  createMemoReducer,
  reduxPromiseMiddlewareConfig
} from "redux-promise-memo";

let _memo = createMemoReducer(reduxPromiseMiddlewareConfig);

let rootReducer = combineReducers({
  _memo,
  isLoading,
  models,
  vehicle,
  vehicles
});
  • create the redux-promise-memo reducer and add it to the root reducer. IMPORTANT: the Redux state slice must be named _memo for it to work. I tried to create a Redux enhancer to handle this automatically, but did not get it to work reliably with other libraries.
  • remove the *CacheParams reducers from our manual solution that stored the API input params
actions.js
import { memoize } from "redux-promise-memo";

let _fetchModels = make => ({
  type: "FETCH_MODELS",
  payload: fakeModelsApi(make)
  // NEW: remove the API params from the action
});
// NEW: wrap the action creator with `memoize`
export let memoizedFetchModels = memoize(_fetchModels, "FETCH_MODELS");

let _fetchVehicle = vehicleId => ({
  type: "FETCH_VEHICLE",
  payload: fakeVehicleApi(vehicleId)
});
export let memoizedFetchVehicle = memoize(_fetchVehicle, "FETCH_VEHICLE");

let _fetchVehicles = params => ({
  type: "FETCH_VEHICLES",
  payload: fakeVehiclesApi(params)
});
export let memoizedFetchVehicles = memoize(_fetchVehicles, "FETCH_VEHICLES");
  • wrap the action creators with the memoize higher order function
  • specify a "key" used to separate parameters in the reducer; using the action type as the key is recommended
VehiclesList.js
class VehiclesList extends React.Component {
  componentDidUpdate(prevProps) {
    // NEW: removed "if" condition here
    this._fetchData();
  }

  _fetchData() {
    let { dispatch, query } = this.props;
    // NEW: removed "if" condition here
    dispatch(actions.memoizedFetchVehicles(query));
  }

  // ...the rest is the same as before
}

export default compose(
  withVehiclesRouter,
  connect(state => ({
    // NEW: removed the cacheParams line here
    isLoading: state.isLoading,
    vehicles: state.vehicles
  }))
)(VehiclesList);
  • update components to remove "if" conditions because the library does the check
  • remove the use of state.vehiclesCacheParams

This solution should behave similarly to the manual solution with less boilerplate code. In the development environment only, it also logs console messages showing if the API is requesting, loading, or cached.

What does Redux's combineReducers do?

Redux uses a single root reducer function that accepts the current state (and an action) as input and returns a new state. Users of Redux may write the root reducer function in many different ways, but a recommended common practice is breaking up the state object into slices and using a separate sub reducer to operate on each slice of the state. Usually, Redux's helper utility, combineReducers is used to do this. combineReducers is a nice shortcut because it encourages the good practice of reducer composition, but the abstraction can prevent understanding the simplicity of Redux reducers.

The example below shows how a root reducer could be written without combineReducers:

Given a couple of reducers:

function apples(state, action) {
  // do stuff
  return state;
}

function bananas(state, action) {
  // do stuff
  return state;
}

This reducer created with combineReducers:

const rootReducer = combineReducers({ apples, bananas });

is equivalent to this reducer:

function rootReducer(state = {}, action) {
  return {
    apples: apples(state.apples, action),
    bananas: bananas(state.bananas, action),
  };
}

Usage without ES6 concise properties

The above example used ES6 concise properties, but combineReducers can also be used without concise properties. This reducer created with combineReducers:

const rootReducer = combineReducers({
  a: apples,
  b: bananas
});

is equivalent to this reducer:

function rootReducer(state = {}, action) {
  return {
    a: apples(state.a, action),
    b: bananas(state.b, action),
  };
}

Understanding how combineReducers works can be helpful in learning other ways reducers can be used.
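To make the equivalence concrete, here is a minimal runnable sketch. The combineReducers below is a hand-rolled stand-in for illustration (it mirrors the delegation behavior described above, but is not Redux's actual source):

```javascript
// Minimal illustration of what combineReducers produces: a root reducer
// that delegates each state slice to its slice reducer by key.
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const next = {};
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action); // slice in, slice out
    }
    return next;
  };
}

// two toy slice reducers with default states
function apples(state = 0, action) {
  return action.type === "ADD_APPLE" ? state + 1 : state;
}
function bananas(state = 0, action) {
  return action.type === "ADD_BANANA" ? state + 1 : state;
}

const rootReducer = combineReducers({ apples, bananas });

let state = rootReducer(undefined, { type: "@@INIT" }); // { apples: 0, bananas: 0 }
state = rootReducer(state, { type: "ADD_APPLE" });
console.log(state); // { apples: 1, bananas: 0 }
```

Writing the root reducer by hand, as in the examples above, produces exactly the same state shape; combineReducers just removes the repetition.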


How to set up a React Apollo client with a Graphcool GraphQL server

Graphcool is a service similar to Firebase except it is used to create GraphQL APIs instead of RESTful ones. Apollo Client is a GraphQL client (alternative to Relay) that can be used with React (and other frontend frameworks). Below is how to create a Graphcool GraphQL service and query it from a React frontend using Apollo Client. I also used Next.js to set up the React project. I am running Node 8.4.0 on macOS Sierra. The code for this example is on github: https://github.com/saltycrane/graphcool-apollo-example.

Jump to: Graphcool setup, Next.js setup, or Apollo setup.

Graphcool GraphQL server

Graphcool Apollo Quickstart: https://www.graph.cool/docs/quickstart/frontend/react/apollo-tijghei9go

  • Install the Graphcool command-line tool (step 2 in quickstart)
    $ npm install -g graphcool-framework 
    
    $ graphcool-framework --version
    graphcool-framework/0.11.5 (darwin-x64) node-v8.4.0 
    
  • Create a Graphcool service (step 3 in quickstart)
    $ cd /tmp
    $ mkdir graphcool-apollo-example
    $ cd graphcool-apollo-example 
    
    $ graphcool-framework init myserver
    Creating a new Graphcool service in myserver... ✔
    
    Written files:
    ├─ types.graphql
    ├─ src
    │  ├─ hello.js
    │  └─ hello.graphql
    ├─ graphcool.yml
    └─ package.json
    
    To get started, cd into the new directory:
      cd myserver
    
    To deploy your Graphcool service:
      graphcool deploy
    
    To start your local Graphcool cluster:
      graphcool local up
    
    To add facebook authentication to your service:
      graphcool add-template auth/facebook
    
    You can find further instructions in the graphcool.yml file,
    which is the central project configuration.
    
  • Add a Post type definition. (step 4 in quickstart) Edit myserver/types.graphql:
    type Post @model {
      id: ID! @isUnique    # read-only (managed by Graphcool)
      createdAt: DateTime! # read-only (managed by Graphcool)
      updatedAt: DateTime! # read-only (managed by Graphcool)
    
      description: String!
      imageUrl: String!
    }
  • Deploy the Graphcool server (step 5 in quickstart). After running the deploy command, it will ask to select a cluster, select a name, and create an account with Graphcool.
    $ cd myserver
    $ graphcool-framework deploy
    ? Please choose the cluster you want to deploy to
    
    shared-eu-west-1
    
    Auth URL: https://console.graph.cool/cli/auth?cliToken=xxxxxxxxxxxxxxxxxxxxxxxxx&authTrigger=auth
    Authenticating... ✔
    
    Creating service myserver in cluster shared-eu-west-1... ✔
    Bundling functions... 2.3s
    Deploying... 1.3s
    
    Success! Created the following service:
    
    Types
    
      Post
       + A new type with the name `Post` is created.
       ├─ +  A new field with the name `createdAt` and type `DateTime!` is created.
       ├─ +  A new field with the name `updatedAt` and type `DateTime!` is created.
       ├─ +  A new field with the name `description` and type `String!` is created.
       └─ +  A new field with the name `imageUrl` and type `String!` is created.
    
    Resolver Functions
    
      hello
       + A new resolver function with the name `hello` is created.
    
    Permissions
    
      Wildcard Permission
       ? The wildcard permission for all operations is added.
    
    Here are your GraphQL Endpoints:
    
      Simple API:        https://api.graph.cool/simple/v1/cjc2uk4kx0vzo01603rkov391
      Relay API:         https://api.graph.cool/relay/v1/cjc2uk4kx0vzo01603rkov391
      Subscriptions API: wss://subscriptions.graph.cool/v1/cjc2uk4kx0vzo01603rkov391 
    
  • Run some queries in the Graphcool playground (step 6 in quickstart). Run the following command to open a new browser tab with the Graphcool playground:
    $ graphcool-framework playground 
    
    Run a query to create a post:
    mutation {
      createPost(
        description: "A rare look into the Graphcool office"
        imageUrl: "https://media2.giphy.com/media/xGWD6oKGmkp6E/200_s.gif"
      ) {
        id
      }
    }
    Run a query to retrieve all posts:
    query {
      allPosts {
        id
        description
        imageUrl
      }
    }
  • Done with Graphcool server. Also see https://console.graph.cool/server/schema/types

Next.js React frontend

Set up a React frontend using the Next.js framework. Next.js setup: https://github.com/zeit/next.js#setup

  • Create a myclient directory alongside the myserver directory:
    $ cd /tmp/graphcool-apollo-example
    $ mkdir myclient
    $ cd myclient 
    
  • Create myclient/package.json:
    {
      "dependencies": {
        "next": "4.2.1",
        "react": "16.2.0",
        "react-dom": "16.2.0"
      },
      "scripts": {
        "dev": "next",
        "build": "next build",
        "start": "next start"
      }
    }
  • Install Next.js and React:
    $ npm install
    npm WARN deprecated [email protected]: this package has been reintegrated into npm and is now out of date with respect to npm
    npm WARN deprecated @semantic-release/[email protected]: Use @semantic-release/npm instead
    
    > [email protected] install /private/tmp/graphcool-apollo-example/myclient/node_modules/fsevents
    > node install
    
    [fsevents] Success: "/private/tmp/graphcool-apollo-example/myclient/node_modules/fsevents/lib/binding/Release/node-v57-darwin-x64/fse.node" already installed
    Pass --update-binary to reinstall or --build-from-source to recompile
    
    > [email protected] postinstall /private/tmp/graphcool-apollo-example/myclient/node_modules/uglifyjs-webpack-plugin
    > node lib/post_install.js
    
    npm notice created a lockfile as package-lock.json. You should commit this file.
    npm WARN myclient No description
    npm WARN myclient No repository field.
    npm WARN myclient No license field.
    
    + next@4.2.1
    + react@16.2.0
    + react-dom@16.2.0
    added 838 packages in 20.012s
    
  • Create a Hello World page
    $ mkdir pages
    
    Create myclient/pages/index.js:
    const Home = () => <div>Hello World</div>;
    
    export default Home;
    
  • Run the Next.js dev server
    $ npm run dev 
    
    Go to http://localhost:3000 in the browser

Apollo Client

Set up Apollo Client to query the Graphcool server. Apollo Client setup: https://www.apollographql.com/docs/react/basics/setup.html

  • Install Apollo Client
    $ cd /tmp/graphcool-apollo-example/myclient 
    
    $ npm install apollo-client-preset react-apollo graphql-tag graphql 
    npm WARN [email protected] requires a peer of graphql@^0.11.0 but none is installed. You must install peer dependencies yourself.
    npm WARN myclient No description
    npm WARN myclient No repository field.
    npm WARN myclient No license field.
    
    + [email protected]
    + [email protected]
    + [email protected]
    + [email protected]
    added 20 packages in 5.476s
    
  • Set up Apollo Client. Edit myclient/pages/index.js:
    import { InMemoryCache } from "apollo-cache-inmemory";
    import { ApolloClient } from "apollo-client";
    import { HttpLink } from "apollo-link-http";
    import { ApolloProvider } from "react-apollo";
    
    const client = new ApolloClient({
      link: new HttpLink({
        // Replace this with your Graphcool server URL
        uri: "https://api.graph.cool/simple/v1/cjc2uk4kx0vzo01603rkov391",
      }),
      cache: new InMemoryCache(),
    });
    
    const Home = () => <div>Hello World</div>;
    
    const App = () => (
      <ApolloProvider client={client}>
        <Home />
      </ApolloProvider>
    );
    
    export default App;
    
  • Try running the dev server
    $ npm run dev 
    
    Go to http://localhost:3000 in the browser
  • But I get this error:
    Error: fetch is not found globally and no fetcher passed, to fix pass a fetch for
          your environment like https://www.npmjs.com/package/node-fetch.
    
          For example:
            import fetch from 'node-fetch';
            import { createHttpLink } from 'apollo-link-http';
    
            const link = createHttpLink({ uri: '/graphql', fetch: fetch });
    
        at warnIfNoFetch (/private/tmp/graphcool-apollo-example/myclient/node_modules/apollo-link-http/lib/httpLink.js:72:15)
        at createHttpLink (/private/tmp/graphcool-apollo-example/myclient/node_modules/apollo-link-http/lib/httpLink.js:89:5)
        at new HttpLink (/private/tmp/graphcool-apollo-example/myclient/node_modules/apollo-link-http/lib/httpLink.js:159:34)
    at Object.<anonymous> (/private/tmp/graphcool-apollo-example/myclient/.next/dist/pages/index.js:25:9)
          at Module._compile (module.js:573:30)
          at Module._compile (/private/tmp/graphcool-apollo-example/myclient/node_modules/source-map-support/source-map-support.js:492:25)
          at Object.Module._extensions..js (module.js:584:10)
          at Module.load (module.js:507:32)
          at tryModuleLoad (module.js:470:12)
          at Function.Module._load (module.js:462:3)
  • Install node-fetch. This is needed because Next.js runs on a Node server in addition to the browser.
    $ npm install node-fetch
    npm WARN [email protected] requires a peer of graphql@^0.11.0 but none is installed. You must install peer dependencies yourself.
    npm WARN myclient No description
    npm WARN myclient No repository field.
    npm WARN myclient No license field.
    
    + [email protected]
    updated 1 package in 4.028s
    
  • Update the code to use node-fetch as described in the error message:
    import { InMemoryCache } from "apollo-cache-inmemory";
    import { ApolloClient } from "apollo-client";
    import { createHttpLink } from "apollo-link-http";
    import gql from "graphql-tag";
    import fetch from "node-fetch";
    import { ApolloProvider } from "react-apollo";
    
    const client = new ApolloClient({
      link: createHttpLink({
        // Replace this with your Graphcool server URL
        uri: "https://api.graph.cool/simple/v1/cjc2uk4kx0vzo01603rkov391",
        fetch: fetch,
      }),
      cache: new InMemoryCache(),
    });
    
    class Home extends React.Component {
      componentDidMount() {
        client
          .query({
            query: gql`
              {
                allPosts {
                  id
                  description
                  imageUrl
                }
              }
            `,
          })
          .then(console.log);
      }
    
      render() {
        return <div>Look in the devtools console</div>;
      }
    }
    
    const App = () => (
      <ApolloProvider client={client}>
        <Home />
      </ApolloProvider>
    );
    
    export default App;
    
  • Try running the dev server again
    $ npm run dev 
    
    Go to http://localhost:3000 in the browser
  • It works. Open the browser devtools console and see the result of the query:
    {
      "data": {
        "allPosts": [
          {
            "id": "cjbffxjq5rrvd0192qmptpm2f",
            "description": "A rare look into the Graphcool office",
            "imageUrl": "https://media2.giphy.com/media/xGWD6oKGmkp6E/200_s.gif",
            "__typename": "Post"
          }
        ]
      },
      "loading": false,
      "networkStatus": 7,
      "stale": false
    }
  • Use the graphql higher-order component to make things nicer. Edit myclient/pages/index.js:
    import { InMemoryCache } from "apollo-cache-inmemory";
    import { ApolloClient } from "apollo-client";
    import { createHttpLink } from "apollo-link-http";
    import gql from "graphql-tag";
    import fetch from "node-fetch";
    import { ApolloProvider, graphql } from "react-apollo";
    
    const client = new ApolloClient({
      link: createHttpLink({
        // Replace this with your Graphcool server URL
        uri: "https://api.graph.cool/simple/v1/cjc2uk4kx0vzo01603rkov391",
        fetch: fetch,
      }),
      cache: new InMemoryCache(),
    });
    
    const MY_QUERY = gql`
      {
        allPosts {
          id
          description
          imageUrl
        }
      }
    `;
    
    const Home = ({ data }) => <pre>{JSON.stringify(data, null, 2)}</pre>;
    
    const HomeWithData = graphql(MY_QUERY)(Home);
    
    const App = () => (
      <ApolloProvider client={client}>
        <HomeWithData />
      </ApolloProvider>
    );
    
    export default App;
    
  • Run the dev server
    $ npm run dev 
    
    Go to http://localhost:3000 in the browser and see this result on the page:
    {
      "variables": {},
      "loading": false,
      "networkStatus": 7,
      "allPosts": [
        {
          "id": "cjbffxjq5rrvd0192qmptpm2f",
          "description": "A rare look into the Graphcool office",
          "imageUrl": "https://media2.giphy.com/media/xGWD6oKGmkp6E/200_s.gif",
          "__typename": "Post"
        }
      ]
    }

Docker cheat sheet

An image is a read-only template with instructions for creating a Docker container.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container. A container is a process which runs on a host. ...the container process that runs is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.

Listing

$ docker ps
$ docker ps -a
$ docker images
$ docker volume ls

Removing

$ docker rm my_container
$ docker rmi my_image
$ docker volume rm my_volume
$ docker system prune

Pulling images

$ docker pull postgres:13

Publishing images

$ docker login
$ docker tag my_image myusername/my_image:mytag
$ docker push myusername/my_image:mytag

See also: Get started with Docker - Share your image

Building images from Dockerfiles

$ docker build -t my_image .

Creating containers

$ docker create --name my_container my_image

Starting / stopping containers

$ docker start my_container
$ docker stop my_container
$ docker restart my_container

Running containers

docker run is a combination of (optionally) docker pull, docker create, and docker start. See also the Docker run reference.

$ docker run -d --name my_container my_image
$ docker run --rm -it my_image bash

volumes

$ docker volume create my_volume
$ docker volume inspect my_volume
$ docker run -v my_volume:/data my_image

ports & networking

$ docker run -p 54320:5432 my_image
$ docker port my_container
$ docker network ls

Interacting with containers

$ docker exec -it my_container bash
$ docker attach my_container
$ docker cp my_container:/path/to/file .

Getting information

$ docker logs -f my_container
$ docker inspect my_container
$ docker stats

* Many of these docker commands have docker-compose equivalents; those assume a docker-compose.yml file in the current directory.

See also


  1. Reference: Cloning Docker data volumes. See also: Docker volumes guide: Backup, restore, or migrate data volumes. [back]

How to map Caps Lock to Escape when tapped and Control when held on Mac OS Sierra

Escape and Control are useful keys when using Vim so it's nice to map them to a more convenient key like Caps Lock. I had been using Karabiner to do this, but Karabiner doesn't work on Mac OS Sierra. Fortunately Karabiner-Elements provides a subset of the features planned for the next generation Karabiner including remapping Caps Lock to Escape when tapped and Control when held down. The solution below is from @zeekay on issue #8. I am using Karabiner-Elements 0.91.12 and macOS Sierra 10.12.4.

  • Uninstall Seil and Karabiner, if previously installed
  • Install Karabiner-Elements
    $ brew cask install karabiner-elements 
    
  • Edit ~/.config/karabiner/karabiner.json to be:
    {
        "global": {
            "check_for_updates_on_startup": true,
            "show_in_menu_bar": true,
            "show_profile_name_in_menu_bar": false
        },
        "profiles": [
            {
                "complex_modifications": {
                    "parameters": {
                        "basic.to_if_alone_timeout_milliseconds": 250
                    },
                    "rules": [
                        {
                            "manipulators": [
                                {
                                    "description": "Change caps_lock to control when used as modifier, escape when used alone",
                                    "from": {
                                        "key_code": "caps_lock",
                                        "modifiers": {
                                            "optional": [
                                                "any"
                                            ]
                                        }
                                    },
                                    "to": [
                                        {
                                            "key_code": "left_control"
                                        }
                                    ],
                                    "to_if_alone": [
                                        {
                                            "key_code": "escape",
                                            "modifiers": {
                                                "optional": [
                                                    "any"
                                                ]
                                            }
                                        }
                                    ],
                                    "type": "basic"
                                }
                            ]
                        }
                    ]
                },
                "devices": [],
                "fn_function_keys": {
                    "f1": "display_brightness_decrement",
                    "f10": "mute",
                    "f11": "volume_decrement",
                    "f12": "volume_increment",
                    "f2": "display_brightness_increment",
                    "f3": "mission_control",
                    "f4": "launchpad",
                    "f5": "illumination_decrement",
                    "f6": "illumination_increment",
                    "f7": "rewind",
                    "f8": "play_or_pause",
                    "f9": "fastforward"
                },
                "name": "Default profile",
                "selected": true,
                "virtual_hid_keyboard": {
                    "caps_lock_delay_milliseconds": 0,
                    "keyboard_type": "ansi"
                }
            }
        ]
    }

Alternative

@kbussell created a script to do the same using Hammerspoon1 instead of Karabiner.


  1. Hammerspoon is awesome.

postgres on Mac OS (homebrew) notes

Here are my notes on running postgresql on macOS Mojave installed with Homebrew. I also wrote some notes on running postgres in Docker here.

install postgres

$ brew install postgresql

start postgres

$ brew services start postgresql

restart postgres

$ brew services restart postgresql

postgres version

$ postgres --version
postgres (PostgreSQL) 11.1

check postgres is running (option 1)

$ brew services list
Name       Status  User      Plist
postgresql started eliot /Users/eliot/Library/LaunchAgents/homebrew.mxcl.postgresql.plist

check postgres is running (option 2)

$ ps -ef | grep postgres
  502   629     1   0 10Jan19 ??         0:12.39 /usr/local/opt/postgresql/bin/postgres -D /usr/local/var/postgres
  502   715   629   0 10Jan19 ??         0:00.29 postgres: checkpointer
  502   716   629   0 10Jan19 ??         0:02.39 postgres: background writer
  502   717   629   0 10Jan19 ??         0:02.32 postgres: walwriter
  502   718   629   0 10Jan19 ??         0:09.99 postgres: autovacuum launcher
  502   719   629   0 10Jan19 ??         0:45.17 postgres: stats collector
  502   720   629   0 10Jan19 ??         0:00.22 postgres: logical replication launcher
  502 73227 19287   0  8:43PM ttys003    0:00.00 grep postgres

postgres log file

/usr/local/var/log/postgres.log

postgres data directory

/usr/local/var/postgres

psql commands

Start psql:
$ psql postgres
Help with psql commands:
postgres=# \?
List databases:
postgres=# \l
Connect to a database:
postgres=# \c mydatabase
List tables:
mydatabase=# \d

What are Redux selectors? Why use them?

Selectors are functions that take Redux state as an argument and return some data to pass to the component.

They can be as simple as:

const getDataType = state => state.editor.dataType;

Or they can do more complicated data transformations, such as filtering a list.

They are typically used in mapStateToProps1 in react-redux's connect:

For example, this

export default connect(
  (state) => ({
    dataType: state.editor.dataType,
  })
)(MyComponent);

becomes this:

export default connect(
  (state) => ({
    dataType: getDataType(state),
  })
)(MyComponent);

Why?

One reason is to avoid duplicated data in Redux.

Redux state can be thought of as a database, and selectors as SELECT queries that derive useful data from it. A good example is a filtered list.

If we have a list of items in Redux but want to show a filtered list in the UI, we could filter the list, store the filtered copy in Redux, and pass that to the component. The problem is that some items now exist in two places, so the data can easily get out of sync: updating an item means updating it in both copies.

With selectors, only a filterBy value is stored in Redux, and a selector computes the filtered list from the Redux state on demand.
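To sketch the idea (the state shape, item fields, and filterBy value here are all hypothetical):

```javascript
// Hypothetical state shape: one source-of-truth list plus a filterBy value.
const state = {
  items: [
    { id: 1, name: "apple", inStock: true },
    { id: 2, name: "banana", inStock: false },
    { id: 3, name: "cherry", inStock: true },
  ],
  filterBy: "inStock",
};

// Simple selectors hide the state shape from components.
const getItems = (state) => state.items;
const getFilterBy = (state) => state.filterBy;

// The filtered list is computed from state, not stored as a second copy.
const getVisibleItems = (state) =>
  getFilterBy(state) === "inStock"
    ? getItems(state).filter((item) => item.inStock)
    : getItems(state);

console.log(getVisibleItems(state).map((item) => item.name)); // [ 'apple', 'cherry' ]
```

Because the filtered list is derived rather than stored, updating an item in state.items is automatically reflected everywhere the selector is used.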

Why not perform the data transformations in the component?

Performing data transformations in the component couples it to the Redux state shape and makes it less generic/reusable. Also, as Dan Abramov points out, it makes sense to keep selectors near reducers because they operate on the same state. If the state schema changes, it is easier to update the selectors than to update the components.
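As a tiny illustration (the editor.settings schema here is hypothetical):

```javascript
// Before the (hypothetical) schema change:
//   const getDataType = (state) => state.editor.dataType;

// After moving dataType under editor.settings, only this one line changes;
// every component that calls getDataType(state) keeps working unmodified.
const getDataType = (state) => state.editor.settings.dataType;

const state = { editor: { settings: { dataType: "json" } } };
console.log(getDataType(state)); // json
```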

Performance

mapStateToProps gets called every time the store state changes, so performing expensive calculations there is costly. This is where the reselect library comes in. Selectors created with reselect's createSelector memoize their results to avoid unnecessary recalculations. Note that if performance is not an issue, createSelector is not needed.
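Conceptually, createSelector works something like this simplified sketch (not reselect's actual implementation):

```javascript
// Simplified sketch of reselect-style memoization: recompute only when an
// input selector returns a new value (compared by ===); otherwise return
// the cached result.
const createSelector = (inputSelectors, compute) => {
  let lastInputs = null;
  let lastResult = null;
  return (state) => {
    const inputs = inputSelectors.map((selector) => selector(state));
    const changed =
      lastInputs === null ||
      inputs.some((input, i) => input !== lastInputs[i]);
    if (changed) {
      lastResult = compute(...inputs);
      lastInputs = inputs;
    }
    return lastResult;
  };
};

// The filter only reruns when state.items is a new array; repeated calls
// with the same items return the same (cached) result array.
const getItems = (state) => state.items;
const getInStockItems = createSelector([getItems], (items) =>
  items.filter((item) => item.inStock)
);
```

A side effect of the caching is that repeated calls return the same array reference, which is exactly what memo-wrapped components need to skip re-renders. (reselect's real createSelector also accepts input selectors as separate arguments and supports custom equality checks.)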

Update: in my experience, even more important than memoizing expensive calculations is preserving the referential equality of values returned by selectors. If a selector returns a new object or array on every call, any memo-wrapped function component or PureComponent-derived class component that receives it as a prop will re-render unnecessarily. See my post on React performance for more information.

References / See also


1. mapStateToProps can be considered a selector itself

git worktree notes

git worktree allows you to have multiple working directories associated with one git repo. It has been useful for me for viewing, copying code between, and running two different branches of my code at once. I learned about git worktree from James Ide's tweet.

Initial setup

Move the checked-out git repository into a main subdirectory (so the repo ends up at /my-project/main)

$ mv /my-project /main 
$ mkdir /my-project 
$ mv /main /my-project 

Create a new working tree from an existing branch

$ cd /my-project/main 
$ git worktree add ../my-branch-working-dir my-branch 
$ cd ../my-branch-working-dir 

Create a new branch and new working tree

$ cd /my-project/main 
$ git worktree add -b my-new-branch ../my-new-branch-working-dir origin/master 
$ cd ../my-new-branch-working-dir 

Delete a working tree

$ cd /my-project 
$ rm -rf my-branch-working-dir 
$ git worktree prune 

fatal: my-branch is already checked out error

Deleting .git/worktrees/my-branch fixed the problem for me. See https://stackoverflow.com/q/33296185/101911

Reference / See also

https://github.com/blog/2042-git-2-5-including-multiple-worktrees-and-triangular-workflows