Tuesday, July 28, 2020

Raspberry Pi set up for development

I had a really old Raspberry Pi sitting around that I set up for development yesterday. The Pi 2 doesn't come with WiFi by default, so I set up a WiFi adapter to make it easier to develop from my localhost and ssh in as necessary.

Environment:
Host: macOS Catalina
Raspberry Pi 2 Model B
Raspberry Pi USB WiFi Adapter
  1. Install Raspberry Pi OS image on an SD card: Used Raspberry Pi Imager for macOS, chose Raspberry Pi OS Lite (32-bit) since I didn’t want the desktop environment - https://www.raspberrypi.org/documentation/installation/installing-images/
  2. Boot into the Raspberry Pi and login
  3. Confirm the OS version. Mine is "Raspbian GNU/Linux 10 (buster)"
    cat /etc/os-release
    
  4. Update settings for US. I did I1, then I4, then I3; I think I3 may be the only one I needed for my keyboard to recognize the double-quote key
    sudo raspi-config 
    Choose 4 Localisation Options 
    Choose I1 Change Locale => en_US.UTF-8 UTF-8 
    Choose I3 Change Keyboard Layout => Generic 101-key PC => Other => English (US) (Country of Origin) => English (US) (Keyboard layout) => The default for the keyboard layout => No compose key
    Choose I4 Change WLAN Country => US United States 
    sudo reboot
    
  5. Configure Raspberry Pi to connect to WiFi

    Backup the file, and copy it to the home directory for editing
    sudo cp /etc/wpa_supplicant/wpa_supplicant.conf /etc/wpa_supplicant/wpa_supplicant.conf.orig
    cp /etc/wpa_supplicant/wpa_supplicant.conf ~/
    
    

    Add the below to the end of the file. I had to add scan_ssid=1 because my SSID is not broadcast.  NOTE: If you have special characters in your password (e.g. a double quote), I didn't have to escape it, but I did have to add the key_mgmt=WPA-PSK config attribute (with no quotes).
    vi ~/wpa_supplicant.conf
    
    network={ 
      ssid="<ssid>"
      psk="<secret password>"
      scan_ssid=1 
    } 
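
    For reference, the full file on Raspbian Buster typically looks like the below once raspi-config has set the WLAN country (the first two lines are the stock Raspbian defaults; this is a sketch, so verify against your own file):
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=US

    network={
      ssid="<ssid>"
      psk="<secret password>"
      scan_ssid=1
    }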
    

    Copy updated file back
    sudo cp ~/wpa_supplicant.conf /etc/wpa_supplicant/wpa_supplicant.conf 
    

    Verify that an “inet addr” shows for wlan0
    ifconfig wlan0 
    

    If not, reboot to connect
    sudo reboot 
    

    On reboot, I can see it says “My IP address is XXX.XXX.XX.XXXX”

    To verify internet connectivity perform a wget
    wget www.google.com
    
  6. Enable ssh on Raspberry Pi
    sudo raspi-config 
    Choose 5 Interfacing Options
    Choose P2 SSH => Yes 
    [Didn’t need to reboot]
    
  7. Connect to Raspberry Pi via ssh from localhost (the Mac): update the macOS hosts file with the Raspberry Pi IP
    sudo vi /private/etc/hosts 
    

    Added this to bottom of file
    # raspberry pi 
    XXX.XXX.XX.XXXX csrp
    

    ssh to the Raspberry Pi
    ssh pi@csrp
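
    Alternatively (an option, not what I did above), an entry in ~/.ssh/config on the Mac bundles the alias, address, and user together, so the hosts file entry and the pi@ prefix aren't needed:
    Host csrp
      HostName XXX.XXX.XX.XXXX
      User pi

    Then connect with: ssh csrp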
    
  8. Shutdown the Raspberry Pi
    sudo halt
    

Tuesday, June 30, 2020

MacBook Pro 2019 recovery after migration assistant

Fun!  I got a new MacBook Pro 2019 (that came with macOS Catalina) and decided to try out Migration Assistant from my old MacBook Pro (which was on macOS High Sierra).  The migration ran successfully, but soon after I decided I would prefer to start from a clean-slate machine for various reasons (applications, tools, and languages I commonly use have changed through the years, etc.).

I ran through the instructions to reinstall macOS from macOS Recovery:
Try 1:
- Selected "Reinstall macOS"
- Took recommended option and chose "Reinstall the latest macOS that was installed on your Mac"
- Decided not to erase the disk
- Got a warning that "An Internet connection is required to install macOS"; I went back a step and set up the Wi-Fi connection, which is available while in macOS Recovery.  (It didn't remember the connection I had set up originally during Migration Assistant, or the one I manually set up after I created the initial admin account)
- Agreed to the macOS Catalina terms of the software license agreement
- Selected the disk "Macintosh HD", unlocked it, and started the installation
  - The login screen came back with both my user accounts once during installation
- The login screen came back with both my user accounts after the installation was complete (I wasn't expecting that, was expecting for me to set up a new account again)
  - Checking the Applications, all the same applications were still usable <= Problem!
  - Checking non-Applications, still usable  <= Problem!
- Decided to run through the instructions again, this time erasing the disk

Try 2:
- Selected Disk Utility
- Clicked View -> Show All Devices
- Confirmed the container is using "APFS Container", named the disk the same as the existing "APPLE SSD .... Media", kept the same format type, and erased the disk
- Quit Disk Utility
- Set up the Wi-Fi connection
- Selected "Reinstall macOS"
- Took recommended option and chose "Reinstall the latest macOS that was installed on your Mac"
- Agreed to the macOS Catalina terms of the software license agreement
- Selected the disk "APPLE SSD .... Media", unlocked it, and started the installation
- The "complete setup screen came up" as it did when I first opened the macbook.  Success!
- After setup, I did verify that the Applications and non-Applications from the migration were no longer usable.


Thursday, May 28, 2020

Ansible - accessing hostname defined in another group

I had a use case where the target server I was running the ansible playbook against needed to know the hostname of another group (defined in the hosts file) that I passed in as an extra variable named "target_env".

Environment:
CentOS Linux release 7.3.1611 (Core)
ansible 2.9.9

The hosts file had three groups defined, with only one hostname associated with each group:
[development]
cs-dev.cherryshoe.org ansible_connection=local

[test]
cs-test.cherryshoe.org

[production]
cs.cherryshoe.org

For example, I need target server cs-test.cherryshoe.org to know the hostname for target_env "production"; in this case, target_env "production" has the hostname cs.cherryshoe.org associated with it.
ansible-playbook cherryshoe.yml -u YOUR_USER_ON_TARGET_SERVER -k -K -e 'target_server=cs-test.cherryshoe.org target_env=production'
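
For context, cherryshoe.yml presumably targets the host passed in via target_server; a minimal sketch (not the actual playbook):
- hosts: "{{ target_server }}"
  tasks:
    ...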

These ansible magic variables didn't work:
- debug: var=inventory_hostname - held the hostname the play was being run against (in this example cs-test.cherryshoe.org)
- debug: var=ansible_hostname - held the hostname localhost

There were two solutions to this problem:
1. Access it via hostvars magic variable
- debug: msg="{{hostvars[inventory_hostname]['groups'][target_env][0]}}"

This works because the hostvars magic variable holds the following type of information, which you can see when you debug the variable:
- debug: var=hostvars

The "groups" attribute inside hostvars[inventory_hostname] has the "target_env" production and if you access the first element in the array it's "cs.cherryshoe.org", which is what we want:
"groups": {
    "all": [
        "cs-dev.cherryshoe.org",
        "cs-test.cherryshoe.org",
        "cs.cherryshoe.org"
    ],
    "development": [
        "cs-dev.cherryshoe.org"
    ],
    "test": [
        "cs-test.cherryshoe.org"
    ],
    "production": [
        "cs.cherryshoe.org"
    ],
    "ungrouped": []
}
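
Note that groups is also available as a top-level magic variable, so the same lookup can be written more directly:
- debug: msg="{{groups[target_env][0]}}"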

2.  Another way to do this is with a hash variable in a group variables file,
hostnames:
  development: cs-dev.cherryshoe.org
  test: cs-test.cherryshoe.org
  production: cs.cherryshoe.org

and access it using:
- debug: msg=""{{hostnames[target_env]}}"

These articles were helpful:
https://stackoverflow.com/questions/30650454/how-can-i-add-keys-to-a-hash-variable-in-ansible-yaml
https://www.google.com/books/edition/Mastering_Ansible/nrkrDwAAQBAJ?hl=en&gbpv=1&dq=ansible+%22hash+variable%22&pg=PA37&printsec=frontcover

Tuesday, April 21, 2020

Setting up redis with Node.js

I recently set up redis for the application I'm working on.  Because database queries can take multiple hours to complete, the user has access to past requests by a unique identifier and can get subsequent results in seconds vs. hours.

The redis server setup was done with an ansible yum module, which installed an older version 3.2.12, but it was adequate for my needs so I pinned it at that version.

Environment:
CentOS Linux release 7.3.1611 (Core)
node v8.16.0
redis server 3.2.12
redis 3.0.2 (client)

I verified with redis-cli that the redis server was installed appropriately (the last three commands are run inside a redis-cli session):
redis-cli ping (should return PONG)
redis-cli --version (should return redis-cli 3.2.12)
keys * (gets all keys)
set hello world
get hello

I realized for my use case it would be advantageous to set an expire time/time to live for the keys.  You can practice this on the command line first:
pttl hello (should return (integer) -1 since expire time/time to live wasn't set)
set foo bar ex 10 (expires in 10 seconds)
get foo (before expired)
pttl foo (should return a non-negative integer since expire time/time to live set)
get foo (after expired will return (nil))
del foo

Then I worked on client-side code to support this.  This is the client module that any other module can import and use.

cacheClient.js module
const redis = require("redis");
const client = redis.createClient();

// Node Redis currently doesn't natively support promises, however the methods can be
// wrapped with promises using the built-in Node.js util.promisify
const { promisify } = require("util");
const getCacheAsync = promisify(client.get).bind(client);

const DEFAULT_TTL_SECONDS = 60*60*24*5; // 5 days time to live

/**
 * Client will emit error when encountering an error connecting to the Redis server or when any
 * other error in Node Redis occurs.  This is the only event type that the library asks you to
 * provide a listener for.
 */
client.on('error', function(error) {
  console.error(`Redis client error - ${error}`);
});

/**
 * Adds key with value string to cache, with number of seconds to live.
 * @param {String} key
 * @param {String} value
 * @param {int} seconds: default is 5 days
 */
function addToCache(key, value, seconds = DEFAULT_TTL_SECONDS) {
  console.log(`Cache add hash[${key}]`);
  // EX sets the specified expire time, in seconds.
  client.set(key, value, 'EX', seconds);
}

/**
 * Retrieves value with key.
 * @param {String} key
 * @returns {String}
 */
async function getFromCache(key) {
  const val = await getCacheAsync(key);
  if (val) {
    console.log(`Cache retrieve hash[${key}]`);
  }
  return val;
}

module.exports = {
  addToCache,
  getFromCache,
};
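
A minimal usage sketch from a caller's perspective (runLongQuery and requestId are hypothetical stand-ins for the application's long-running database call and its unique identifier):

const cache = require('./cacheClient');

async function getResults(requestId) {
  // cache hit: return past results in seconds instead of hours
  const cached = await cache.getFromCache(requestId);
  if (cached) {
    return JSON.parse(cached);
  }
  // cache miss: run the long query, then cache the result for subsequent requests
  const results = await runLongQuery(requestId); // hypothetical long-running call
  cache.addToCache(requestId, JSON.stringify(results));
  return results;
}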

Then I tested locally.
1.  I made the max size of the cache very small (2MB) and added two keys with two separate unique requests, each with a time to live of 5 days.  I then added a 3rd key and verified the older one was deleted to fit the new one
- backup /etc/redis.conf
- edit /etc/redis.conf maxmemory 2mb
- restart - sudo systemctl restart redis
- check maxmemory took by issuing the following in redis-cli
    - config get maxmemory
- check time to live for keys by issuing the following in redis-cli
   - pttl <key>
- add keys with requests
- check memory information by issuing the following in redis-cli
   - info memory
2.  I had several keys in the cache when we went on spring break; by the time I got back to work 5 days had gone by, and my keys had expired and were no longer in the cache

I then went back to custom configure with ansible what I needed for redis, which were the following redis.conf changes (a sketch of the relevant directives follows the list).
1.  Ensured /etc/redis dir exists in order to copy the default template /etc/redis.conf to it
2.  Copied template config to /etc/redis/redis.conf
3.  For the greatest level of data safety, run both persistence methods, so I needed to enable the append-only log of all write operations performed by the redis server (AOF)
4.  Enable daemonize so redis server will keep running in the background
5.  Configure cache max memory (default out-of-the-box is no max)
6.  Configure cache eviction policy (default out-of-the-box is no eviction)
7.  Updated systemd to use custom config file and reloaded systemd
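
The redis.conf directives behind numbers 3-6 look like the below (the maxmemory size and eviction policy are illustrative values, not a recommendation):
appendonly yes
daemonize yes
maxmemory 100mb
maxmemory-policy allkeys-lru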

Numbers 5 and 6 together were key; if these two are not custom configured, it's possible to hit the RAM max and error out in your application!

More commands that are useful:
info keyspace (summary of num keys, num keys with expire set, average ttl)
flushall (deletes all keys from existing database)
redis-cli --scan --pattern 'CS_*' | xargs redis-cli del (flush only keys that start with CS_)
redis-cli --scan | xargs -L1 redis-cli persist (make all keys persistent)
redis-cli --bigkeys (sample Redis keys looking for big keys)

Saturday, February 29, 2020

Cancel an in-progress web socket call

I'm working on a React application where a new web socket connection is created each time a long-running query request is made (and closes when the request completes).  This is done purposefully, as the use case for the application doesn't require the connection to stay open constantly.  A problem appeared when a request was made and the user then decided to change the input parameters and make another new request: sometimes the first web socket call returned first, and the incorrect results were displayed on the screen.

Environment:
React v16.2.0

The Bug
The bug occurred because the original web socket connection was never closed prior to making another one.  Below is the original utility function that does all the web socket calls, where a new web socket client connection is instantiated each time:
wsUtils.js
/**
 * Sets up WebSocket connection, opens, processes message received, and updates the UI based on type and data returned.
 * @param {String} requestMessage - A string in JSON format: the parameters required to process the query
 * @param {function} updateHandler - Function executed when data is retrieved
 */
const connectWebSocket = (requestMessage, updateHandler) => {
  const socket = new WebSocket(`wss://cherryshoetech.com/wsapi/`);

  // socket event handlers ... 
  ...
  // end of socket event handlers ...
}

The Solution
Have a websocket module manage the single web socket client connection that can be instantiated at one time.  The websocket module contains the socket variable along with two functions to manage retrieving the socket and saving the socket.  The connectWebSocket utility function uses these imported functions to close the existing socket each time a new connection is requested, then save the new socket after the connection is made.  The close socket event handler needs to recognize when the event code is the user-requested closure, and do nothing when this happens.

websocket.js
let websocket = null;

/**
 * Retrieves the websocket
 */
const retrieveWebsocket = () => {
  return websocket;
};

/**
 * Saves the websocket
 * @param {Object} socket 
 */
const saveWebsocket = socket => {
  websocket = socket;
};

module.exports = {
  retrieveWebsocket,
  saveWebsocket,
};

wsUtils.js
import { retrieveWebsocket, saveWebsocket } from './websocket';

const WEBSOCKET_NORMAL_CLOSURE = 1000; // standard close code for a normal closure
const WEBSOCKET_USER_CLOSURE = 4002; // application-defined close code (4000-4999 range)

const connectWebSocket = (requestMessage, updateHandler) => {
  // close socket and reset if necessary.  This is to protect in the
  // scenarios where:
  // 1. User submits request before the prior request is finished.
  // 2. User navigates away from the page before the query is returned, canceling
  // the current query.  
  // If these scenarios are not accounted for, it's possible a "navigated away from" query's
  // response could come back and be displayed as the result, when a user put in a subsequent query.
  let socket = retrieveWebsocket();
  if (socket != null) {
    socket.close(WEBSOCKET_USER_CLOSURE, "User requested closure");
    saveWebsocket(null);
  }

  socket = new WebSocket(`wss://cherryshoetech.com/wsapi/`);
  saveWebsocket(socket);

  // socket event handlers ... 
  ...
  socket.onclose = event => {
    if (event.code === WEBSOCKET_NORMAL_CLOSURE) {
      ...
    } else if (event.code === WEBSOCKET_USER_CLOSURE) {
      // do nothing.  this is the event type called by managing component
      console.log(`WebSocket Closed (${event.reason})`);
    }
    else {
      ...
    }
  };
  ...
  // end of socket event handlers ...
}
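
From the calling component's side nothing changes; each new query simply calls connectWebSocket again, and any in-flight socket is closed first (a sketch; handleSubmit and setResults are hypothetical component pieces):

const handleSubmit = queryParams => {
  const requestMessage = JSON.stringify(queryParams);
  connectWebSocket(requestMessage, data => setResults(data));
};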

Sunday, January 26, 2020

How to debug a Node.js app running on a VM from local Windows

I'm working on an application where the frontend is React and the backend is Node running on CentOS 7.  Below are steps on how to debug a PM2-managed clustered Node backend application with two different clustering methods:
  1. Node cluster module is used to configure cluster processes
  2. PM2 cluster mode is used to configure cluster processes
Environment:
Windows 10 Pro
CentOS Linux release 7.3.1611 (Core)
node v8.16.0

Node cluster module is used to configure cluster processes
NOTE: In this example PM2 starts the Node application via an npm script, and the backend code spawns two cluster instances.

Update the command where you start node to include the --inspect flag. For us, node is started with the "startdev" npm script located in package.json. "startdev" uses the "server" script.

OLD
"scripts": {
  ..
  "startdev": "concurrently \"npm run server\" \"npm run client\"",
  "server": "cross-env node --max-old-space-size=8192 ./bin/cherryshoeServer.js",
  ..
}

NEW - You'll see that the "server" script is not changed, as other npm scripts are also dependent on it. "startdev" now uses a new "serverdev" script that was created to add the --inspect flag.
"scripts": {
  ..
  "server": "cross-env node --max-old-space-size=8192 ./bin/cherryshoeServer.js",
  "startdev": "concurrently \"npm run serverdev\" \"npm run client\"",
  "serverdev": "cross-env node --max-old-space-size=8192 --inspect ./bin/cherryshoeServer.js",
  ..
}

A PM2 ecosystem configuration file is used; the config options of note are:

apps : [ {
  ..
  script: 'npm',
  // call appropriate npm script from package.json
  args: 'run startdev',
  ..
} ]

Start node with command "pm2 start", which calls the npm script "startdev", which calls npm script "serverdev", and runs cherryshoeServer.js on the local development environment.

Perform a ps command to verify the backend node processes are running. When cherryshoeServer.js is started, it spawns two worker processes on the local development environment (based on code that spawns two processes using the Node cluster module).  Because of this, you'll see the first process below is the parent process with PID 5281, and the remaining two are worker processes with parent PPID 5281.

cs_admin  5281  5263  0 11:09 ?        00:00:00 node --max-old-space-
size=8192 --inspect ./bin/cherryshoeServer.js
cs_admin  5298  5281  0 11:09 ?        00:00:00 /usr/bin/node --max-old-
space-size=8192 --inspect --inspect-port=9230 /opt/cherryshoe/bin/cherryshoeServer.js
cs_admin  5303  5281  0 11:09 ?        00:00:00 /usr/bin/node --max-old-
space-size=8192 --inspect --inspect-port=9231 /opt/cherryshoe/bin/cherryshoeServer.js

Verify in the log file that debugger processes are listening. PM2 is used to manage logging, which is located in /var/log/cherryshoe/pm2/cs-out.log.  By default, the debugger listens on port 9229; the debugger port for each additional worker process is then incremented appropriately (9230 and 9231 respectively).
2020-01-26T11:09:22.657: [0] Debugger listening on ws://127.0.0.1:9229
/62dd92b9-a978-4cce-9e91-84b87835e014
2020-01-26T11:09:22.657: [0] For help see https://nodejs.org/en/docs
/inspector
2020-01-26T11:09:22.882: [1]
2020-01-26T11:09:22.882: [1] > origin-destination-client@0.2.0 start /opt
/cherryshoe/client
2020-01-26T11:09:22.882: [1] > concurrently "yarn watch-css" "cross-env
NODE_PATH=src/ react-scripts start"
2020-01-26T11:09:22.882: [1]
2020-01-26T11:09:22.965: [0] Running 2 processes
2020-01-26T11:09:22.975: [0] Debugger listening on ws://127.0.0.1:9230
/3e05b482-b186-4c7c-908d-7f5188353bb2
2020-01-26T11:09:22.975: [0] For help see https://nodejs.org/en/docs
/inspector
2020-01-26T11:09:22.978: [0] Debugger listening on ws://127.0.0.1:9231
/e3264361-6c0e-4843-8a4d-91b5ba9a8e4f
2020-01-26T11:09:22.978: [0] For help see https://nodejs.org/en/docs
/inspector

Back on your Windows local machine, open up your favorite ssh tunneling app (I'm using git bash for this example, but I am a big fan of MobaXterm) and tunnel into the VM with the appropriate ports to attach to the ports the debuggers are listening on. This starts an ssh tunnel session where connections to ports 8889-8891 (make sure these ports are not in use first) on your local machine are forwarded to ports 9229-9231 on the cherryshoe-dev.cherryshoe.com machine.  NOTES: I had to use 127.0.0.1 instead of localhost for this to work. Use a user account that has access to the VM. You may want to set up passwordless ssh so you don't have to enter passwords.
    ssh -L 8889:127.0.0.1:9229 cs_admin@cherryshoe-dev.cherryshoe.com
    ssh -L 8890:127.0.0.1:9230 cs_admin@cherryshoe-dev.cherryshoe.com
    ssh -L 8891:127.0.0.1:9231 cs_admin@cherryshoe-dev.cherryshoe.com

You can now attach a debugger client of choice to the three processes, as if the Node.js application was running locally. I will use Chrome DevTools as an example.

Open Chrome and enter "chrome://inspect" into the URL bar

Click "Discover network targets" Configure button and configure each of the ports to attach to:
  • Enter "127.0.0.1:8889"
  • Enter "127.0.0.1:8890"
  • Enter "127.0.0.1:8891"


You should now see the 3 processes attached in the debugger client.


Click the "inspect" link for one of the processes, this will open up the DevTools for Node. Under "Sources" tab you can click "Ctrl-P" to open up a file of choice to debug that is attached to the process. You do NOT need to 'Add folder to workspace'.

Open up each remaining process by clicking the "inspect" link.

Invoke the Node application, i.e. call a REST endpoint that it responds to. One of the worker processes will handle the request; any breakpoints will be reached and any console logs will be printed.

You have to open a window for each running worker process: you don't know which process will get picked, and if you don't have them all open you can miss it and think debugging isn't working!

If you restart the Node backend, the DevTools Targets will recognize the new process IDs, but the DevTools windows won't. Therefore, you need to open up the DevTools windows again for each process via the "inspect" link.


PM2 cluster mode is used to configure cluster processes
NOTE: It was discovered that PM2 cluster mode and starting node via npm do not play nicely together. The first process would listen on port X, but each subsequent process would error out saying port X was in use. Because of this, the node application was changed to start directly by invoking the appropriate js script, as sketched below.
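
A minimal sketch of the ecosystem configuration in this mode, assuming the same server script and two instances (script, exec_mode, instances, and node_args are real PM2 config options; the values here mirror the npm-script setup above):

apps : [ {
  ..
  // invoke the js script directly instead of going through npm
  script: './bin/cherryshoeServer.js',
  // let PM2 spawn and manage the cluster processes
  exec_mode: 'cluster',
  instances: 2,
  node_args: '--max-old-space-size=8192 --inspect',
  ..
} ]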


Monday, December 23, 2019

Using jest-when to support parameterized mock return values

The jest library by itself currently doesn't support parameterized mock return values for Node.js unit tests, but it can with the jest-when library.

Environment:
CentOS Linux release 7.3.1611 (Core)
node v8.16.0
jest 24.8.0
jest-when version 2.7.0

The below jest unit test example uses jest-when to provide two different parameterized mock return values for a method (db.runQuery) that is called twice, simulating two different database calls with different return values.

const runDbQuery = async(sql, params) => {
  // this is the external dependency call that needs to be mocked
  const dataSet = await db.runQuery(sql, params);
  return dataSet;
};

const retrieveData = async(params) => {
  const data1 = await runDbQuery(QUERY1, [params]);
  const data2 = await runDbQuery(QUERY2, [data1]);

  // some manipulation of data...
  const returnData = <some manipulation of data>;
  return returnData;
};


Unit test for the unit under test:
const { when } = require('jest-when')

// import external dependencies for mocking
const db = require('./db');

const unit = require(<unit under test js file>)(db);

test('Test retrieveData', async () => {

  ///////////////////
  // db call 1 setup
  ///////////////////
  // first db.runQuery mock setup
  const params1_var1 = 5;
  const params1_var2 = "five";

  const paramsJson1 = JSON.stringify({
    params1_var1: params1_var1,
    params1_var2: params1_var2,
  });

  const params1 = [paramsJson1];

  const returnData1_var1 = 99;
  const returnData1_var2 = "ninety-nine";
  const returnData1_var3 = true;

  const returnData1 = [
  {
    returnData1_var1: returnData1_var1,
    returnData1_var2: returnData1_var2,
    returnData1_var3: returnData1_var3,
  }

  ];

  // format returned from db call
  const dataSets1 = {
    rows: returnData1,
  };

  const query1 = QUERY1;

  ///////////////////
  // db call 2 setup
  ///////////////////
  // second db.runQuery mock setup
  const params2_var1 = 22;

  const params2 = [params2_var1];

  // second query is different
  const query2 = QUERY2;

  const returnData2_var1 = 100;
  const returnData2_var2 = "one-hundred";

  const returnData2 = [
  {
    returnData2_var1: returnData2_var1,
    returnData2_var2: returnData2_var2
  }
  ];

  // format returned from db call
  const dataSets2 = {
    rows: returnData2,
  };

  // external dependency method call that needs to be mocked
  const mockDbRunQuery = db.runQuery = jest.fn().mockName('mockDbRunQuery');

  // first call to db.runQuery
  when(db.runQuery).calledWith(query1, params1).mockResolvedValue(dataSets1);

  // second call to db.runQuery
  when(db.runQuery).calledWith(query2, params2).mockResolvedValue(dataSets2);

  const retrieveDataReturnVal = {
  ...
  };

  await expect(unit.retrieveData(paramsJson1)).resolves.toStrictEqual(retrieveDataReturnVal);

  // verify that mock method(s) were expected to be called
  expect(mockDbRunQuery).toHaveBeenCalledTimes(2);
});
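
If a calledWith setup doesn't match the actual arguments, jest-when lets the call fall through to the mock's default behavior (undefined here), which can make failures confusing. Jest's built-in nth-call assertions help pin down which call diverged; these could go at the end of the test:

  // verify the exact arguments of each call, in order
  expect(mockDbRunQuery).toHaveBeenNthCalledWith(1, query1, params1);
  expect(mockDbRunQuery).toHaveBeenNthCalledWith(2, query2, params2);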