
TL;DR

The Font Family name (name ID 1 in the name table) of an OpenType font file should not be used as the font-family property in CSS unless the font face is one of the four basic styles: regular, italic, bold, or bold italic. For any other style, the Typographic Family Name, also called the Preferred Family (name ID 16 in the name table), should be used instead.

Background and Problem

Recently, I was tasked with investigating an interesting problem in our custom font implementation. Fonts are something I had no prior knowledge of, and this investigation taught me a lot about them. By writing it down, I hope to help developers who face similar problems.

Our system allows users to upload any font as long as they have permission to use it commercially. Recently, we realized that some of the uploaded fonts were not displayed correctly. More specifically, the font-family attribute of these fonts was incorrectly extracted.

As an example, a user uploaded a font called fontA-SemiBold. The font-family name extracted from the font was fontA-SemiBold, whereas the correct font-family should be fontA. The incorrect name resulted in incorrect rendering in the front end, since the CSS needs the correct font-family name.

Investigation

After looking at several of the uploaded fonts and going through the code, I decided to create a small POC using the library we chose for our application. I was able to reproduce the issue we faced with the user-uploaded font – the library yielded an incorrect font-family name. This led me to wonder whether the problem lay within the library itself, so I tried another font I found online. To my surprise, this time the library extracted the correct font-family.

Having finished this small experiment, I was a bit at a loss. How come the library sometimes worked and sometimes failed?

Later, I realized this was because the font I chose happened to be Roboto-Bold.

Findings and learnings

To dig deeper, I went through the OpenType font specification on Microsoft's website. The relevant part of the documentation is the name table, so I read through all of it. Here is what I found and learnt:

  • A font file contains a name table that stores information about the font; this may include the license, the font family name, URLs, etc. The library we used reads this table to extract information about a font.
  • A font family name (name ID 1 in the name table) can only be shared among at most four basic styles of font faces – regular, italic, bold, and bold italic. That is why the library returned the correct font family name for the Roboto-Bold I chose.
  • Styles that fall outside of these four, such as Semi-Bold or Black, are treated as a separate font family and get their own font family name. For example, the font family name for Roboto-Black is Roboto Black rather than Roboto.
  • The Typographic Family Name (or Preferred Family) – name ID 16 in the name table – has no constraint on the number of font faces, but it is typically only present when the font face falls outside the basic four styles.
  • In CSS, the font-family property is expected to be the shared family name. For example, when using the Roboto-Black font in CSS, the font-family property should be Roboto rather than Roboto-Black.
  • This difference in how CSS and OpenType define the family name means that, when HTML/CSS is generated on the fly, it is best to always use the Typographic Family Name (if present) rather than the Font Family name.

Solution

Now that we have a better understanding of the font family specification, we need a way to retrieve the Typographic Family Name from the name table.

Fontkit was our library of choice, so I will only discuss how to get the correct font family name with fontkit. Unfortunately, fontkit does not expose the Typographic Family Name directly. After reading its source code, I found that the way to go is an undocumented API – font.getName('preferredName'). Note that preferredName here means the Typographic Family Name, which corresponds to name ID 16 in the font's name table.
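
Below is a minimal sketch of how this could look. The file name is hypothetical, the fallback to familyName is my own addition, and the exact key accepted by getName() may vary between fontkit versions:

const fontkit = require('fontkit');

// Hypothetical file name – substitute the uploaded font's path.
const font = fontkit.openSync('fontA-SemiBold.otf');

// Prefer the Typographic Family Name (name ID 16) when it exists,
// and fall back to the basic family name (name ID 1) otherwise.
const family = font.getName('preferredName') || font.familyName;

console.log(family); // expected to print "fontA" for the SemiBold example above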

Bonus: Using fonttools to dump and inspect font tables

Out of curiosity, and to verify Microsoft's documentation, I found a tool called fonttools that can dump the tables in a TTF/OTF file to the TTX format. This format is essentially XML, so you can open it with any text editor and inspect the content.

Installation of this tool can be done through pip: pip install --user fonttools

Once the installation is finished, you can dump all the tables of a font file using ttx <fontfileName>. Note that this will result in a very large XML file – roughly 120k lines.

As we are only interested in the name table, the following command can be used to dump that table only: ttx -t name <fontFileName>

Here’s an example of the dump:

<?xml version="1.0" encoding="UTF-8"?>
<ttFont sfntVersion="\x00\x01\x00\x00" ttLibVersion="4.26">

<name>
<namerecord nameID="0" platformID="3" platEncID="1" langID="0x409">
Copyright 2011 Google Inc. All Rights Reserved.
</namerecord>
<namerecord nameID="1" platformID="3" platEncID="1" langID="0x409">
Roboto Black
</namerecord>
<namerecord nameID="2" platformID="3" platEncID="1" langID="0x409">
Regular
</namerecord>
<namerecord nameID="3" platformID="3" platEncID="1" langID="0x409">
Roboto Black
</namerecord>
<namerecord nameID="4" platformID="3" platEncID="1" langID="0x409">
Roboto Black
</namerecord>
<namerecord nameID="5" platformID="3" platEncID="1" langID="0x409">
Version 2.137; 2017
</namerecord>
<namerecord nameID="6" platformID="3" platEncID="1" langID="0x409">
Roboto-Black
</namerecord>
<namerecord nameID="7" platformID="3" platEncID="1" langID="0x409">
Roboto is a trademark of Google.
</namerecord>
<namerecord nameID="9" platformID="3" platEncID="1" langID="0x409">
Google
</namerecord>
<namerecord nameID="11" platformID="3" platEncID="1" langID="0x409">
Google.com
</namerecord>
<namerecord nameID="12" platformID="3" platEncID="1" langID="0x409">
Christian Robertson
</namerecord>
<namerecord nameID="13" platformID="3" platEncID="1" langID="0x409">
Licensed under the Apache License, Version 2.0
</namerecord>
<namerecord nameID="14" platformID="3" platEncID="1" langID="0x409">
http://www.apache.org/licenses/LICENSE-2.0
</namerecord>
<namerecord nameID="16" platformID="3" platEncID="1" langID="0x409">
Roboto
</namerecord>
<namerecord nameID="17" platformID="3" platEncID="1" langID="0x409">
Black
</namerecord>
</name>

</ttFont>

References and further readings

https://docs.microsoft.com/en-us/typography/opentype/spec/name
https://github.com/foliojs/fontkit
https://github.com/fonttools/fonttools

I've always considered myself to be an advanced computer user, but I'm not a Windows person, at least not when I'm coding. So when I got a Windows 10 laptop as a daily work machine, I was beyond disappointed. Luckily, there's always a way around it – VirtualBox it is. The laptop I got was powerful enough that I was able to allocate 16GB of memory as well as 3 cores of the host machine to the virtual machine.

Once I had my VM set up, I always used Windows Terminal and SSHed into the machine for development. One day, it occurred to me that I could (and should) automate all of this.

Problem

Automatically start the selected VirtualBox virtual machine in headless mode (this saves a bit of resources), wait for the machine to boot, and then SSH into the VM using Windows Terminal, with the selected ports forwarded between the host and the virtual machine.

Solution

This is not something that can be achieved in one go, so I'm going to break it down by component:

  • Virtual machine – Assuming Systemd based Linux distro
    • We need to disable the graphical login interface for the VM. To achieve this, we can run systemctl set-default multi-user.target. If we need to revert to graphical login, we can run systemctl set-default graphical.target. The multi-user.target and graphical.target are equivalent to what was traditionally known as run levels in SystemV.
    • We also need to set up the necessary SSH access from the Windows 10 host machine to the virtual machine. I won't cover that here; the only thing to keep in mind is to open the necessary ports.
  • Host machine – need to setup batch script, Windows Terminal profile and Window startup
    • Batch script
      • What we need the batch script to do is start the machine and wait for it to boot. Fortunately, the VirtualBox installation comes with a tool that can do this: VBoxManage.exe, located in the VirtualBox installation folder.
      • To start the virtual machine, the command is "C:\Program Files\VirtualBox\VboxManage.exe" startvm [VM name] --type headless. Replace [VM name] with your VM's name; --type headless means that no GUI for the virtual machine will be started at all.
      • The next step is to wait for the machine to start up; the command for this is "C:\Program Files\VirtualBox\VboxManage.exe" wait "[VM name]" "/VirtualBox/GuestInfo/OS/NoLoggedInUsers"
      • The last step of the batch script is wt, which launches Windows Terminal.
      • Putting it together, the batch script will be:
        @echo off
        "C:\Program Files\VirtualBox\VboxManage.exe" startvm [VM name] --type headless
        "C:\Program Files\VirtualBox\VboxManage.exe" wait "[VM name]" "/VirtualBox/GuestInfo/OS/NoLoggedInUsers"
        wt
    • Windows terminal profile
      • Create a profile in Windows Terminal; this can be done from either the UI or the JSON settings file.
      • Here, I will present the profile I use personally:
        {
          "bellStyle": "visual",
          "colorScheme": "Tango Dark",
          "commandline": "ssh -R 5432:localhost:5432 -L 8080:[::1]:8080 -L 3000:[::1]:3000 -t username@vm_ip_address \"exec zsh -l\"",
          "name": "VM",
          "scrollbarState": "hidden"
        }
      • Obviously, the commandline part is a bit bloated because I was trying to do too much in one go. It is a lot cleaner to put the command in a batch file and replace this line with the path to that batch file.
    • Windows Auto Start
      • Press Windows+R and type shell:startup; this will bring up a folder called "Startup" where you can put anything you want to start automatically when Windows starts.
      • Drag the batch script you created into this folder to create a shortcut.

After this setup, Windows will start the virtual machine every time you start your computer.

Bonus: shutdown VM in one click

The aforementioned VBoxManage.exe can also be used to power off the virtual machine. You can do that by creating a batch file with the following content:

@echo off
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" controlvm DEV_MACHINE poweroff

You can also halt or save the state of the VM rather than powering it off. More information on this can be found in the VirtualBox manual.

Further readings

VirtualBox manual on VBoxManage
Systemd manual on special units

Though this is technically a tech blog where I usually put my thoughts and learnings regarding technology, I would still like to share some of my experience taking the IELTS exam, which was about a month ago.

Please note that the author is in no way affiliated with the websites and/or services mentioned below; use of these software and/or services is at the sole discretion of the reader.

Why I chose to take IELTS

To be honest, my current status does not require me to take the IELTS exam. After some consideration, though, I decided to take it and see if I could benefit from the extra immigration points. My goal was 8 in listening and 7 in all other sections. I made the decision on the night of May 22nd, 2021 and took the exam on June 13th, 2021, so the preparation time was around 21 days.

Preparation

I have a day job, so my preparation time on working days was quite limited. I usually get home at around 6pm, and my dinner and routine exercise finish at 8pm, so my IELTS preparation started at around 8:15pm to 8:30pm.

The tools and/or books I used are listed below:

  • Road to IELTS
    This is a comprehensive tool that can be used to prepare for both the Academic and the General Training version of the exam. It was offered for free at my local public library. I'm very grateful to my local library for offering such an invaluable service; it helped me quite a lot.
    As for how to use this tool, different people may have different ideas. I used it mainly as a training tool to familiarize myself with the listening and reading parts of the computer-based IELTS exam. The system includes 10 mock exams for each part of the IELTS exam, which is what I used most for listening and reading. I alternated between Reading and Listening on each working day, spending around 1 to 1.5 hours on this.
    There are mini practice sections for each part as well; these mainly educate test takers on the different question formats in the listening and speaking tests. As this was not my first time taking the IELTS exam, I skipped most of them, but I can see how they are helpful for those taking IELTS for the first time.
    The difficulty level of the mock exams is actually a bit higher than the actual exam; it could be because the mock test delivery system is not smart enough and sometimes misses correct answers. As a reference, I usually got around 33 correct in the Listening section, which translates to about 6.5, but I got 9 for listening in the actual IELTS exam. For the reading part, I usually got around 32 correct, which is also about 6.5, while in my actual IELTS test I got 8.5.
    For writing and speaking, it is also practical to use the mock exams; the only caveat is that no one will grade those. I would suggest at least reading through the sample essays provided, trying to follow their structure, and learning some good words from them. For speaking, I don't feel you can improve your skills in a short time, but the mock exams offered are still helpful.

  • Writing 9
    This is a paid service offered at 12 USD per month. I found it quite helpful, because you can actually take a mock writing exam with it. I did around 3 to 4 essays and got around 6.5 each time. The thing I like about this tool is that it can spot your spelling mistakes, among other things.

  • Simon’s writing course
    This is a course I definitely recommend watching. It helped me a lot in achieving a score of 7 in writing. To summarize what I learnt from it: IELTS writing is not only about good use of words, but also about organizing the structure of the essay logically. The latter is what I used to ignore. In my first IELTS exam, taken in 2011 (the Academic version at that time), I only got 5.5 in writing; 10 years later, I got 7 in the General Training version.

  • IELTS Band 9 Vocab Secrets by Cambridge Consultants
    This book contains loads of good examples of word usage for both reading and writing. I highly recommend reading through the whole book at least once and practicing with the examples given.

Exam tips

This was my first time taking the computer-based IELTS exam, and I would like to share some tips that other candidates may find useful:

  • Listening
    • From my experience taking both the mock and the actual exams, the computer-based listening test is a bit more difficult than the paper-based one. There are several reasons for this:
      • In the computer-based exam, you will not be given time to transfer your answers from the question paper to an answer sheet. Instead, you will only be given 2 minutes to check all your answers at the end of the listening test. This means you may not have enough time to catch easily identifiable mistakes like typos, singular/plural forms, or tense problems. So when you practice, keep this in mind and try to avoid them as much as possible.
      • You will have to type and listen at the same time in the computer-based exam. This is quite obvious and might not be a big problem, but keep in mind that you are under considerable stress in the IELTS exam, and for non-native speakers it may be a bit difficult.
    • Highlighting is important in the listening test. For me, this is how I achieved Band 9 in listening – I highlighted the things I thought were important. In computer-based exams, highlighting helps you focus on the keywords, and in my experience I felt more confident when I highlighted things.
  • Reading:
    • Highlighting, again. Highlighting is also important in reading. I'm glad the testing system provides a way to highlight in reading; it is really helpful and mimics the paper exam well.
    • Try to use the color settings to avoid eye strain. The computer-delivered IELTS exam has settings to change the background color. To me, this was really helpful: I changed the background color from white to a slightly more yellowish tone, which caused less eye strain and helped me focus on the exam better.

This week, while setting up a local project for work, I encountered a weird issue during unit testing that has to do with Postgres and its default settings under Windows and other operating systems.

Problem

As an example, consider the following array: [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ]

Sorting this in JavaScript gives a case-sensitive result, where upper case always comes first:

>>> [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ].sort()
[ "A", "CD", "Capacitor", "D", "a", "b", "c", "d" ]

Sorting this in PostgreSQL with a default installation yields a case-insensitive ordering where upper and lower case are mixed:

SELECT regexp_split_to_table('D d a A c b CD Capacitor', ' ') ORDER BY 1;

regexp_split_to_table
-----------------------
a
A
b
c
Capacitor
CD
d
D

The goal here is to make sorting consistent, so we can either fix the Postgres side or fix the JavaScript side.

Investigation

Looking carefully at each of these results, it is not difficult to realize that the sorting in JavaScript is based on character codes: the character A has a code of 65 whereas a has a code of 97, so A comes first. This IS NOT the proper way to do string sorting in JavaScript.
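
A quick check in a JavaScript console illustrates this:

// Array.prototype.sort() with no comparator compares elements as strings,
// code unit by code unit, so every upper-case letter sorts before any lower-case letter.
console.log('A'.charCodeAt(0)); // 65
console.log('a'.charCodeAt(0)); // 97
console.log(['b', 'A', 'a'].sort()); // [ 'A', 'a', 'b' ]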

The result Postgres gave is a bit more complex. Postgres uses LC_COLLATE to determine the sort order. This setting comes from the operating system, and different operating systems have different defaults: when using the C or POSIX locale, strings are sorted according to their byte (ASCII) value, while most other locales produce a case-insensitive ordering.

Solution

As mentioned earlier, we can either fix the JavaScript side or the Postgres side, so I'll present the solutions separately.

There are several ways to make the sorting case insensitive in JavaScript:

  • [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ].sort((a,b) => a.localeCompare(b))
    This uses localeCompare from the String prototype, which has performance implications on larger arrays.

  • [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ].sort(new Intl.Collator('en-US').compare)
    This is the approach recommended by MDN for sorting larger arrays; a quick check of both approaches follows this list.
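
As a quick sanity check, here is a small sketch of both approaches; the exact ordering of equal letters may vary slightly with the ICU data shipped with your Node.js build:

const words = ['D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor'];

// Reusing a single collator avoids re-parsing the locale on every comparison.
const collator = new Intl.Collator('en-US');
console.log([...words].sort(collator.compare));
// e.g. [ 'a', 'A', 'b', 'c', 'Capacitor', 'CD', 'd', 'D' ] – matches the Postgres default ordering

console.log([...words].sort((a, b) => a.localeCompare(b)));
// same ordering, just slower on large arrays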

For Postgres, the first thing to note is that Postgres recommends against using locales if it can be avoided. From their documentation:

The drawback of using locales other than C or POSIX in PostgreSQL is its performance impact. It slows character handling and prevents ordinary indexes from being used by LIKE. For this reason use locales only if you actually need them.

The one-off way to fix the sorting is by specifying the LC_COLLATE value when creating the database, for example:

CREATE DATABASE db 
WITH TEMPLATE = template0
ENCODING = 'UTF8'
LC_COLLATE = 'C'
LC_CTYPE = 'C';

The created database (db in this case) will use C as LC_COLLATE, overriding the default value from the OS. Once you connect to the new database and run the query presented earlier, you can easily verify that it sorts case-sensitively by ASCII value.

This one-off approach is only good enough if you create such a database once. Imagine the next time you create a new database: you would still have to manually override the LC_COLLATE value. So the way to go is to modify the template database. Because LC_COLLATE can't be changed once a database has been created, we have to recreate the template database with the desired locale and set it as a template.


--- Unset template1 as a template so that it can be dropped
UPDATE pg_database
SET datistemplate='false'
WHERE datname='template1';

--- Drop the old template1, then recreate it with the C locale
DROP DATABASE template1;

CREATE DATABASE template1
IS_TEMPLATE true
ENCODING = 'UTF8'
LC_COLLATE = 'C'
LC_CTYPE = 'C'
CONNECTION LIMIT = -1
TEMPLATE template0;

--- A new database created now will have the C locale
CREATE DATABASE db3;

--- Connect to db3 and run this; it should return C
SHOW lc_collate;

Another way to do this is to initialize the database cluster with the C locale, as below:

chown -R postgres:postgres /var/lib/postgres/
su - postgres -c "initdb --locale C -D '/var/lib/postgres/data'"

This will create the template databases with the C locale.

Takeaways

String sorting with the default comparator in JavaScript and many other languages is merely a comparison of character codes (ASCII for these examples), which results in upper-case letters always coming first.

String sorting in PostgreSQL depends on the LC_COLLATE setting of the database, which in turn depends on the operating system's setting. The default sorting yields results that mix upper and lower case; in other words, sorting is not case sensitive. There are many ways to get case-sensitive sorting, but the most reliable is specifying LC_COLLATE when creating the database.


⚠ This article requires a rooted Android phone; please follow the instructions with caution.

Background

Since Android 5, the Android system has had multi-user capability; the aim is most likely to make sharing a device between family members easier. Later on, Google added something called profiles, which allow enterprises to manage the devices their employees use. In this article, I am going to explore some other possibilities enabled by the multi-user/profile functionality. More specifically, I'm going to focus on the restricted user profile.

Problem to Solve

There are several problems I’m aiming to solve with the restricted profile:

  • I want to focus on things that actually matter rather than spending too much time on my phone. I would like my phone to be free of apps that can disturb or distract me when I want to focus, but I don't want to uninstall certain apps, as they are necessary to keep in touch with my family and friends.
  • I want to install certain apps that can track me into a separate space, a space where it is impossible for them to track me using the microphone, camera, or location.

Solutions

Well, as an Android user who also owns an iOS device, I have to say Apple has done a good job with many of the things mentioned above. For example, my first problem can be solved natively in the iOS settings. Android does have something similar called Digital Wellbeing, but since it is from Google, I'm not that comfortable installing it on my phone.

Anyway, I feel like the restricted profile is still a good solution to my problem on Android.

How to?

In the Android shell, there is a command called pm which can be used to create a new user.
Here is the part of its help doc that I'm interested in:

create-user [--profileOf USER_ID] [--managed] [--restricted] [--ephemeral] [--guest] USER_NAME
    Create a new user with the given USER_NAME, printing the new user identifier of the user.

That is, the pm create-user command can be used to create a user on your Android phone. Assuming you have access to adb, issuing
adb shell pm create-user --restricted test will create a restricted user named test. The command prints the user id of the newly created user, referenced as USER_ID below.

Once the user has been created, you can manage the list of apps it will have access to in the settings, under Settings/Multi-user.

In my case, I wanted to impose more restrictions on the user, and there is indeed a way to do so:

adb shell su -c 'pm set-user-restriction --user USER_ID no_install_apps 1' – this will disallow installing apps for the newly created profile. Here no_install_apps is the name of the restriction, while the 1 immediately following it is a boolean indicating whether the restriction is enabled. (This command assumes you use Magisk as su.)

You can find a list of available restrictions in Google's official documentation. Take note of the Constant Value in the documentation; that is the restriction value you need to use.

If you want to see what restrictions you've added, you can issue adb shell dumpsys user; this command will dump all the users on the device along with their restrictions, if any.

To switch to a different user, you can use am switch-user USER_ID or do so from the UI.

The easiest way to delete a user is through the settings menu, though you can also do it with the command pm remove-user USER_ID.

Caveat

There are several issues that need to be taken into consideration when using a restricted profile:

  1. You will not have access to any of the files in the primary account. This is a big issue if you have social media apps installed in this profile and want to upload an image from the primary account.
  2. You will not have Google account access or root access. Most people will be okay with this, but it is a bit troublesome at times.
  3. Remember to switch back to the main profile when you no longer need the restricted one. For example, if you leave the restricted profile on overnight, you will not have an alarm clock if you haven't set one up correctly.

This week I encountered some issues with Terraform (and, well, Kubernetes) again. This time, the problem was way more interesting than I thought.

Problem

When deploying to Kubernetes, I got dial tcp 127.0.0.1:80: connect: connection refused and connection reset errors.

The more specific error message I got was:

Error: Get "http://localhost/apis/apps/v1/namespaces/default/deployments/xxx": dial tcp 127.0.0.1:80: connect: connection refused

As this error happened in our deployment pipeline (we use Terraform to deploy to Kubernetes), my natural thought was that it could be solved easily with a retry. So I retried the deployment right away, and it still failed.

When I finally stopped what I was working on and started to examine the message carefully, I realized it was quite strange: why was the pipeline (or kubectl, for that matter) trying to connect to localhost when it was meant to connect to a Kubernetes cluster located somewhere else?

As you will see from my solution, this message was not helpful at all and in some sense quite misleading to someone who is trying to debug.

After comparing the log from a previous successful deployment with the failed one, I realized the issue was with the Kubernetes provider for Terraform: in the successful build, the terraform init command yielded something like Installing hashicorp/kubernetes v1.13.3..., while in the failed build the same command yielded Installing hashicorp/kubernetes v2.0.2....

It is quite obvious that this issue was caused by breaking changes in the Terraform provider. According to their changelog, there were several breaking changes in the 2.0.0 version, among them were these two:

Remove load_config_file attribute from provider block (#1052)
Remove default of ~/.kube/config for config_path (#1052)

In our deployment Terraform code, we set load_config_file to true to load the kubeconfig file from the default config_path of ~/.kube/config. Due to the breaking changes quoted above, neither the load_config_file attribute nor the default config_path exists any more, and when the provider cannot find a configuration it falls back to connecting to 127.0.0.1 (aka localhost), which caused the connection refused error.

Solution

There are two kinds of solutions to this issue:

  • Update the Terraform code so it is compatible with version 2.0.0 of the Kubernetes provider
    OR
  • Downgrade to the last working version of the Kubernetes provider and keep the existing Terraform code

Due to the urgency of getting the pipeline and deployment back online, I chose the downgrade route. Essentially, I added the previously missing version constraint to the Kubernetes provider:

kubernetes = {
  source  = "registry.terraform.io/hashicorp/kubernetes"
  version = "~> 1.0"
}

Adding the version constraint means that Terraform will only increment the rightmost version component, so it will not upgrade to version 2.0.0 automatically, avoiding this specific problem caused by the breaking changes.

Takeaways

On debugging:

  • Generally speaking, if your Terraform changes behavior without you making any changes, you could be making the same mistake I did: not specifying a version constraint for the provider. You can find clues in the terraform init output, for example by checking whether the same provider version was used in the successful build and the failed one.
  • Personally, I was never familiar enough with Kubernetes to know that the default behavior of kubectl is to use 127.0.0.1 when there is no config file present. Now that I've come across this gotcha, I realize this kind of behavior is not that uncommon: Knex, the library we use for Node.js, has similar behavior, and I will keep this in mind if I encounter something similar in the future.

On Terraform:

  • When there is no version constraint specified, Terraform will always use the latest provider version, so it is important to specify one. Terraform recommends always constraining the version when using third-party modules and providers. For more information on specifying version constraints, read the documentation on their website.

Recently, I started to build an application with Go. It is a fairly simple application that does something very basic and then sends a notification to a Telegram bot. It was quite obvious to me that this kind of application is well suited to running as a Lambda, and that's where I decided to deploy it once it was working well locally. It turned out I had to solve several issues along the way. Here I share how I solved them, so you don't have to scratch your head when you encounter them.

Attempt 1: Deploy the application through the web interface.

For my first attempt at deploying the application, my goal was to make things as simple as possible, so I chose to use the web interface.
There is an option to upload a zip file, and that's where I began.

Problem: Compiling Go statically

This problem happens quite often, from what I see on the internet. The main issue here is that some Go libraries use a feature called cgo, which means using C code in Go; when this feature is in use, the Go compiler will produce a dynamically linked binary.

Solving this problem is often as simple as compiling the code into a statically linked binary. Do note that binaries compiled with GCC did not work for me; this is because the local glibc version is often newer than the one used in the AWS Lambda environment – at least that was the case for me (I am on a laptop running Manjaro Linux).

I was able to find something called musl-gcc and used it for my compilation:

build:
	CC="musl-gcc" go build --ldflags '-linkmode external -extldflags "-static"' ./main.go

This proved to work fine: once I compiled the binary, zipped it, and uploaded it to Lambda through the web interface, everything seemed to work.

Attempt 2: Deploy the application through AWS SAM

It is not efficient to manually upload the code as a zip file every time, which is why I started thinking about introducing SAM as a tool to simplify the deployment process. This was when I encountered the second issue.

Problem: Asking SAM to compile the Go program statically

As the heading above says, SAM always compiled the code dynamically, which is why the binary failed to work again, even locally with the sam local invoke command.

Now it was time to tell SAM I didn't want a dynamically linked binary. As a matter of fact, none of the articles available online had a direct answer to my question; fortunately, I did find AWS documentation on using custom runtimes. Based on that documentation, a Go program that wants to use static linking can use the following template:

Resources:
  HelloGOFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: HelloGO
      Handler: main
      Runtime: go1.x
      MemorySize: 512
      CodeUri: .
    Metadata:
      BuildMethod: makefile

And in the Makefile, the corresponding entry needs to be added:

build-HelloGOFunction:
	CC="musl-gcc" go build --ldflags '-linkmode external -extldflags "-static"' ./main.go
	cp ./main $(ARTIFACTS_DIR)

Doing so will make sure the compiled binary is statically linked and works on Lambda when bundled and uploaded.

Notes:

  • WRT1200AC (this router) contains 2 partitions
  • Flashing the firmware through the Luci interface actually flashes the new firmware to the inactive partition

Steps:

  • Download the bin file needed for the upgrade
  • Create a list of all the installed packages on the current version using opkg list-installed | cut -f 1 -d ' ' > /root/installed_packages.txt
  • Choose one of the following methods to flash:
    • Flash the file from the Luci interface
      OR
    • Download the file to the /tmp and then flash using sysupgrade /tmp/*.bin
  • After the flash and reboot, you will be in the partition you weren't on before the flash. It will have all of your previous configs, but the extroot will not be there.
  • Hopefully, you will already have internet access at this point; if not, go ahead and set up internet access first.
  • Once your internet is up, you will need to run some commands to install the packages needed for setup:
    • First, install packages that are necessary to setup extroot:
      opkg update && opkg install block-mount kmod-fs-ext4 kmod-usb-storage e2fsprogs kmod-usb-ohci kmod-usb-uhci fdisk
    • In my case I use f2fs for my extroot, which means I need extra packages, like mkf2fs, to format the drive.
    • Now, format the device you plan to use for extroot; in my case, I ran mkfs.f2fs /dev/sda1, because sda2 was used as swap.
    • At this point, copy the overlay to the newly formatted drive
      mkdir -p /tmp/introot
      mkdir -p /tmp/extroot
      mount --bind / /tmp/introot
      mount /dev/sda1 /tmp/extroot
      tar -C /tmp/introot -cvf - . | tar -C /tmp/extroot -xf -
      umount /tmp/introot
      umount /tmp/extroot
    • Regenerate fstab using block detect > /etc/config/fstab;
    • Reboot
    • You should have a working OpenWrt with extroot now. Change /etc/opkg/distfeeds.conf to point to the corresponding upgraded version.
    • Now run opkg upgrade $(opkg list-upgradable | awk '($1 !~ "^kmod|Multiple") {print $1}') to keep base packages up-to-date.
    • And install all your backed up packages using cat /root/installed_packages.txt|xargs opkg install

Because I don't use dnsmasq, once the steps above finish, I need to do some extra post-installation steps.

Post installation (More as a personal note):

  • Remove the odhcpd-ipv6only package and install odhcpd; this ensures IPv4 DHCP functionality, otherwise only IPv6 addresses will be allocated.

Like it or not, 2020 has been a year where video conferencing is used a lot. For me, most meetings happen in Zoom. Finding the meeting link in the calendar and then clicking on it to join has gradually become the new norm, and it's something I really don't like (the fact that clicking a Zoom link brings up your browser instead of Zoom itself, and then prompts you to click again to open Zoom, is a real pain). As someone who likes to automate things as much as possible, I did eventually find a solution that works for me, albeit one that requires several third-party tools.

Problem Statement: Automatically join a Zoom call for a meeting scheduled in the calendar, without user interaction (on macOS).

Prerequisite:

  • Alfred
    (Unclear whether you need to be a paid user to create custom workflows; the author is a paid user.)
  • zoom-calendar.alfredworkflow
    (I found this Alfred workflow by chance and based my work and this post on it. It is very handy, and I would really like to thank the author for creating it.)
  • Automator
    (The built-in automation app in macOS from Apple)

Solution:

Assuming you have already installed the Alfred app, you will need to go to the GitHub repo above, follow the instructions given, and install the Alfred workflow.

Once the workflow has been installed, we need to do some tweaking. Add an external trigger to this workflow and give it an id of 'Automator'.

Now, open up Automator and choose the Calendar Alarm workflow type:

Copy and paste the following code to the calendar alarm:

Now comes the tricky part. You first need to export your calendar from the cloud – from the Google Calendar website or whatever calendar service you are using.

Then open up your Calendar app and create a new local calendar; give it whatever name you want – in my case, I simply named it Automator. At this point, you can import the iCal file exported above.

These two steps are necessary if you want to use the automation for most of your events. If there are only a few events you would like to automate, you can just copy them in the Calendar app and paste them into the local calendar. In any case, a new local calendar is necessary, otherwise the alarm trigger will not work.

Once you have finished setting up your local calendar, you can start adding the file trigger that will open Zoom. To do this, modify the event of your choice and change its alert settings: set the alert to Custom, choose the 'Open file' option, and then change the dropdown from 'Calendar' to 'Other…'.

Normally, the file you created with the Calendar Alarm will be saved to ~/Library/Workflows/Applications/Calendar, so go ahead and find that folder and choose the file.

At this point, you will have working calendar automation for this event. If you want it on more events, you will need to repeat the alert-changing steps for each of them.

Future improvements & Alternatives

I have to admit the solution described above is not perfect and requires some setup. Still, once I set it up, everything worked fine for me, and thanks to this automation I never need to remember to join a Zoom meeting.

Some future improvements and/or caveats I found with this method:

  • The events must have the zoom link somewhere (either description or location) for this automation to work.
  • If there are two back-to-back meetings, the automation will fail. This is because the previous meeting hasn't finished yet, so the Alfred workflow will still list it at the top. I haven't found a good solution to this.

There are several alternative ways I can think of:

  • Use Zoom itself: if you are logged into Zoom and allow it to access your calendar, it provides a Join button in the app that lets you join the meeting without extra clicks.

  • Bookmark the Zoom URL schemes and click on them. This is basically how the workflow works behind the scenes: converting the URL from http to the Zoom URL scheme and then opening it. I won't go in depth on how to create a bookmark and convert the links to URL schemes, but Zoom provides a great doc on its schemes.

As a developer, you will sometimes face weird problems, and it is important to come up with reliable, repeatable ways to solve them, so that when such problems come up again you can find a solution more easily. For me, one of the most useful tools on Unix-like systems is jq, a tool for processing JSON. Let me demonstrate how I use it to solve some problems I encountered at work.

Problem: Convert a JSON file to CSV

Example JSON

[
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  }
]

JQ code to generate csv:

jq -r '(.[0] | keys_unsorted) as $keys | ([$keys] + map([.[ $keys[] ]])) [] | @csv'

Resulting CSV:

"title","artist","album","year"
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989

Problem: Aggregate JSON object.

Example JSON:

{
  "A": [{ "Name": "A1" }, { "Name": "A2" }],
  "B": [{ "Name": "B1" }, { "Name": "B2" }],
  "C": [{ "Name": "C" }]
}

The goal is to produce something like below:

{ "A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C"] }

It transforms the object and aggregates (or compresses?) the entries by their "Name" property. I know this can easily be done with JavaScript, but jq and bash are more widely available and come in handy when JavaScript is not an option (a JavaScript version is sketched after the jq solution below, for comparison).

The jq code I came up with is as follows:

jq '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add'
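
For comparison, here is a minimal JavaScript sketch of the same aggregation (assuming the JSON has already been parsed into an object):

const input = {
  A: [{ Name: 'A1' }, { Name: 'A2' }],
  B: [{ Name: 'B1' }, { Name: 'B2' }],
  C: [{ Name: 'C' }]
};

// Map each array of objects to just its Name values, keyed by the original property.
const result = Object.fromEntries(
  Object.entries(input).map(([key, items]) => [key, items.map(item => item.Name)])
);

console.log(result); // { A: [ 'A1', 'A2' ], B: [ 'B1', 'B2' ], C: [ 'C' ] }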
