
This week I encountered some issues with Terraform (and, well, Kubernetes) again. This time, the problem was way more interesting than I thought.

Problem

When deploying to Kubernetes, I got a dial tcp 127.0.0.1:80: connect: connection refused (connection reset) error.

The more specific error message I got is

Error: Get "http://localhost/apis/apps/v1/namespaces/default/deployments/xxx": dial tcp 127.0.0.1:80: connect: connection refused

As this error happened in our deployment pipeline (we use Terraform to deploy stuff to Kubernetes), my natural thought was that it could be solved easily with a retry. So I retried the deployment right away, and it still failed.

When I finally stopped what I was working on and started examining the message carefully, I realized it was quite strange: why was the pipeline (or kubectl, for that matter) trying to connect to localhost when it was meant to connect to a Kubernetes cluster located somewhere else?

As you will see from my solution, this message was not helpful at all, and in some sense quite misleading to anyone trying to debug.

After comparing the log from a previous successful deployment with the failed one, I realized the issue was with the Kubernetes provider for Terraform: in the successful build, the terraform init command yielded something like Installing hashicorp/kubernetes v1.13.3..., while in the failed build the same command yielded Installing hashicorp/kubernetes v2.0.2....

It is quite obvious that this issue was caused by breaking changes in the Terraform provider. According to its changelog, there were several breaking changes in version 2.0.0; among them were these two:

Remove load_config_file attribute from provider block (#1052)
Remove default of ~/.kube/config for config_path (#1052)

In our deployment Terraform, we set load_config_file to true to load the kubeconfig from the default config_path of ~/.kube/config. Due to the breaking changes quoted above, neither the load_config_file attribute nor the default config_path existed any more, and when the Kubernetes provider has no configuration to go on, it falls back to connecting to 127.0.0.1 (aka localhost), which caused the connection refused error.

Solution

There are two kinds of solutions to this issue:

  • Update the Terraform code so it is compatible with version 2.0.0 of the Kubernetes provider (sketched below)
    OR
  • Downgrade to the last working version of the Kubernetes provider and keep the existing Terraform code
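For the first route, the 2.x provider expects the kubeconfig path to be spelled out explicitly, since both load_config_file and the default path are gone. A minimal sketch (not what I deployed, since I took the second route):

provider "kubernetes" {
  # under 2.x there is no default; point at whatever kubeconfig your pipeline uses
  config_path = "~/.kube/config"
}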

Due to the urgency of getting the pipeline and deployment back online, I chose the downgrading route. Essentially, I added the previously missing version constraint to the Kubernetes provider entry (this snippet lives inside the terraform { required_providers { … } } block):

kubernetes = {
  source  = "registry.terraform.io/hashicorp/kubernetes"
  version = "~> 1.0"
}

Adding the ~> (pessimistic) version constraint means that Terraform will only increase the rightmost version component, so it can pick up any 1.x release but will never upgrade to version 2.0.0 automatically, which avoids this specific class of problem caused by breaking changes.

Takeaways

On debugging:

  • Generally speaking, if your Terraform setup changes behavior without you changing anything, you could be making the same mistake I did: not specifying a version constraint for a provider. You can find clues in the output of terraform init, for example by comparing whether the same provider version was installed in the successful build and the failed one.
  • Personally, I was never familiar enough with Kubernetes to know that the default behavior of kubectl (and clients configured like it) is to fall back to 127.0.0.1 when no config file is present. Now that I have come across this gotcha, I realize this kind of behavior is not that uncommon: Knex, the library we use for Node.js, has a similar fallback, and I will keep this in mind if I encounter something similar in the future.

On Terraform:

  • When no version constraint is specified, Terraform will always use the latest provider version, so it is important to specify one. Terraform recommends always pinning to a specific version when using third-party modules. For more information on specifying version constraints, read the documentation on their website.

Recently, I started building an application with Go. It is a quite simple application that does something very basic and then sends a notification to a Telegram bot. It was obvious to me that this kind of application is well suited to run as a Lambda, and that's where I decided to deploy it once it was working well locally. It turned out I had to solve several issues along the way. Here I share how I solved them, so you don't have to scratch your head when you encounter them.

Attempt 1: Deploy the application through the web interface.

For my first attempt at deploying the application, my goal was to make things as simple as possible, so I chose the web interface. There's an option there to upload a zip file, and that's where I began.

Problem: Compiling Go statically

This problem comes up quite often from what I see on the internet. The main issue is that some Go libraries use a feature called cgo, which means calling C code from Go, and when this feature is in use the Go compiler will produce a dynamically linked binary.
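Before reaching for a C toolchain at all: if none of your dependencies actually need cgo, the simplest fix is to disable it, which makes the Go toolchain produce a statically linked, pure-Go binary on its own. A minimal sketch (not the route I needed, since my build did use cgo):

# with cgo disabled, the resulting binary has no libc dependency
CGO_ENABLED=0 go build -o main ./main.go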

To solve this problem when cgo is in play, it is often as simple as compiling the code into a statically linked binary. Do note that some binaries compiled with GCC still did not work: often the glibc version on the build machine is newer than the one in the AWS Lambda environment. At least that was the case for me (I am on a laptop running Manjaro Linux).

I was able to find something called musl-gcc and used it in my compilation:

build:
	CC="musl-gcc" go build --ldflags '-linkmode external -extldflags "-static"' ./main.go

This proved to work fine: once I compiled the binary, zipped it, and uploaded it to Lambda through the interface, everything seemed to be working.

Attempt 2: Deploy the application through AWS SAM

It is not efficient to manually upload a zip file every time, which is why I started thinking about introducing SAM as a tool to simplify the deployment process. This is when I encountered the second issue.

Problem: Getting SAM to compile the Go program statically

By default, SAM compiled the code dynamically, which is why the binary failed to work again, even locally with sam local invoke.

Now it was time to tell SAM I don't want dynamically linked binaries. As a matter of fact, none of the articles available online had a direct answer to my question; fortunately, I did find AWS documentation on using a custom runtime. Based on that article, a Go program that wants to use static linking can use the following template:

Resources:
  HelloGOFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: HelloGO
      Handler: main
      Runtime: go1.x
      MemorySize: 512
      CodeUri: .
    Metadata:
      BuildMethod: makefile

And in the Makefile, the following corresponding entry needs to be added:

build-HelloGOFunction:
	CC="musl-gcc" go build --ldflags '-linkmode external -extldflags "-static"' ./main.go
	cp ./main $(ARTIFACTS_DIR)

Doing so makes sure the compiled binary is statically linked and works on Lambda when bundled and uploaded.
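With the makefile build method in place, the usual SAM loop looks like this (function name as in the template above):

sam build
sam local invoke HelloGOFunction
sam deploy --guided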

Notes:

  • WRT1200AC (this router) contains 2 partitions
  • Flashing the firmware through the Luci interface actually flashes the new firmware to the inactive partition

Steps:

  • Download the bin file needed for the upgrade
  • Create a list of all the installed packages on the current version using opkg list-installed | cut -f 1 -d ' ' > /root/installed_packages.txt
  • Choose one of the following methods to flash:
    • Flash the file from the Luci interface
      OR
    • Download the file to /tmp and then flash using sysupgrade /tmp/*.bin
  • After the flash and reboot, you will be on the partition that you weren't on before the flash. It will have all of your previous configs, but the extroot will not be there.
  • Hopefully, you will already have internet access at this point; if not, go ahead and set up internet first.
  • Once your internet is up, you will need to run some commands to install the packages needed for setup:
    • First, install packages that are necessary to setup extroot:
      opkg update && opkg install block-mount kmod-fs-ext4 kmod-usb-storage e2fsprogs kmod-usb-ohci kmod-usb-uhci fdisk
    • In my case I use f2fs for my extroot, which means I need extra packages, like mkf2fs, to format the drive.
    • Now, format the device you plan to use for extroot; in my case I ran mkf2fs /dev/sda1, since sda2 was used as swap.
    • At this point, copy the overlay to the newly formatted drive
      mkdir -p /tmp/introot
      mkdir -p /tmp/extroot
      mount --bind / /tmp/introot
      mount /dev/sda1 /tmp/extroot
      tar -C /tmp/introot -cvf - . | tar -C /tmp/extroot -xf -
      umount /tmp/introot
      umount /tmp/extroot
    • Regenerate fstab using block detect > /etc/config/fstab;
    • Reboot
    • You should have a working OpenWrt with extroot now. Change /etc/opkg/distfeeds.conf to the corresponding upgraded version.
    • Now run opkg upgrade $(opkg list-upgradable | awk '($1 !~ "^kmod|Multiple") {print $1}') to keep base packages up-to-date.
    • And install all your backed-up packages using cat /root/installed_packages.txt | xargs opkg install

Because I don't use dnsmasq, once the steps above finish, I need to do some extra post-installation steps.

Post installation (More as a personal note):

  • Remove the odhcpd-ipv6only package and install odhcpd; this ensures IPv4 DHCP functionality, otherwise only IPv6 addresses will be allocated (commands below).
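In shell form, that post-installation step is just:

opkg remove odhcpd-ipv6only && opkg install odhcpd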

Like it or not, 2020 has been a year where video conferencing is used a lot. For me, most meetings happen in Zoom. Finding the link to the meeting in the calendar and then clicking on it to join had gradually become the new norm, and something I really don't like (the fact that clicking a Zoom link brings up your browser instead of Zoom itself, prompting you to click again to open Zoom, is a real pain). As someone who likes to automate things as much as possible, I eventually found a solution that works for me, albeit one that requires several third-party tools.

Problem Statement: Automatically join a Zoom call for a meeting scheduled in the calendar without user interaction (on macOS).

Prerequisite:

  • Alfred
    (unclear if you need to be a paid user to create custom workflows; I am one)
  • zoom-calendar.alfredworkflow
    (I found this Alfred workflow by chance and based my work and this post on it; it is very handy and I would really like to thank the author for creating it.)
  • Automator
    (the built-in automation app in macOS from Apple)

Solution:

Assuming you have already installed the Alfred app, go to the GitHub repo linked above, follow the instructions given, and install the Alfred workflow.

Once the workflow has been installed, we need to do some tweaking. Add an external trigger to this workflow and give it an id of 'Automator'.

Now, open up Automator and create a new Calendar Alarm workflow.

Copy and paste the following code to the calendar alarm:
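The exact snippet isn't reproduced here; what it needs to do is fire the external trigger we just created. A minimal sketch using a Run Shell Script action (the workflow bundle ID below is a placeholder: copy the real one from the workflow's settings in Alfred, and note this AppleScript form targets Alfred 4 and later):

# fire the workflow's external trigger named "Automator"
osascript -e 'tell application id "com.runningwithcrayons.Alfred" to run trigger "Automator" in workflow "com.example.zoom-calendar" with argument ""'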

Now comes the tricky part. You first need to export your calendar from the cloud: export it from the Google Calendar website or whatever calendar service you are using.

Then open up your Calendar app and create a new local calendar; give it whatever name you want. In my case, I simply named it Automator. At this point, you can import the iCal file exported above.

These two steps are necessary if you want to use the automation for most of your events. If there are only a few events you would like to add the automation to, you can just copy them in the Calendar app and paste them into the local calendar. In any case, a new local calendar is necessary, otherwise the alarm trigger will not work.

Once you have finished setting up your local calendar, you can start adding the file trigger that will open Zoom. To do this, modify the event of your choice and change its alert settings: set the alert to custom, choose the 'Open file' option, and change the dropdown from 'Calendar' to 'Other…'.

Normally, the file you created with the Calendar Alarm will be saved to ~/Library/Workflows/Applications/Calendar, so go ahead and pick the file from that folder.

At this point, you will have a working version of the calendar automation for this event; if you want it on more events, you will need to repeat the alert-changing steps for each of them.

Future improvements & Alternatives

I have to admit the solution described above is not perfect, and it requires some steps to set up. Still, once I set it up, everything worked fine for me, and thanks to this automation I never need to remember to join a Zoom meeting.

Some future improvements and/or caveats I found with this method:

  • The events must have the Zoom link somewhere (either the description or the location) for this automation to work.
  • If there are two back-to-back meetings, the automation will fail: the previous meeting hasn't finished yet, so the Alfred workflow still lists it at the top. I haven't found a good solution to this.

There are several alternative ways I can think of:

  • Use Zoom itself: if you are logged into Zoom and allow it to access your calendar, the app provides a Join button that lets you join the meeting without more clicks.

  • Bookmark the Zoom URL scheme and click on that instead. This is basically how the workflow works behind the scenes: converting the URL from http to the Zoom URL scheme and then opening it. I won't go in depth on how to create a bookmark and convert the links to URL schemes, but Zoom provides a great doc on their schemes here, and a small example follows.
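For a flavor of that conversion (the meeting number is made up; see Zoom's URL-scheme doc for the full parameter list):

# a browser link like https://zoom.us/j/123456789
# becomes the scheme form, which opens the Zoom app directly:
open "zoommtg://zoom.us/join?confno=123456789"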

As a developer, you will sometimes face weird problems, and it is important to come up with reliable, repeatable ways to solve them, so that when they come up again you can find a solution more easily. For myself, one of the tools I have found most useful on Unix-like systems is jq, a tool for processing JSON. Let me demonstrate how I use it to solve some problems I encountered at work.

Problem: Convert a JSON file to CSV

Example JSON

[
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  }
]

jq code to generate CSV:

jq -r '(.[0] | keys_unsorted) as $keys | ([$keys] + map([.[ $keys[] ]])) [] | @csv'

Resulting CSV:

"title","artist","album","year"
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989

Problem: Aggregate JSON object.

Example JSON:

{
  "A": [{ "Name": "A1" }, { "Name": "A2" }],
  "B": [{ "Name": "B1" }, { "Name": "B2" }],
  "C": [{ "Name": "C" }]
}

The goal is to produce something like below:

{ "A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C"] }

It transforms the object, aggregating (or compressing?) each list down to its "Name" properties. I know this can easily be done with JavaScript, but jq and bash are more widely available and come in handy when JavaScript is not an option.

The jq code I came up with is as follows:

jq '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add'
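As a usage sketch (groups.json is a hypothetical file holding the input above; -c prints compact one-line output):

jq -c '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add' groups.json
# => {"A":["A1","A2"],"B":["B1","B2"],"C":["C"]}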

References:

What would be the easiest way to remove all but one word from the end of each line in Vim?

Today, I found a challenge on vimgolf and thought it might be interesting to share my solutions.

Here is the original file:

abcd 1 erfg 7721231
acd 2 erfgasd 324321
acd 3 erfgsdasd 23432
abcd 5 erfgdsad 123556

Here is the desired result:

7721231
324321
23432
123556

The challenge is quite straightforward: delete everything but the last word on each line. I found several ways to tackle this challenge, so let me show you all of them:

Take 1: Use macro only

Record a macro and play it back; the keystrokes would be:

qa $bd0j q
4@a

qa starts recording a macro into register a
$ moves the cursor to the end of the line
b moves back to the beginning of the last word
d0 means *d*elete to the beginning of the line
j moves the cursor down
q finishes the macro
4@a replays the macro in register a 4 times

Take 2: Use regex and %norm

It's quite obvious that all we want to keep from the original file are the numbers, so the regex is simple to come up with: something as simple as /\d\+$<cr> will do. Once you type this into Vim, the number at the end of each line will be matched (and highlighted if hlsearch is on). Next you can do:

:%norm dn<cr>

% means apply to the whole file,
norm means execute the following command in normal mode,
dn means *d*elete to the *n*ext match

Take 3: No regex, pure %norm

This is the fastest way I could come up with: still not as fast as the top answers on Vimgolf, but decent in my opinion. It differs slightly from the option above, though it still uses %norm:

:%norm $bd0<cr>

% means apply to the whole file,
$ moves the cursor to the end of the line,
b moves back to the beginning of the last word,
d0 means *d*elete to the beginning of the line

Takeaways:

  • norm is quite powerful and can be used to achieve complex things that would otherwise require a macro.
  • d, the delete command, is useful in many unexpected ways beyond the dn and d0 variants mentioned above, which delete to the next match and to the beginning of the line respectively. Another useful variation is d4/x<cr>, where 4/x means the 4th occurrence of x.

This week, I was tasked with creating basic infrastructure for one of our new websites. We use Fastly for our CDN and New Relic as a log aggregation tool, and most of our infrastructure is set up using Terraform, a popular infrastructure-as-code (IaC) platform. Terraform supports Fastly through a provider, which also supports Fastly's New Relic logging endpoint. We needed to customize the log format so that we can find the logs easily in New Relic. As expected, the provider offers a pretty straightforward way to accomplish this: come up with a proper JSON string that aligns with the format Fastly describes in their documentation. This seemingly straightforward task ended up taking me some time to debug.

In Terraform, you can create objects, and an object can be converted to a string using the jsonencode function. This is where the pitfall comes in: Fastly's formatting documentation lists an option called %>s, which represents the final status of the Fastly request. From my perspective, it would definitely be helpful to include this in the logs we ship. So I added it to my formatting object and then jsonencode'd the object. To my surprise, I got an error saying that Fastly failed to parse the formatting option I created, which was quite strange. I then started to debug: I exported TF_LOG=DEBUG, which asks Terraform to give me all the debug logs, and found that %>s was encoded to %\u003es by jsonencode, causing the error. Why is the > sign escaped by Terraform? It turns out it's for backward compatibility with earlier editions of Terraform. According to their documentation:

When encoding strings, this function escapes some characters using Unicode escape sequences: replacing <, >, &, U+2028, and U+2029 with \u003c, \u003e, \u0026, \u2028, and \u2029. This is to preserve compatibility with Terraform 0.11 behavior.

they escape more characters for you than you might realize. I really hope they provide a way to opt out of this behavior, or make it opt-in; it would make things way easier for developers.
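Until then, a workaround I know of is to chain replace() calls to undo the escaping after jsonencode. A sketch with a made-up local name and a minimal format object:

locals {
  # hypothetical logging format including Fastly's %>s directive
  log_format = {
    status = "%>s"
  }

  # jsonencode escapes <, > and & for Terraform 0.11 compatibility;
  # undo those escapes so Fastly receives the literal characters
  log_format_json = replace(
    replace(
      replace(jsonencode(local.log_format), "\\u003c", "<"),
      "\\u003e",
      ">"
    ),
    "\\u0026",
    "&"
  )
}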

Recently, I have been eager to get the data saver on Android to work automatically for me. My goal is to have Tasker enable the built-in data saver when my remaining data is low. Before implementing the idea, I searched the web to see if any solution already existed, so I wouldn't have to reinvent the wheel. Unfortunately, I couldn't find anything on the topic, so I had to create my own. After a week of testing, things work as I expected, so I decided to share how I did it in this article.

Prerequisite

  • A rooted Android phone – unfortunately, the solution I came up with only works on rooted phones; I'm running Android 9.0.
  • A way to check your current data usage – some carriers let you query data usage through SMS, while others only offer USSD. I am only covering the SMS case here (my case).

How-to

The data saver in Android basically limits all background data usage. After some digging on the Internet, I found that you can turn it on with the command cmd netpolicy set restrict-background true (requires root). From there it is pretty easy to have the data saver turn on automatically when data is low. My way of doing this is as follows (the shell side is sketched after the list):

  • Set a global variable as the threshold for triggering the data saver.
  • Send an SMS to my carrier every morning at 8 AM, then parse the reply to get my remaining data.
  • If my remaining data is lower than the threshold, turn on the data saver whenever mobile data is in use. This is done with the Run Shell action running the command mentioned above.
  • Turn the data saver off automatically when I am connected to WiFi (not necessary, but added just in case).
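For reference, the commands behind those Run Shell actions (all require root; the get form should report the current state, but verify it on your Android version):

# enable data saver
cmd netpolicy set restrict-background true
# disable it again, e.g. when on WiFi
cmd netpolicy set restrict-background false
# check the current state
cmd netpolicy get restrict-background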

It is quite easy to do this when your carrier allows querying data usage via SMS; with USSD, however, things are not that easy and unfortunately I haven't figured out a way yet.

Recently, I decided to convert my QEMU-based virtual machine on Manjaro Linux to the VirtualBox format. The reason is that I would like to use the same VM across different host systems (specifically Manjaro Linux and Windows 7). It is not an easy thing to do, so I decided to document it for future reference.

Prerequisite?

  • An existing image created using QEMU (my VM file ends with .img, for example)
  • VirtualBox

How to?

First things first, you need to convert the QEMU image (extension .img) to raw format, which can be done with the following command:

qemu-img convert Windows7-compressed.img -O raw Windows7.raw

This will generate a raw image. Note that the newly generated file might be a lot larger than the file it is based on; this is because the img file allocates space on demand, while the raw file is fully expanded.
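You can estimate how big the raw file will be before converting (a quick sanity check, not part of the original steps):

qemu-img info Windows7-compressed.img
# "virtual size" is what the raw image will occupy; "disk size" is the current usage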

After you get the raw image, it’s time to convert it to VDI format (which is used by VirtualBox). You can do this by running:

VBoxManage convertfromraw Windows7.raw --format vdi Windows7.vdi

Then, it is recommended to compact the image:

VBoxManage modifyhd Windows7.vdi --compact

After the previous step you will have a valid VirtualBox disk image, but if you boot it in VirtualBox, it might not work yet.

Gotchas!

In my case, what I was converting was a Windows 7 VM, and when I finished the above steps and tried to boot it, I got a BSOD. My feeling is that some defaults QEMU used don't match a newly created machine in VirtualBox. I tweaked the following setting in the new VirtualBox VM:

  • Delete the auto-created SATA controller and attach the disk to an IDE controller instead (see the commands below for a scriptable version).

It turned out that after doing that, everything worked as expected.
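If you prefer the command line over the GUI for that controller change, the equivalent VBoxManage calls look roughly like this (the VM name "Windows7" is made up; use the name of the VM you created):

# add an IDE controller to the VM and attach the converted disk to it
VBoxManage storagectl "Windows7" --name "IDE" --add ide
VBoxManage storageattach "Windows7" --storagectl "IDE" --port 0 --device 0 --type hdd --medium Windows7.vdi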

Why Tasker?

This is probably an article that is long overdue for me personally. I have been an Android user since 2011, starting with the Nexus S I bought for college. Tasker has long been famous in the Android community, especially among users who know how to program.
For those who have never heard of it, Tasker is a powerful app that lets you automate almost anything you can think of on the Android system. For me, it was not until August of last year, when I bought the app, that I started to realize how powerful it is. I am really not a big fan of its old-school UI and design, so I didn't start using it until this year. After using it for a while, I came to realize that no other app comes close; if I were to move to iOS, this is probably among the apps I would miss. In this article I will explain how I use Tasker to automate things for myself. It is boring stuff, but it has become something I use every day.

What can you use it for?

I will describe one of the most-used tasks/profiles I have created in this app:

Turn off/on ADB when using certain apps.

This is one of the easy ones that I found very useful. It is not uncommon nowadays for apps to require you to turn off ADB (Android Debug Bridge) while using them, which I find quite annoying. So naturally, I created a Tasker profile together with a task to automate this. The trick here is that you need root access; otherwise you are pretty much out of luck for this particular example. To create such a task, assuming you already have proper root access, go to the TASKS page in the app, click on the add icon, choose a name you like, and you can create your first task! Think of a task as the things you want Tasker to do for you. In this particular case, my goal is simple: turn ADB off if it is on, and turn it back on if it is off. This way one task toggles ADB off when you open the app and back on when you close it.

Clearly we need a global variable that holds the current ADB status. To get it, add an action: click the add icon on the "Task Edit" page and filter by Shell; you will see "Run Shell" in the results. Click on it, enter settings get global adb_enabled in the "Command" input, and in the "Store Output In" input choose a global variable name to hold the current ADB status. Just remember the name must be in all caps for Tasker to treat it as a global variable, and remember to check the "Use Root" checkbox. After this step, things are simple: add the if and else conditions mentioned before, setting adb_enabled to 0 if it is 1 and to 1 if it is 0, and afterwards don't forget to refresh the global variable. The equivalent shell logic is sketched below.
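For reference, here is the task's logic expressed as the equivalent root shell commands (matching the XML below):

# read the current ADB state, then toggle it
status=$(settings get global adb_enabled)
if [ "$status" = "1" ]; then
  settings put global adb_enabled 0
else
  settings put global adb_enabled 1
fi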

TL;DR: here is the XML, which you can directly import into Tasker if you don't want to create the task yourself:

<TaskerData sr="" dvi="1" tv="5.2.bf">
  <Task sr="task3">
    <cdate>1526421645228</cdate>
    <edate>1529892193595</edate>
    <id>3</id>
    <nme>adb_auto</nme>
    <pri>1006</pri>
    <Action sr="act0" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings get global adb_enabled</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
    <Action sr="act1" ve="7">
      <code>37</code>
      <ConditionList sr="if">
        <Condition sr="c0" ve="3">
          <lhs>%ADB_STATUS</lhs>
          <op>0</op>
          <rhs>1</rhs>
        </Condition>
      </ConditionList>
    </Action>
    <Action sr="act2" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings put global adb_enabled 0</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
    <Action sr="act3" ve="7">
      <code>43</code>
    </Action>
    <Action sr="act4" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings put global adb_enabled 1</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
    <Action sr="act5" ve="7">
      <code>38</code>
    </Action>
    <Action sr="act6" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings get global adb_enabled</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
  </Task>
</TaskerData>

To import, save the above snippet as an XML file on your phone. In the Tasker app, long-press the "TASKS" tab header, select Import, and choose the file.

To run the task based on your predefined conditions, go to the "PROFILES" tab, click the add button at the bottom, choose the application you want, then choose the task you imported, and you are good to go! Now the app won't complain that you have ADB turned on: Tasker will turn ADB off while you are using the app, and turn it back on when you are not! :)