
Notes:

  • The WRT1200AC (this router) has two firmware partitions
  • Flashing firmware through the LuCI interface actually writes the new firmware to the inactive partition

Steps:

  • Download the sysupgrade .bin file for the target version
  • Create a list of all packages installed on the current version using opkg list-installed | cut -f 1 -d ' ' > /root/installed_packages.txt
  • Choose one of the following methods to flash:
    • Flash the file from the LuCI interface
      OR
    • Download the file to /tmp and then flash using sysupgrade /tmp/*.bin
  • After the flash and reboot, you will boot into the partition you weren’t on before the flash. It will have all of your previous configs, but the extroot will not be there.
  • Hopefully you will already have internet access at this point; if not, go ahead and set up internet.
  • Once your internet is up, you will need to run some commands to install the packages needed for setup:
    • First, install packages that are necessary to setup extroot:
      opkg update && opkg install block-mount kmod-fs-ext4 kmod-usb-storage e2fsprogs kmod-usb-ohci kmod-usb-uhci fdisk
    • In my case I use f2fs for my extroot, which means I need an extra package, mkf2fs, to format the flash drive.
    • Now, format the device you plan to use for extroot. In my case I ran mkfs.f2fs /dev/sda1, since sda2 was used as swap.
    • At this point, copy the overlay to the newly formatted drive:
      mkdir -p /tmp/introot
      mkdir -p /tmp/extroot
      mount --bind / /tmp/introot
      mount /dev/sda1 /tmp/extroot
      tar -C /tmp/introot -cvf - . | tar -C /tmp/extroot -xf -
      umount /tmp/introot
      umount /tmp/extroot
    • Regenerate fstab using block detect > /etc/config/fstab, then edit the generated config so the new partition is mounted as /overlay (see the sketch after this list)
    • Reboot
    • You should have a working OpenWrt with extroot now. Change /etc/opkg/distfeeds.conf to point at the corresponding upgraded version.
    • Now run opkg upgrade $(opkg list-upgradable | awk '($1 !~ "^kmod|Multiple") {print $1}') to bring the base packages up to date.
    • And install all your backed-up packages using cat /root/installed_packages.txt | xargs opkg install
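For reference, after block detect the mount stanza for the extroot partition in /etc/config/fstab should end up looking roughly like this (a minimal sketch; the uuid is a placeholder for whatever block detect emitted for your partition):

config 'mount'
        option  uuid    '<UUID_OF_SDA1>'
        option  target  '/overlay'
        option  enabled '1'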

Because I don’t use dnsmasq, once the steps above finish, I need to do some extra post-installation steps.

Post-installation (more of a personal note):

  • Remove the odhcpd-ipv6only package and install odhcpd; this ensures IPv4 DHCP functionality, otherwise only IPv6 addresses will be allocated.

November 2021 Update: Updating to OpenWrt 21.02

I upgraded to the OpenWrt 21.02 branch in September of this year. It wasn’t as easy as I had thought, but I solved the problems in the end. In release 21.02, OpenWrt introduced what’s called DSA, or Distributed Switch Architecture, for this WRT1200AC device. To be honest, it is counter-intuitive to set up, but it seems to be a more standard approach compared to the previous configuration.

The part that I had problems with was mostly the VLAN tagging. My ISP requires a tagged VLAN to be able to use the network, so in OpenWrt I have to tag the WAN port with the VLAN ID. I was able to get it working with the following block:

config device
        option name 'wan'
        option macaddr '<MAC_ADDRESS>'

config interface 'wan'
        option device 'wan.10'
        option proto 'pppoe'
        option username '<PPPOE_USERNAME>'
        option ipv6 '1'
        option peerdns '0'
        option password '<PPPOE_PASSWORD>'
        list dns '127.0.0.1'

config interface 'wan6'
        option proto 'dhcpv6'
        option reqaddress 'try'
        option reqprefix 'auto'
        option peerdns '0'
        list dns '::1'
        option device '@wan'

It actually is quite straightforward once you get the idea behind the configuration. For example, in the config shared above, a device sits at layer 2 of the networking stack while an interface sits at layer 3. Here is the OpenWrt wiki’s explanation:

Devices are physical connections that convey bits/frames to other computers. They operate at layer 2 in the protocol stack, have a MAC address along with several other configurable parameters.
Network devices identify and configure hardware components of the device: individual Ethernet switch ports, wireless radios, USB networking devices, VLANs, or virtual ethernets.
Alternatively, bridge devices group several network devices together so they can be treated as a single entity. A bridge device functions like a separate unmanaged (hardware) switch, forwarding traffic between member ports as needed at the hardware level to maintain performance. Each physical port can be a member of only a single bridge device.
Interfaces route IP packets and operate at layer 3 in the protocol stack. An interface is associated with a single device that sends/receives its packets. Interfaces get their IP address parameters by the choice of protocol: Static, DHCP, PPP, 6in4, Wireguard, OpenVPN, etc.



Like it or not, 2020 has been a year in which video conferencing was used a lot. For me, most meetings happen on Zoom. Finding the meeting link in the calendar and clicking on it to join had gradually become the new norm, and it is something I really don’t like (the fact that clicking a Zoom link brings up your browser instead of Zoom itself, prompting you to click again to open Zoom, is a real pain). As someone who likes to automate as much as possible, I eventually found a solution that works for me, although it requires several third-party tools.

Problem Statement: Automatically join a Zoom call for a meeting scheduled in the calendar, without user interaction (on macOS).

Prerequisite:

  • Alfred
    (Unclear if you need to be a paid user to create custom workflows; the author is a paid user)
  • zoom-calendar.alfredworkflow
    (Yep, I found this Alfred workflow by chance and based my work and this blog post on it. It is very handy and I would really like to thank the author for creating it.)
  • Automator
    (The built-in automation app in macOS from Apple)

Solution:

Assuming you have already installed the Alfred app, go to this GitHub repo, follow the instructions given, and install the Alfred workflow.

Once the workflow has been installed, we need to do some tweaking. Add an external trigger to this workflow and give it an ID of ‘Automator’.

Now, open up Automator and choose the Calendar Alarm document type when creating a new workflow.

Copy and paste the following code to the calendar alarm:
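Something along these lines should do it (a minimal sketch: it fires the workflow’s external trigger named ‘Automator’ via AppleScript; the workflow bundle ID below is a placeholder you should look up in Alfred’s workflow settings, and the application ID may differ between Alfred versions):

# Fire the Alfred external trigger with the ID "Automator".
osascript -e 'tell application id "com.runningwithcrayons.Alfred" to run trigger "Automator" in workflow "com.example.zoom-calendar"'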

Now comes the tricky part. You first need to export your calendar from the cloud: export from the Google Calendar website or whatever calendar service you are using.

Then open up your Calendar app and create a new local calendar; give it whatever name you want. In my case, I simply named it Automator. At this point, you can import the ical file exported above.

These two steps are necessary if you want to use the automation for most of your events. If there are only a few events you would like to automate, you can just use the copy function in the Calendar app and paste into the local calendar. In any case, a new local calendar is necessary; otherwise the alarm trigger will not work.

Once you have finished setting up your local calendar, you can start adding the file trigger that will open Zoom for you. To do this, modify the event of your choice, change the alert setting to Custom, choose the ‘Open file’ option, and then change the dropdown from ‘Calendar’ to ‘Other…’.

Normally, the file you created with the Calendar Alarm will be saved to ~/Library/Workflows/Applications/Calendar, so go find that folder and choose the file.

At this point, you will have a working version of the calendar automation for this event. If you want it on more events, you will need to repeat the alert-changing steps for each of the other events.

Future improvements & Alternatives

I have to admit the solution described above is not perfect, and it requires some setup. Still, once I set it up, everything worked fine for me, and thanks to this automation I never need to remember to join a Zoom meeting.

Some future improvements and/or caveats that I found with this method:

  • The events must have the Zoom link somewhere (either the description or the location) for this automation to work.
  • If there are two back-to-back meetings, the automation will fail. This is because the previous meeting hasn’t finished yet, and the given Alfred workflow will still list it at the top. I haven’t found a good solution to this.

There are several alternative ways I can think of:

  • Use Zoom itself. If you are logged into Zoom and allow it to access your calendar, it will show a Join button in the app that lets you join the meeting without further clicks.

  • Bookmark the Zoom URL schemes and click on them. This is basically how the workflow works behind the scenes: converting the URL from http to the Zoom URL scheme and then opening it. I won’t go in depth on how to create a bookmark and convert the links to URL schemes, but Zoom provides a great doc on their schemes here.
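To give a rough idea of the conversion (the meeting ID below is made up), a join link like https://zoom.us/j/123456789 maps to the zoommtg scheme, which you can open straight from a shell:

# Open the Zoom client directly instead of going through the browser.
open "zoommtg://zoom.us/join?confno=123456789"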

As a developer, you will sometimes face weird problems. It is important to come up with reliable and repeatable ways to solve these problems, so that when they come up again you can find a solution more easily. For myself, one of the tools I have found most useful on Unix-like systems is jq, a tool for processing JSON. Let me demonstrate how I used this tool to solve some problems I encountered at work.

Problem: Convert a JSON file to CSV

Example JSON

[
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  }
]

JQ code to generate csv:

jq -r '(.[0] | keys_unsorted) as $keys | ([$keys] + map([.[ $keys[] ]])) [] | @csv'

Resulting CSV:

"title","artist","album","year"
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989

Problem: Aggregate a JSON object.

Example JSON:

{
  "A": [{ "Name": "A1" }, { "Name": "A2" }],
  "B": [{ "Name": "B1" }, { "Name": "B2" }],
  "C": [{ "Name": "C" }]
}

The goal is to produce something like below:

{ "A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C"] }

It transforms the objects and aggregates (or compresses?) them by the “Name” property. I know this can easily be done with JavaScript, but jq and bash seem more widely available and will come in handy when JavaScript is not an option.

The jq code I came up with is as follows:

jq '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add'
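Piping the example document through the filter shows the aggregation in action:

echo '{"A":[{"Name":"A1"},{"Name":"A2"}],"B":[{"Name":"B1"},{"Name":"B2"}],"C":[{"Name":"C"}]}' | jq -c '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add'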


What would be the easiest way to remove all but one word from the end of each line in Vim?

Today, I found a challenge on vimgolf and thought it might be interesting to share my solutions.

Here is the original file:

abcd 1 erfg 7721231
acd 2 erfgasd 324321
acd 3 erfgsdasd 23432
abcd 5 erfgdsad 123556

Here is the desired result:

7721231
324321
23432
123556

The challenge is quite straightforward: delete everything but the last word on each line. I found several ways to tackle this challenge, so let me show them all:

Take 1: Use macro only

Record a macro, and play it back, so the keystrokes would be

qa $bd0j q 4@a <cr>

qa starts recording a macro into register a
$ move cursor to the end of the line
b move cursor back to the beginning of the last word
d0 means *d*elete to the beginning of the line
j move cursor down
q finish the macro
4@a repeat the macro in register a 4 times

Take 2: Use regex and %norm

It’s quite obvious that all we want to keep from the original file are the numbers. So the regex is simple to come up with; something as basic as /\d\+$<cr> will do. Once you type this into Vim, all the numbers at the end of the lines will be highlighted. Next you can do:

:%norm dn <cr>

% means apply to the whole file
norm means execute the following command in normal mode
dn means *d*elete to the *n*ext match

Take 3: No regex pure %norm

This is the fastest way I could come up with. Still not as fast as the top answers on VimGolf, but decent in my opinion. Slightly different from the option above, it still uses %norm:

%norm $bd0 <cr>

% means apply to the whole file
$ move cursor to the end of the line
b move cursor back to the beginning of the last word
d0 means *d*elete to the beginning of the line

Takeaways:

  • norm is quite powerful and can be used to achieve complex things that would otherwise require a macro.
  • d, the delete command, is useful in many unexpected ways. Besides the dn and d0 commands mentioned above, which delete to the next match and to the beginning of the line respectively, another useful variation of d is d4/x, where 4/x means the 4th occurrence of x.

This week, I was tasked with creating basic infrastructure for one of our new websites. We use Fastly as our CDN and New Relic as a log aggregation tool, and most of our infrastructure is set up using Terraform, a popular infrastructure-as-code (IaC) platform. Terraform supports Fastly through a provider plugin, which also supports New Relic as a logging endpoint. We needed to customize the log format so that we can find the logs easily in New Relic. As expected, the plugin provides a pretty straightforward way to accomplish this: you come up with a proper JSON string that aligns with the format Fastly describes in their documentation. This seemingly straightforward task ended up taking me some time to debug.

In Terraform, you can create objects, and an object can be converted to a string using the jsonencode function. This is where the pitfall comes in. Fastly’s formatting documentation lists an option called “%>s”, which represents the final status of the Fastly request. From my perspective, it would definitely be helpful to include this in the logs Fastly ships to New Relic. So I added it to my formatting object and then ran the object through jsonencode. To my surprise, I got an error saying that Fastly failed to parse the formatting option I created, which was quite strange. I then started to debug: I exported TF_LOG=DEBUG, which tells Terraform to print all its debug logs. It turned out that “%>s” had been encoded to “%\u003es” by jsonencode, causing the error. Why is the “>” sign escaped by Terraform? It turns out it’s for backward compatibility with earlier editions of Terraform. According to their documentation:

When encoding strings, this function escapes some characters using Unicode escape sequences: replacing <, >, &, U+2028, and U+2029 with \u003c, \u003e, \u0026, \u2028, and \u2029. This is to preserve compatibility with Terraform 0.11 behavior.

they escape many more characters than you might realize. I really hope they provide a way to opt out of this behavior, or make it opt-in; that would make things way easier for developers.
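For what it’s worth, a workaround I have seen (a sketch with a made-up format object, not the exact config from this incident) is to undo the offending escape with Terraform’s replace function:

locals {
  # Hypothetical log format object for the Fastly/New Relic endpoint.
  log_format = {
    status = "%>s"
    url    = "%r"
  }

  # jsonencode escapes ">" as \u003e for Terraform 0.11 compatibility;
  # replace() turns it back so Fastly can parse the format string.
  log_format_json = replace(jsonencode(local.log_format), "\\u003e", ">")
}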

Recently, I have been eager to get the data saver on Android to work automatically for me. My goal is to have Tasker automatically enable the built-in data saver when my data is low. Before I came up with this idea, I searched the web to see if any solution already existed so I wouldn’t have to reinvent the wheel. Unfortunately, I couldn’t find anything on this topic, so I had to create my own. After one week of testing, things work as I expected, so I decided to share in this article how I did it.

Prerequisite

  • A rooted Android phone – unfortunately, the solution I came up with only works on rooted Android phones; I’m running Android 9.0.
  • A way to get to know your current data usage – some carriers let you query your data usage through SMS, while some might only allow USSD. I am only covering the SMS case here (my case).

How-to

The data saver in Android is basically something that limits all background data usage. After some digging on the Internet, I found that you can turn the data saver on with the following command: cmd netpolicy set restrict-background true (requires root). With that, it is pretty easy to have the data saver turned on automatically when data is low. My way of doing this is as follows:

  • Set a global variable as the threshold for triggering the data saver.
  • Send an SMS to my carrier every morning at 8 AM, and parse the reply to get my remaining data.
  • If my remaining data is lower than the threshold, turn on the data saver whenever mobile data is in use. This is done with the Run Shell action and the command mentioned above (see the sketch after this list).
  • Turn the data saver off automatically when I am connected to WiFi (not strictly necessary, but added just in case).
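The heart of the Run Shell action is just a comparison plus the netpolicy command. A minimal sketch, with shell variables standing in for the Tasker variables you configured:

#!/bin/sh
# DATA_LEFT: remaining data parsed from the carrier SMS.
# THRESHOLD: the global threshold variable set in Tasker.
if [ "$DATA_LEFT" -lt "$THRESHOLD" ]; then
    cmd netpolicy set restrict-background true   # enable data saver (root)
else
    cmd netpolicy set restrict-background false  # disable data saver
fi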

It is quite easy to do this when your carrier allows you to query data usage using SMS; with USSD, however, things are not that easy and unfortunately I haven’t figured out a way yet.

Recently, I decided to convert my QEMU-based virtual machine installed on my Manjaro Linux to the VirtualBox format. The reason behind this is that I would like to be able to use the same VM across different host systems (specifically Manjaro Linux and Windows 7). It is not an easy thing to do, so I decided to document it for future reference.

Prerequisite?

  • An existing image created using QEMU (my VM file ends with .img, for example)
  • VirtualBox

How to?

First things first, you need to convert the QEMU image (img extension) to raw format. This can be done by issuing the following command:

qemu-img convert Windows7-compressed.img -O raw Windows7.raw

This will generate a raw image. Note that this newly generated file might be a lot larger than the file it is based on; this is because the img file allocates space on an as-needed basis.

After you get the raw image, it’s time to convert it to VDI format (which is used by VirtualBox). You can do this by running:

VBoxManage convertfromraw Windows7.raw --format vdi Windows7.vdi
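As an aside, qemu-img can usually write VDI directly, which may let you skip the intermediate raw file; a sketch, assuming your qemu-img build includes the vdi output format:

qemu-img convert Windows7-compressed.img -O vdi Windows7.vdi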

Then, it is recommended to compact the image:

VBoxManage modifyhd Windows7.vdi --compact

So after the previous step, you will have a working VirtualBox image, but if you boot it from VirtualBox, it might not work.

Gotchas!

In my case, what I was trying to convert was a Windows 7 VM, and when I finished the above steps and tried to boot the VM, I got a BSOD. My feeling is that there were some defaults QEMU used that don’t work for a newly created machine in VirtualBox. I tweaked the following setting in the newly created VirtualBox VM:

  • Delete the auto-created SATA controller and attach the disk to an IDE controller instead.

It turned out that, after doing that, everything worked as expected.

Why Tasker?

This is probably an article that is long overdue for me personally. I have been an Android user since 2011, starting with the Nexus S I bought for use in college. The app Tasker has long been famous in the Android community, especially among users who know how to program.

For those who have never heard of it, Tasker is a powerful app that lets you automate almost anything you can think of on the Android system. It was not until August of last year, when I bought the app, that I started to realize how powerful it is. I am really not a big fan of its old-school UI and design, so I didn’t start using it until this year. After using it for a while, I came to realize that no other app comes close; if I were to move to iOS, this is probably among the apps I would miss most. In this article I will explain how I use Tasker to automate things for myself. It is boring stuff, but it has become something I use every day.

What can you use it for?

Here is one of the most used Tasks/Profiles that I have created and use in this app:

Turn off/on ADB when using certain apps.

This is one of the easy ones that I found very useful. It is not uncommon nowadays for some apps to require you to turn off ADB (Android Debug Bridge) when you are using them, which to me is quite annoying. So naturally, I created a Tasker profile together with a task to automate this. The trick here is that you need root access; otherwise you are pretty much out of luck for this particular example. To create such a task, assuming you already have proper root access, go to the TASKS page in the app, click on the add icon, choose a name you like, and then you can create your first task! Think of a task as the things you want Tasker to do for you. In this particular case, my goal is simple: turn ADB off if it is on, and turn it back on if it is off. This way we can have one task that turns ADB off when you open the app and switches it back on when you close the app.

Clearly we need a global variable that holds the ADB status. To set it up, add an action: click the add icon while on the “Task Edit” page and filter by Shell; you will see “Run Shell” as a result. Click on it, and in the “Command” input enter settings get global adb_enabled; in the “Store Output In” input, choose a global variable name you would like to use to hold the current ADB status. Just remember that the name must be in all caps for Tasker to treat it as a global variable, and remember to check the “Use Root” checkbox. After this step, things are simple: add the if and else conditions like I mentioned before, set adb_enabled to 0 if it is 1 and to 1 if it is 0, and afterwards don’t forget to set the global variable again.
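Stripped of the Tasker UI, the logic of the whole task boils down to a few lines of shell; a sketch of the equivalent commands the actions run as root:

#!/bin/sh
# Read the current ADB state (1 = on, 0 = off) and flip it.
status=$(settings get global adb_enabled)
if [ "$status" = "1" ]; then
    settings put global adb_enabled 0
else
    settings put global adb_enabled 1
fi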

TL;DR: here is the XML, which you can directly import into your Tasker if you don’t want to create it yourself:

<TaskerData sr="" dvi="1" tv="5.2.bf">
  <Task sr="task3">
    <cdate>1526421645228</cdate>
    <edate>1529892193595</edate>
    <id>3</id>
    <nme>adb_auto</nme>
    <pri>1006</pri>
    <Action sr="act0" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings get global adb_enabled</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
    <Action sr="act1" ve="7">
      <code>37</code>
      <ConditionList sr="if">
        <Condition sr="c0" ve="3">
          <lhs>%ADB_STATUS</lhs>
          <op>0</op>
          <rhs>1</rhs>
        </Condition>
      </ConditionList>
    </Action>
    <Action sr="act2" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings put global adb_enabled 0</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
    <Action sr="act3" ve="7">
      <code>43</code>
    </Action>
    <Action sr="act4" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings put global adb_enabled 1</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
    <Action sr="act5" ve="7">
      <code>38</code>
    </Action>
    <Action sr="act6" ve="7">
      <code>123</code>
      <Str sr="arg0" ve="3">settings get global adb_enabled</Str>
      <Int sr="arg1" val="0"/>
      <Int sr="arg2" val="1"/>
      <Str sr="arg3" ve="3">%ADB_STATUS</Str>
      <Str sr="arg4" ve="3"/>
      <Str sr="arg5" ve="3"/>
    </Action>
  </Task>
</TaskerData>

To import, save the above snippet as an XML file on your phone. In the Tasker app, long-press the “TASKS” tab header, select Import, and choose the file.

To run the task based on your predefined conditions, go to the “PROFILES” tab, click on the add button at the bottom, choose the application you want, then choose the task you imported, and you are good to go! Now the app won’t complain that you have ADB turned on; Tasker will turn ADB off while you are using the app and turn it back on when you are not. :)

What is Google Apps Script?

Before explaining what I did with Google Apps Script, let me explain what it is. It is basically a scripting engine that lets you access all kinds of Google app features (including Gmail, Google Calendar, Google Docs, etc.). There are other ways to access these features, but Google Apps Script might be the easiest way out there. It comes with an online editor (you can work offline as well), and Google can trigger your scripts at certain times based on your needs. Oh, I forgot to mention: Google Apps Script uses JavaScript, which is one of the most popular languages right now (sadly, it doesn’t support ES6 syntax as of this writing).

Why use Google Apps Script?

I have a habit of archiving my emails every week on Monday. It is easy enough to do in the Gmail web interface, but I do need to remember to do it. That made me wonder if I could find a good way to automate it. I always think of using some kind of API first, but it didn’t seem worth spending the time to make an app just for the purpose of archiving emails. Then I came across Google Apps Script, which turned out to be exactly what I needed to automate this.

What I came up with

Here is what I came up with to archive week-old emails every Monday at midnight:

function gmailAutoarchive() {
  var maxDate = new Date();
  // Archive all threads in the inbox whose last message date is older than today.
  var threads = GmailApp.getInboxThreads();
  for (var i in threads) {
    var thread = threads[i];
    if (thread.getLastMessageDate() < maxDate) {
      thread.moveToArchive();
    }
  }
}

You will be able to use this script by putting it into the Google Apps Script editor, located here.

To have Google run your script automatically, you need to add a trigger, which can be found under Edit -> Current project’s triggers.

Variation

After creating this script, I created another one as a variation: it moves messages with certain labels to the trash.

function gmailAutoRemove() {
  var maxDate = new Date();
  maxDate.setDate(maxDate.getDate() - 30); // the date 30 days ago
  var labels = [
    GmailApp.getUserLabelByName("label1"),
    GmailApp.getUserLabelByName("label2"),
    GmailApp.getUserLabelByName("label3")
  ];
  labels.forEach(removeByLabel(maxDate));
}

function removeByLabel(maxDate) {
  // Returns a callback that trashes threads under the given label
  // whose last message is older than maxDate (first 100 threads only).
  return function(label) {
    var threads = label.getThreads(0, 100);
    for (var i in threads) {
      var thread = threads[i];
      if (thread.getLastMessageDate() < maxDate) {
        thread.moveToTrash();
      }
    }
  };
}

References: Auto archive emails in Gmail after 2 days

Fun with GPG

This week, I made some great progress in understanding how GPG works, both locally and through email. The original intention behind all this was that I would like my router to send me a notification whenever Transmission finishes downloading a torrent. This sounds simple, and it had been working correctly for me for several months since I created the initial script.

This week, however, I decided to do something special: I would like the router to sign/encrypt the messages it sends me. I am not sure why I need that, but anyway, I did get to learn a lot through the process.

Here is my original script. It simply uses the mailx program installed on the router and sends the email through SMTP. It looks quite simple, and it is pretty much the same as the script I showed in the previous post:

#!/bin/sh

SMTP_SERVER=YOUR_EMAIL_SMTP
MESSAGE="Hello!\n\n \tThis is a notification from transmission, $TR_TORRENT_NAME has been completed on $TR_TIME_LOCALTIME\n\n Thanks!"
SENDER="YOUR_EMAIL_USER_NAME"
RECIPIENT="EMAIL_TO_RECEIVE"

printf "$MESSAGE"|mailx -vr $SENDER -s "[Transmission] Torrent Has Been Downloaded" -S smtp=$SMTP_SERVER -S smtp-use-starttls -S smtp-auth=login -S smtp-auth-user=$SENDER -S smtp-auth-password="YOUR_EMAIL_PASSWORD" -S ssl-verify=ignore $RECIPIENT

A little more explanation here: there are two variables preset by Transmission. $TR_TORRENT_NAME is the name of the torrent that has just finished, and $TR_TIME_LOCALTIME is the time when the download finished. Several other environment variables are also set by Transmission; here is a list of them^1.
Note: The meanings of these variables are not explicitly documented in the wiki; I guessed them based on my understanding.

Env Variable Name    Meaning
TR_APP_VERSION       The version of the Transmission app.
TR_TIME_LOCALTIME    The time when the current torrent finished downloading.
TR_TORRENT_DIR       The directory the torrent's content was downloaded to.
TR_TORRENT_HASH      The hash value of the torrent.
TR_TORRENT_ID        The ID of this torrent (in the download list, for Transmission bookkeeping?).
TR_TORRENT_NAME      The name of the torrent.

So my initial thought was that adding GPG encryption or signing would be as easy as adding a new stage to the pipeline that redirects the output through GPG. However, it turned out to be much more difficult than that. When the script is called by Transmission, the environment variables required by GPG are not set; because of this, GPG fails to find the private key used to sign/encrypt the message and therefore fails to encrypt. After setting the environment variables in the script, GPG encryption works correctly. Here is the working script with encryption and signing:

#!/bin/sh

SMTP_SERVER=YOUR_EMAIL_SMTP
MESSAGE="Hello!\n\n \tThis is a notification from transmission, $TR_TORRENT_NAME has been completed on $TR_TIME_LOCALTIME\n\n Thanks!"
SENDER="YOUR_EMAIL_USER_NAME"
RECIPIENT="EMAIL_TO_RECEIVE"
# GPG cannot find its keyring without these when invoked by Transmission.
# Note: the variable GPG actually honors is GNUPGHOME (pointing at your
# .gnupg directory), not GPGHOME.
export HOME="YOUR HOME DIRECTORY"
export GNUPGHOME="YOUR .gnupg DIRECTORY"

printf "$MESSAGE"|gpg --sign --encrypt --passphrase "your pass phrase" --batch --armor -r recipient_pubkey_id |mailx -vr $SENDER -s "[Transmission] Torrent Has Been Downloaded" -S smtp=$SMTP_SERVER -S smtp-use-starttls -S smtp-auth=login -S smtp-auth-user=$SENDER -S smtp-auth-password="YOUR_EMAIL_PASSWORD" -S ssl-verify=ignore $RECIPIENT