
This week, while setting up a local project for work, I ran into a weird issue during unit testing that turned out to be related to Postgres and its default settings on Windows and other operating systems.

Problem

As an example, consider the following array: [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ]

Sorting this in JavaScript gives a case-sensitive result, where upper case always comes first:

>>> [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ].sort()
[ "A", "CD", "Capacitor", "D", "a", "b", "c", "d" ]

Sorting this in PostgreSQL with a default installation yields a case-insensitive ordering where upper and lower case are mixed:

SELECT regexp_split_to_table('D d a A c b CD Capacitor', ' ') ORDER BY 1;

regexp_split_to_table
-----------------------
a
A
b
c
Capacitor
CD
d
D

The goal here is to make sorting consistent, so we can either fix the Postgres side or fix the JavaScript side.

Investigation

Looking carefully at each of these results, it is not difficult to realize that the sorting in JavaScript is based on character codes (ASCII here): the character A has an ASCII code of 65 whereas a has an ASCII code of 97, so A comes first. This is not the proper way to sort user-facing strings in JavaScript.

The result Postgres gave is a bit more complex. Postgres uses LC_COLLATE to determine the sort order. This setting comes from the operating system, and different OSes have different defaults: with the C or POSIX locale, strings are sorted by their ASCII values, while any other locale yields the mixed-case ordering shown above.

Solution

Here comes the solution part. As mentioned earlier, we can either fix the JavaScript or fix the Postgres side, so I'll present the solutions separately.

There are several ways to make the sorting case insensitive in JavaScript:

  • [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ].sort((a,b) => a.localeCompare(b))
    This uses localeCompare from the String prototype, which has some performance implications on larger arrays.

  • [ 'D', 'd', 'a', 'A', 'c', 'b', 'CD', 'Capacitor' ].sort(new Intl.Collator('en-US').compare)
    This is the approach recommended by MDN for sorting larger arrays (a quick shell check is sketched below).
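
If you want to sanity-check the locale-aware ordering from a terminal, a quick one-liner through Node.js works. This is only a sketch and assumes Node.js is installed; the exact ordering depends on the ICU data your Node build ships with.

node -e "console.log(['D','d','a','A','c','b','CD','Capacitor'].sort(new Intl.Collator('en-US').compare))"
# should print roughly: [ 'a', 'A', 'b', 'c', 'Capacitor', 'CD', 'd', 'D' ]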

For Postgres, the first thing to note is that Postgres recommends against using locales if they can be avoided; from their documentation:

The drawback of using locales other than C or POSIX in PostgreSQL is its performance impact. It slows character handling and prevents ordinary indexes from being used by LIKE. For this reason use locales only if you actually need them.

The one-off way to fix the sorting is to specify the LC_COLLATE value when creating the database, for example:

CREATE DATABASE db 
WITH TEMPLATE = template0
ENCODING = 'UTF8'
LC_COLLATE = 'C'
LC_CTYPE = 'C';

The created database (db in this case) will use C as its LC_COLLATE, overriding the default value inherited from the OS. With the new database created, you can easily verify that it sorts case-sensitively by ASCII value: connect to it and run the query shown earlier.
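
For example, a quick check from the shell could look like the sketch below (assuming a local cluster, a role that can connect to the new db database, and psql on the PATH):

# The collation of the new database should be C
psql -d db -c "SHOW lc_collate;"
# Re-running the earlier query should now give an ASCII-ordered result (upper case first)
psql -d db -c "SELECT regexp_split_to_table('D d a A c b CD Capacitor', ' ') ORDER BY 1;"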

This one-off approach is good enough only if you create such a database once. Imagine the next time you create a new database: you would still have to manually override the LC_COLLATE value. So the way to go is to modify the template database. Because LC_COLLATE can't be changed once a database has been created, we have to recreate template1 with the right locale and mark it as a template.


-- Unset template1 as a template so that it can be dropped
UPDATE pg_database
SET datistemplate='false'
WHERE datname='template1';

-- Drop the old template1 (run this from another database, e.g. postgres)
DROP DATABASE template1;

-- Recreate template1 using C as the locale
CREATE DATABASE template1
IS_TEMPLATE true
ENCODING = 'UTF8'
LC_COLLATE = 'C'
LC_CTYPE = 'C'
CONNECTION LIMIT = -1
TEMPLATE template0;

-- A newly created database should now get C as its locale
CREATE DATABASE db3;

-- Should return C (run while connected to db3)
SHOW lc_collate;

Another way to do this is to initialize the database cluster with the C locale, as below:

chown -R postgres:postgres /var/lib/postgres/
su - postgres -c "initdb --locale C -D '/var/lib/postgres/data'"

This will create the template databases with the C locale.

Takeaways

String sorting with the default method in JavaScript (and most other languages) is merely a comparison based on character codes, which results in upper-case letters always coming first.

String sorting in PostgreSQL depends on the LC_COLLATE setting of the database, which in turn depends on the operating system's settings. The default sorting yields results that mix upper and lower case; in other words, the sorting is not case sensitive. There are many ways to get case-sensitive sorting, but the most reliable one is to specify LC_COLLATE when creating the database.


⚠ This article requires a rooted Android phone; please follow the instructions with caution.

Background

Since Android 5, the Android system has supported multiple users, most likely aimed at making it easier for different members of a family to share a device. Later on, Google added something called profiles, which allows enterprises to manage the devices their employees use. In this article, I am going to explore some other possibilities enabled by the multi-user/profile functionality. More specifically, I'm going to focus on the restricted user profile.

Problem to Solve

There are several problems I’m aiming to solve with the restricted profile:

  • I want to focus on things that actually matter rather than spending too much time on my phone. I would like my phone to be free of apps that can disturb or distract me when I want to focus, but I don't want to uninstall certain apps, as they are necessary to keep in touch with my family and friends.
  • I want to install certain apps that track me into a separate space, one where it is impossible for them to track me using the microphone, camera, or location.

Solutions

Well, as an Android user who also owns an iOS device, I have to say Apple has done a good job on many of the things I mentioned above. For example, my first problem can be solved natively in the iOS settings. Android does have something similar called Digital Wellbeing, but since it is from Google, I'm not that comfortable installing it on my phone.

Anyway, I feel the restricted profile is still a good solution to my problems on Android.

How to?

In the Android shell, there's a command called pm that can be used to create a new user.
Here is the part of its help text that I'm interested in:

create-user [--profileOf USER_ID] [--managed] [--restricted] [--ephemeral] [--guest] USER_NAME Create a new user with the given USER_NAME, printing the new user identifier of the user.

That is, the command pm create-user can be used to create a user on your Android phone. Assuming you have access to adb, issuing
adb shell pm create-user --restricted test will create a restricted user named test. The command prints the user id of the newly created user, referenced as $userId (or USER_ID) below.

Once the user has been created, you can manage the apps it will have access to in the settings, under Settings/Multi-user.

I wanted to impose more restrictions on the user, and there is indeed a way to do so:

adb shell su -c 'pm set-user-restriction --user USER_ID no_install_apps 1' – this disallows installing apps for the newly created profile. Here no_install_apps is the name of the restriction, while the 1 immediately following it is a boolean indicating whether the restriction is enabled. (This command assumes you use Magisk as su.)

You can find the list of available restrictions in Google's official documentation. Take note of the Constant Value listed in the document; that's the restriction value you need to use. A short sketch of applying a few of them follows.
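
As an illustration only, applying a couple of restrictions could look like the sketch below. The restriction names are examples taken from that documentation, $userId is assumed to be a shell variable holding the id printed by pm create-user, and Magisk's su is assumed as above.

# Apply a few example restrictions to the restricted user
for r in no_install_apps no_share_location; do
  adb shell su -c "pm set-user-restriction --user $userId $r 1"
done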

If you want to see which restrictions you've added, you can issue adb shell dumpsys user; this dumps all the users on the device along with their restrictions, if any.

To switch to a different user, you can use am switch-user USER_ID or do so from the UI.

The easiest way to delete a user is through the settings menu, though you can also do it from the command line: pm remove-user USER_ID

Caveat

There are several issues that need to be taken into consideration when using a restricted profile:

  1. You will not have access to any of the files in the primary account. This is a big issue if you have social media apps installed in this profile and want to upload an image from the primary account.
  2. You will not have Google account access or root access. Most people will be okay with this, but it is a bit troublesome at times.
  3. Remember to switch back when you no longer need to be in the restricted profile. For example, if you leave the restricted profile on overnight, you will not have an alarm clock if you haven't set one up correctly.

This week I encountered some issues with Terraform (and, well, Kubernetes) again. This time, the problem was way more interesting than I thought.

Problem

When deploying to Kubernetes, I got a dial tcp 127.0.0.1:80: connect: connection refused / connection reset error.

The more specific error message I got is

Error: Get "http://localhost/apis/apps/v1/namespaces/default/deployments/xxx": dial tcp 127.0.0.1:80: connect: connection refused

As this error happened in our deployment pipeline (we use Terraform to deploy things to Kubernetes), my natural thought was that it could be solved easily with a retry. So I retried the deployment right away, and it still failed.

When I finally stopped what I was working on and started to examine the message carefully, I realized it was quite strange: why is the pipeline (or the Kubernetes client, for that matter) trying to connect to localhost when it is meant to connect to a Kubernetes cluster located somewhere else?

As you will see from my solution, this message was not helpful at all and, in some sense, quite misleading to anyone trying to debug it.

After comparing the log from a previous successful deployment with that of the failed one, I realized the issue was with the Kubernetes provider for Terraform: in the successful build, the terraform init command yielded something like Installing hashicorp/kubernetes v1.13.3..., while in the failed build the same command yielded Installing hashicorp/kubernetes v2.0.2....

It is quite obvious that this issue was caused by breaking changes in the Terraform provider. According to the changelog, there were several breaking changes in the 2.0.0 version, among them these two:

Remove load_config_file attribute from provider block (#1052)
Remove default of ~/.kube/config for config_path (#1052)

In our deployment Terraform, we set load_config_file to true to load the kubeconfig file from the default config_path of ~/.kube/config. Due to the breaking changes quoted above, neither the load_config_file attribute nor the default config_path exists any more, and when the Kubernetes provider cannot find any configuration it falls back to connecting to 127.0.0.1 (aka localhost), which caused the connection refused error.

Solution

There are two kinds of solutions to this issue:

  • Updating the Terraform code so it is compatible with the 2.0.0 version of the Kubernetes provider
    OR
  • Downgrade to the last working version of the Kubernetes provider and keep the existing Terraform code

Due to the urgency of getting the pipeline and deployment back online, I chose the downgrade route. Essentially, I added the version constraint to the Kubernetes provider that was previously missing:

kubernetes = {
  source  = "registry.terraform.io/hashicorp/kubernetes"
  version = "~> 1.0"
}

Adding the ~> 1.0 version constraint means that Terraform will only increase the rightmost version number (staying on 1.x), so it cannot upgrade to version 2.0.0 automatically, which avoids this specific breaking-change problem.
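
After adding the constraint, re-initializing lets Terraform select a matching 1.x release again. A rough sketch (the -upgrade flag allows Terraform to change the provider version it had previously selected):

terraform init -upgrade
terraform version   # lists the selected provider versions for the initialized directory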

Takeaways

On debugging:

  • Generally speaking, if you find that your Terraform setup changed behavior without you making any changes, you could be making the same mistake I did: not specifying a version constraint for a provider. You can find clues in the terraform init output, for example by comparing whether the same provider version was used in the successful and the failed builds (see the sketch after this list).
  • Personally, I was never familiar enough with Kubernetes to know that its client defaults to 127.0.0.1 when there is no config file present. Now that I have come across this gotcha, I realize this kind of behavior is not that uncommon: Knex, the library we use for Node.js, has similar fallback behavior, and I will keep this in mind if I encounter something similar in the future.
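
For what it's worth, a crude way to do that comparison is to grep both build logs for the provider install line. The file names here are hypothetical; use wherever your pipeline stores its logs.

grep "Installing hashicorp/kubernetes" successful-build.log failed-build.log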

On Terraform:

  • When there’s no version constraint specified, Terraform will always use the latest provider version. Therefore, it is important to specify the version constraint. It is recommended by Terraform to always use a specific version when using third party modules. For more information on specifying the version constraint, read the documentation from their website.

Recently, I started to build an application with Go. It is a quite simple application that does something very basic and then sends a notification to a Telegram bot. It was obvious to me that this kind of application is well suited to running as a Lambda, so that's where I decided to deploy it once it was working well locally. It turned out I had to solve several issues along the way. Here I share how I solved them, so you don't have to scratch your head when you encounter them.

Attempt 1: Deploy the application through the web interface

For my first attempt at deploying the application, my goal was to make things as simple as possible.
Therefore, I chose to use the web interface, which has an option to upload a zip file, and that's where I began.

Problem: Compiling Go statically

This problem seems to come up quite often, judging from what I see on the internet. The main issue is that some Go libraries use a feature called cgo, which means calling C code from Go, and when this feature is in use the Go compiler will produce a dynamically linked binary.

To solve this problem, it is often as simple as compiling the code into a statically linked binary. Do note that binaries compiled with GCC may still not work, because the glibc version on your machine is often newer than the one in the AWS Lambda environment; at least that was the case for me (I am on a laptop running Manjaro Linux).

I was able to find something called musl-gcc and used it in my compilation:

build:
	CC="musl-gcc" go build --ldflags '-linkmode external -extldflags "-static"' ./main.go

This proved to work fine: once I compiled the binary, zipped it, and uploaded it to Lambda through the interface, everything seemed to work.
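
Before zipping, it is worth checking that the binary really is statically linked; a quick sketch, run from the directory containing the compiled main binary:

file ./main   # should mention "statically linked"
ldd ./main    # should answer "not a dynamic executable"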

Attempt 2: Deploy the application through AWS SAM

It is not efficient to manually upload a zip file through the web interface every time, which is why I started thinking about introducing SAM as a tool to simplify the deployment process. This is when I encountered the second issue.

Problem: Getting SAM to compile the Go program statically

As the heading says, SAM by default compiles the code dynamically, which is why the binary failed to work again, even locally with the command sam local invoke.

Now it’s the time to tell SAM I don’t want dynamically linked binaries. As a matter of fact, none of the article available online has a direct answer to my question, fortunately, I did find an AWS documentation on using custom runtime. Based on this article, a GO program that wants to utilize static linking can have the following template:

Resources:
  HelloGOFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: HelloGO
      Handler: main
      Runtime: go1.x
      MemorySize: 512
      CodeUri: .
    Metadata:
      BuildMethod: makefile

And in the Makefile, the corresponding build target needs to be added:

build-HelloGOFunction:
	CC="musl-gcc" go build --ldflags '-linkmode external -extldflags "-static"' ./main.go
	cp ./main $(ARTIFACTS_DIR)

Doing so makes sure the compiled binary is statically linked and works on Lambda when bundled and uploaded.
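
With that in place, the usual SAM workflow should pick up the makefile build method. Roughly (the function name comes from the template above; deploy flags kept minimal):

sam build                          # runs the build-HelloGOFunction target
sam local invoke HelloGOFunction   # test the statically linked binary locally
sam deploy --guided                # first-time interactive deployment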

Notes:

  • WRT1200AC (this router) contains 2 partitions
  • Flashing the firmware through the LuCI interface actually flashes the new firmware to the inactive partition

Steps:

  • Download the bin file needed for the upgrade
  • Create a list of all the installed packages on the current version using opkg list-installed | cut -f 1 -d ' ' > /root/installed_packages.txt
  • Choose one of the following methods to flash:
    • Flash the file from the LuCI interface
      OR
    • Download the file to /tmp and then flash it using sysupgrade /tmp/*.bin
  • After the flash and reboot, you will be in the partition that you weren't on before the flash. It will have all of your previous configs, but the extroot will not be there.
  • Hopefully, you will already have internet access at this point; if not, go ahead and set up the internet connection.
  • Once your internet is up, you will need to run some commands to install the packages needed for setup:
    • First, install packages that are necessary to set up extroot:
      opkg update && opkg install block-mount kmod-fs-ext4 kmod-usb-storage e2fsprogs kmod-usb-ohci kmod-usb-uhci fdisk
    • In my case I use f2fs for my extroot, which means I need extra packages, such as mkf2fs, to format the drive.
    • Now, format the device you plan to use for extroot; in my case, I ran mkfs.f2fs /dev/sda1 since sda2 was used as swap.
    • At this point, copy the overlay to the newly formatted drive:
      mkdir -p /tmp/introot
      mkdir -p /tmp/extroot
      mount --bind / /tmp/introot
      mount /dev/sda1 /tmp/extroot
      tar -C /tmp/introot -cvf - . | tar -C /tmp/extroot -xf -
      umount /tmp/introot
      umount /tmp/extroot
    • Regenerate fstab using block detect > /etc/config/fstab;
    • Reboot
    • You should now have a working OpenWrt install with extroot (a quick check is sketched after this list). Change /etc/opkg/distfeeds.conf to point at the corresponding upgraded version.
    • Now run opkg upgrade $(opkg list-upgradable | awk '($1 !~ "^kmod|Multiple") {print $1}') to keep base packages up-to-date.
    • And reinstall all your backed-up packages using cat /root/installed_packages.txt | xargs opkg install
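
A rough sanity check after the reboot (sketch only): the overlay should now report the USB drive's capacity rather than the internal flash, and the f2fs partition should show up as mounted.

df -h /overlay   # size should match the USB drive, not the internal flash
block info       # /dev/sda1 should be listed as f2fs and mounted as the overlay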

Because I don't use dnsmasq, once the steps above finish I need to do some extra post-installation steps.

Post-installation (more as a personal note):

  • Remove the odhcpd-ipv6only package and install odhcpd; this ensures IPv4 DHCP functionality, otherwise only IPv6 addresses will be allocated.

Like it or not, 2020 has been a year in which video conferencing is used a lot. For me, most meetings happen in Zoom. Finding the meeting link in the calendar and clicking on it to join has gradually become the new norm, and it is something I really don't like (the fact that clicking the Zoom link brings up your browser instead of Zoom itself, prompting you to click again to open Zoom, is a real pain). As someone who likes to automate things as much as possible, I eventually found a solution that works for me, although several third-party tools are required.

Problem Statement: Automatically join a Zoom call for a meeting scheduled in the calendar, without user interaction (on macOS).

Prerequisite:

  • Alfred
    (Unclear whether you need to be a paid user to create custom workflows; the author is a paid user)
  • zoom-calendar.alfredworkflow
    (Yep, I found this Alfred workflow by chance and based my work and this blog post on it. It is very handy, and I would really like to thank the author for creating it.)
  • Automator
    (The built-in automation app in macOS from Apple)

Solution:

Assuming you have already installed the Alfred app, go to this GitHub repo, follow the instructions given, and install the Alfred workflow.

Once the workflow has been installed, we need to do some tweaking. Add an external trigger to this workflow and give it an ID of 'Automator'.

Now, open up Automator and choose the Calendar Alarm document type.

Copy and paste the following code into the calendar alarm:
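
The snippet from the original post is not reproduced here; the idea is simply to have the Calendar Alarm fire the external trigger created above. A minimal sketch using a Run Shell Script action and osascript might look like this, assuming Alfred 4+ and using the hypothetical bundle ID "com.example.zoom-calendar" (check the workflow's settings for the real one):

# Fire the 'Automator' external trigger of the zoom-calendar workflow
osascript -e 'tell application id "com.runningwithcrayons.Alfred" to run trigger "Automator" in workflow "com.example.zoom-calendar" with argument ""'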

Now comes the tricky part. You first need to export your calendar from the cloud: export it from the Google Calendar website or whatever calendar service you are using.

Then open up your Calendar app and create a new local calendar; give it whatever name you want. In my case, I simply named it Automator. At this point, you can import the ical file exported above.

These two steps are necessary if you want to use the automation for most of your events. If there are only a few events you would like to automate, you can just use the copy function in the Calendar app and paste into the local calendar. In any case, a new local calendar is necessary, otherwise the alarm trigger will not work.

Once you have finished setting up your local calendar, you can start adding the file trigger that will open Zoom for you. To do this, modify the event of your choice, change the alert setting to Custom, choose the 'Open file' option, and then change the dropdown from 'Calendar' to 'Other…'.

Normally, the file you created with the Calendar Alarm will be saved to ~/Library/Workflows/Applications/Calendar, so go ahead and find that folder and choose the file.

At this point, you will have a working version of the calendar automation for this event. If you want it on more events, you will need to repeat the steps of changing the alert for each of the other events.

Future improvements & Alternatives

I have to admit the solution I described above is not perfect, and it requires some steps to set up. Still, once I set it up, everything worked fine for me, and I never need to remember to join a Zoom meeting thanks to this automation.

Some future improvements and/or caveats I found with this method:

  • The events must have the Zoom link somewhere (either the description or the location) for this automation to work.
  • If there are two back-to-back meetings, the automation will fail, because the previous meeting hasn't finished yet and the Alfred workflow will still list it at the top. I haven't found a good solution to this.

There are several alternative ways I can think of:

  • Use Zoom itself: if you are logged into Zoom and allow it to access your calendar, it will show a join button in the app that lets you join the meeting without further clicks.

  • Bookmark the Zoom URL schemes and click on them. This is basically how the workflow works behind the scenes: converting the URL from http to the zoom URL scheme and then opening it. I won't go in depth on how to create a bookmark and convert the links to URL schemes, but Zoom provides a great doc on their schemes here.

As a developer, you will sometimes face weird problems, and it is important to come up with reliable and repeatable ways to solve them, so that when such problems come up again you can find a solution more easily. For me, one of the most useful tools on Unix-like systems is jq, a tool for processing JSON. Let me demonstrate how I use it to solve some problems I encountered at work.

Problem: Convert a JSON file to CSV

Example JSON:

[
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  },
  {
    "title": "This is a song",
    "artist": "This is an artist",
    "album": "This is an album",
    "year": 1989
  }
]

jq code to generate the CSV:

jq -r '(.[0] | keys_unsorted) as $keys | ([$keys] + map([.[ $keys[] ]])) [] | @csv'
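
To run it against a file and save the output (file names here are hypothetical):

jq -r '(.[0] | keys_unsorted) as $keys | ([$keys] + map([.[ $keys[] ]]))[] | @csv' songs.json > songs.csv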

Resulting CSV:

"title","artist","album","year"
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989
"This is a song","This is an artist","This is an album",1989

Problem: Aggregate a JSON object

Example JSON:

{
  "A": [{ "Name": "A1" }, { "Name": "A2" }],
  "B": [{ "Name": "B1" }, { "Name": "B2" }],
  "C": [{ "Name": "C" }]
}

The goal is to produce something like below:

{ "A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C"] }

It transforms the object and aggregates (or compresses?) the entries by their "Name" property. I know this can easily be done with JavaScript, but jq and bash are more widely available and come in handy when JavaScript is not an option.

The jq code I came up with is as follows:

jq '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add'
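
Invoked against a file (again, a hypothetical name), it prints the aggregated object:

jq '[keys_unsorted[] as $k|{($k): [.[$k][]|.Name]}]|add' groups.json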


What would be the easiest way to remove all but one word from the end of each line in Vim?

Today, I found a challenge on vimgolf and thought it might be interesting to share my solutions.

Here is the original file:

abcd 1 erfg 7721231
acd 2 erfgasd 324321
acd 3 erfgsdasd 23432
abcd 5 erfgdsad 123556

Here is the desired result:

7721231
324321
23432
123556

The challenge is quite straightforward: delete everything on each line except the last word (the number). I found several ways to tackle this challenge, so let me show you all of them.

Take 1: Use macro only

Record a macro and play it back, so the keystrokes would be:

qa $b d0 j q
4@a

qa starts recording a macro into register a
$ moves the cursor to the end of the line
b moves back to the beginning of the last word (the number)
d0 means *d*elete from the cursor to the beginning of the line
j moves the cursor down one line
q finishes recording the macro
4@a replays the macro in register a 4 times

Take 2: Use regex and %norm

It's quite obvious that all we want to keep from the original file are the numbers, so the regex is simple to come up with; something as simple as /\d\+$<cr> will do. Once you type this into Vim, all the numbers at the end of the lines will be highlighted. Next you can do:

:%norm dn <cr>

% means apply to the whole file
norm means execute the following keystrokes in normal mode
dn means *d*elete to the *n*ext match

Take 3: No regex, pure %norm

This is the fastest way I could come up with; still not as fast as the top answers on VimGolf, but decent in my opinion. It differs slightly from the option above, but it still uses %norm:

:%norm $bd0 <cr>

% means apply to the whole file
$ moves the cursor to the end of the line
b moves back to the beginning of the last word
d0 means *d*elete from the cursor to the beginning of the line

Takeaways:

  • norm is quite powerful and can be used to achieve complex things that would otherwise require a macro.
  • d, the delete command, is useful in many unexpected ways. Besides the dn and d0 commands mentioned above, which delete to the next match and to the beginning of the line respectively, another useful variation is d4/x, where 4/x means the 4th occurrence of x.

This week, I was tasked with creating the basic infrastructure for one of our new websites. We use Fastly for our CDN and New Relic as a log aggregation tool, and most of our infrastructure is set up using Terraform, a popular infrastructure-as-code (IaC) tool. Terraform supports Fastly through a provider, which also supports shipping logs to New Relic. We needed to customize the log format so that we can find the logs easily in New Relic. As expected, the provider offers a pretty straightforward way to accomplish this: you come up with a proper JSON string that matches the format Fastly describes in their documentation. This seemingly straightforward task ended up taking me some time to debug.

In Terraform, you can create objects, and an object can be converted to a string using the jsonencode function. This is where the pitfall comes in: Fastly's formatting documentation describes an option called "%>s", which represents the final status of the Fastly request. From my perspective, it would definitely be helpful to include this field in the log format. So I added it to my formatting object and then jsonencoded the object. To my surprise, I got an error saying that Fastly failed to parse the formatting option I created, which was quite strange. I then started to debug: I exported TF_LOG="DEBUG", which asks Terraform to print all of its debug logs. There I found that "%>s" had been jsonencoded to "%\u003es" by Terraform, which was causing the error. Why is the ">" sign encoded by Terraform? It turns out it's for backward compatibility with earlier versions of Terraform. According to their documentation:

When encoding strings, this function escapes some characters using Unicode escape sequences: replacing <, >, &, U+2028, and U+2029 with \u003c, \u003e, \u0026, \u2028, and \u2029. This is to preserve compatibility with Terraform 0.11 behavior.

They encode many more characters for you without you realizing it. I really hope they provide a way to opt out of this behavior, or make it opt-in; that would make things much easier for developers.
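
You can reproduce the escaping in isolation with terraform console (a sketch; run it in any initialized Terraform working directory):

echo 'jsonencode({ format = "%>s" })' | terraform console
# prints something like "{\"format\":\"%\u003es\"}" (note the escaped ">")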

Recently, I have been eager to get the Data Saver feature on Android to work automatically for me. My goal is to have Tasker automatically enable the built-in Data Saver when my remaining data is low. Before I came up with this idea, I searched the web to see whether any solution already existed so I wouldn't have to reinvent the wheel. Unfortunately, I couldn't find anything on this topic, so I had to create my own. After a week of testing, things work as I expected, so I decided to share how I did it in this article.

Prerequisite

  • A rooted Android phone – unfortunately, the solution I came up with only works on rooted Android phones; I'm running Android 9.0.
  • A way to check your current data usage – some carriers let you query your data usage through SMS, while others might only offer USSD. I am only covering the SMS case here (my case).

How-to

Data Saver in Android is basically a feature that limits all background data usage. After some digging on the internet, I found that you can turn Data Saver on with the command cmd netpolicy set restrict-background true (requires root). From there it is pretty easy to have Data Saver turned on automatically when my data is low. My way of doing this is as follows:

  • Set a global variable as the threshold for triggering Data Saver.
  • Send an SMS to my carrier every morning at 8 AM and parse the reply to get my remaining data.
  • If my remaining data is lower than the threshold, turn on Data Saver whenever mobile data is in use. This is done with the Run Shell action executing the command mentioned above (see the sketch after this list).
  • Turn Data Saver off automatically when I am connected to WiFi (not strictly necessary, but added just in case).
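
For reference, the underlying commands look like the sketch below when tested over adb; Tasker runs the same thing through its Run Shell action with root.

adb shell su -c 'cmd netpolicy set restrict-background true'    # turn Data Saver on
adb shell su -c 'cmd netpolicy get restrict-background'         # check the current state
adb shell su -c 'cmd netpolicy set restrict-background false'   # turn it back off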

It is quite easy to do this when your carrier allows you to query data usage via SMS; with USSD, however, things are not that easy, and unfortunately I haven't figured out a way yet.