Wednesday, 14 March 2018

Hacking through my brandless Chinese Smartwatch

Recently, my cousin gave me a smartwatch he had got from a friend who went to China (to order smartwatches for their company's staff) and brought back some sample smartwatches from the manufacturer.
So I ended up with one of those sample smartwatches (which doesn't even have a brand name on it).

As usual, my hobby of hacking things kicked in, and I thought of looking into what could be modified on it. It seems to have an Android Wear-like firmware (maybe a minimalistic one).

I thought of starting by accessing it via ADB (as with Android phones), but it didn't have that feature. I connected the smartwatch to my computer via USB. It showed me two modes: Mass Storage Mode & COM Port. (ADB didn't work in either mode; lol yes, it's obvious.)

Since it had a COM port, I realized serial communication with the flash in the SoC is possible, and thought about how to get started, or at least find out a few details about the smartwatch.

Then I wondered: what if there are secret codes that reveal information, just like on regular mobile phones?

So I opened up the dialer and entered the standard secret code to see IMEI: *#06#
And kaboom! It worked. So I googled "Chinese Smartwatch secret codes" and came across this page:
LIST OF DZ09 SMARTWATCH SECRET CODES

I saw a few secret codes there.
Initially, I tried: *#00000000#

It returned four options: Game Center, SSC Info, QQ and WeChat.

I clicked 'SSC Info', and it returned the following details:


MTK Soft Ver:0x1303
MTK HW Ver:Unknown
Ver:0x74
UsrId:0
Os:MTK60D
OsVern:
Model:QW_MJW_M10_MB_V
Company:F066
Width:0,Height:0
MaxRam:645120
Kbd:0
TouchScreen:1
Cap:0x40010
Macro:
FAE: liujun
Build Date: 20180108
Build Time: 2018/01/08 21:03


From this information, I figured I could search for something like "MTK60D FAE: liujun" to find out more about the smartwatch or related devices.

It took me to this link: DZ09 Smartwatch - XDA Forums

I saw a smartwatch there, similar to mine (but not the same).
It seems there are local MTK-based Chinese smartwatches (as of the time of writing) that are clones of a smartwatch called the DZ-09, and the clones might be using the MTK processor MT6261DA.

(Also, though the boot logo of the smartwatch reads 'Android', it does not run Android OS; it seems to run Nucleus RTOS, MediaTek's proprietary OS ;) )

A user there suggested trying another secret code to get more details about the smartwatch: *#8375#
I tried it and it returned:


[VERSION]
QW_MJW_M10_MB_V2.1_COB_CST016SE_GMSA_A1_EU_IPS_20180108
[BRANCH]: 11CW1352MP GPLUS61A_11C_NX9
BUILD: BUILD_NO
SERIAL#:
[BUILD TIME] 2018/01/08 21:03
[MRE VERSION] 3100
HAL_VERNO: 


Then I decided to flash firmware onto the smartwatch from my computer. So I had to back up the current FW first. There was also a thread that helps with backing up the firmware:
Universal ReadBack Extractor for MTK feature watchphones

It uses a tool, like MTK's famous SP Flash Tool, to create a readback file that captures the ROM's configuration. I followed the instructions there, and had to choose an appropriate scatter_config. There was a link to a collection of firmwares; from there, I had to pick firmwares at random and try them one by one to see which one matched my device's config. I chose the FW of a device running the 6261D, named '-XDA DZ09 mtk6261 from AerogamingHD.rar' (just a random try), loaded the scatter_config from that FW into the tool, and generated the readback file from the NOR flash.
Using that readback file, I was able to dump the entire firmware in my device by using another tool in the same thread.

Now that I have a backup of my stock FW, I can try other firmwares from similar devices, and even if I brick the watch or something goes wrong, I can flash it back 😀
(It reminds me of my days years ago (probably 8 years ago, well before Android became famous), flashing Symbian CFWs onto my Nokia 5230 S60v5 device and constantly bricking it xD )

Will try out new firmwares on my smartwatch and update the experience in this thread soon.

Edit:
Found another secret code to enter Engineer mode: *#993646633# 
But it doesn't seem to provide many options. 😭

And also, it seems the 4 MiB flash ROM has its partitions in some compressed format. I thought it could be squashfs and checked, but it's not :( It uses some proprietary compression algorithm.

I also found many other secret codes in the ROM file that I dumped (using a hex editor), as suggested in the article above.

Edit 2:
I flashed multiple firmwares on it; none of them worked completely.
The touch screen didn't work with most FWs. It did work with a few, but then the screen was inverted and so were the colors.

I've asked an experienced user how these things work and am waiting for a reply.

Monday, 12 March 2018

Introduction to Distributed TensorFlow and sharing any TensorFlow object between sessions across different processes

I'm currently working on reinforcement learning, where there's an algorithm called A3C in which I need a global network, with parallel workers running in different processes that need to update that global network.

Using just tf.Session(), we can't access the nodes (ops, tensors, variables, or whatever) of a TensorFlow session running in a different process.

Enter Distributed Tensorflow. Anything is possible using the power of Distributed TF.

Here's a very friendly article on Distributed TF:
Distributed TensorFlow: A Gentle Introduction

It's highly recommended that you read it before proceeding (or at least skim through it), as my post builds on top of it to make things easy.

Everything is easy when explained with an example. I'll be explaining based on the following cluster configuration:
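The configuration listing was lost from this post; going by the description below (one 'ps' task and four 'worker' tasks), it presumably looked something like this sketch (the localhost ports are my assumption):

```python
# Cluster configuration: one 'ps' task (the global network) and four
# 'worker' tasks. The host:port addresses are placeholders.
jobs = {
    'ps': ['localhost:2222'],
    'worker': ['localhost:2223', 'localhost:2224',
               'localhost:2225', 'localhost:2226'],
}
```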

Here, we have 2 jobs, namely 'worker' (the worker that'll update the global network) and 'ps' (let's call it the parameter server, which contains the global network). The workers can run tasks on four different servers (as can be seen from the cluster config), and there's a single server for the global network. (You can design it as you see fit for your application)

We create a cluster object by:
cluster = tf.train.ClusterSpec(jobs)

So, we can create our network on the parameter server (from the current program). But how do you reference it from the other workers, so as to access the shared tensors, variables, or ops?

For example, consider the following scenario:


If you refer to the TensorFlow docs, you can see the return type of any call. For instance, you'll find that the variables var, state, conv1, and train_op are of types tf.Variable, tf.Tensor, tf.Tensor, and tf.Operation respectively.

Also, note that these are created on the server "/job:ps/task:0", which is the only place from which you can access them directly.

" Okay, how do refer/access it from other processes?"

Each object created above has a member named 'name', which can be accessed like var.name, state.name, conv1.name, or train_op.name. It returns a string giving the full reference name of the object, including the variable_scopes (if used). You may print it out and check the result. For instance, conv1.name will return something like 'Conv/conv/Elu:0'.

You need to store those names so that you can refer to the objects from anywhere.

I have created a simple class to do that (You may extend it however you want):
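The class listing did not survive; here is a minimal sketch of what such a name store could look like (the class and method names are my own):

```python
class SharedRefs:
    """Stores the .name strings of shared TF objects, so any process can
    look them up later with get_tensor_by_name / get_operation_by_name."""

    def __init__(self):
        self._names = {}

    def add(self, key, tf_object):
        # Works for tf.Variable, tf.Tensor and tf.Operation alike --
        # they all expose a .name attribute.
        self._names[key] = tf_object.name

    def get(self, key):
        return self._names[key]
```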


The code is self-explanatory I guess.

So you can now create an object that has all such reference names.
And you can add all your reference names to the object like:
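The listing here is also lost; a sketch of storing the names and recovering the handles later (a plain dict stands in for the holder object, and the op names are my own illustration):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

var = tf.Variable(0.0, name='var')
train_op = tf.assign_add(var, 1.0, name='inc').op

# Store each shared object's reference name (the keys are arbitrary
# labels of my choosing):
refs = {'var': var.name, 'train_op': train_op.name}

# Later -- e.g. in a worker that has rebuilt the same graph -- the
# stored names recover the handles:
g = tf.get_default_graph()
var_again = g.get_tensor_by_name(refs['var'])
train_op_again = g.get_operation_by_name(refs['train_op'])
```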




Now that you have the objects' references that can be shared, you can start your workers.

You can spawn processes from a single program using the multiprocessing module (or you may already know your own way around this).





Wondering what finish_counter is? We'll get to that soon..


Let's see how workers can be implemented. Say you have a function like this that each worker process executes:
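The worker listing is gone; here is a sketch of what it plausibly looked like (the cluster addresses, graph names, and iteration count are my assumptions; TensorFlow is imported inside the function so each spawned process gets its own graph):

```python
def worker_function(task_index, refs, finish_counter):
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    cluster = tf.train.ClusterSpec({
        'ps': ['localhost:2222'],
        'worker': ['localhost:2223', 'localhost:2224',
                   'localhost:2225', 'localhost:2226'],
    })
    # Join the cluster as /job:worker/task:<task_index>
    server = tf.train.Server(cluster, job_name='worker',
                             task_index=task_index)

    # Rebuild the same graph (same names) pinned to the ps device, so
    # the variable itself lives -- and is shared -- on the ps:
    with tf.device("/job:ps/task:0"):
        var = tf.Variable(0.0, name='var')
        tf.assign_add(var, 1.0, name='inc')

    with tf.Session(target=server.target) as sess:
        sess.run(tf.global_variables_initializer())
        # The stored reference names recover the handle to run:
        inc = tf.get_default_graph().get_operation_by_name(
            refs['train_op'])
        for _ in range(100):
            sess.run(inc)

    server.join()   # never returns -- see the discussion below
```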


If you read the article I linked above, you may have noticed there's no way to decide when to terminate all these processes, since the server.join() call (after the processing is complete) blocks the process. (And the server just can't be killed like that; other tasks may depend on it.)

So I thought of creating a shared class (with shared memory) that acts as a counter, telling the number of servers that have finished processing and are waiting to be killed.

To create a shared object:
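The listing is missing; a sketch of such a process-shared counter, built on multiprocessing.Value (the class and method names are my own):

```python
import multiprocessing as mp

class FinishCounter:
    """Counts, across processes, how many workers have finished and are
    now blocked in server.join(), waiting to be killed."""

    def __init__(self):
        self._count = mp.Value('i', 0)   # shared-memory integer

    def increment(self):
        with self._count.get_lock():     # avoid lost updates
            self._count.value += 1

    @property
    def value(self):
        return self._count.value

finish_counter = FinishCounter()
```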

(This is the finish_counter that was passed when processes were created, remember? ;) )

So, the end of the worker_function() should look like:
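The snippet is lost; presumably it just bumped the counter before blocking, something like this (wrapped in a function here so it stands alone; increment is my naming):

```python
def finish_and_wait(finish_counter, server):
    # Tail of worker_function(): signal completion *before* the blocking
    # call, because server.join() never returns.
    finish_counter.increment()
    server.join()
```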



The main program's ending should look like:
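The closing listing is also missing; the idea described above suggests a polling loop like this (the function name and poll interval are my assumptions):

```python
import time

def terminate_when_done(finish_counter, procs, num_workers, poll_secs=1.0):
    # Main program's ending: wait until every worker has bumped the
    # counter, then kill the processes stuck inside server.join().
    while finish_counter.value < num_workers:
        time.sleep(poll_secs)
    for p in procs:
        p.terminate()
        p.join()
```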




That's it. You've now implemented Distributed TF with a parameter server and multiple workers.
You can extend this solution according to your application needs.