IoT from low to high level, from Microcontrollers to Servers: An Experience

By Leo Ling

Background

At the time of writing this blog, I am wrapping up the second week of my internship. I have been moving across the layers of abstraction of computational devices, starting from my solid-state electrical engineering background up to the management of servers, where the hardware itself is abstracted away behind concurrency. Projects are generally restricted to a few layers of abstraction, but I have found IoT to be one of those fields where you work across many of them. On this point, I remember a presentation by Jeehwan Kim, a materials science professor at MIT. Part of his work focuses on the growth of transferable 2D films that allow for flexible and thin electronics. Kim proposes stacking individual layers of these films as a novel way of building high-performance thin-film electronic devices, and puts forth the possibility of producing heterointegrated, flexible IoT systems this way. Even at the materials science level of electronics, there is interest in incorporating IoT.

To me, the interest in IoT is related to the increasing strain on Moore's law: it is no longer feasible to expect ever-increasing device density and computational power at lower prices. So, instead of increasing the computational power of individual devices, IoT offloads the computational workload to a centralized server. In addition, writing algorithms for one centralized device is usually easier than trying to get multiple smaller devices to converge on one solution. This, combined with our existing internet infrastructure, presents a compelling reason to connect devices to the internet.

Working with Microcontrollers

Small sensors are usually built with cost in mind. Overhead is therefore minimal, and users are expected to interface with them using low-level programming languages. Working with these sensors and the Arduino microcontrollers that read them is where the bulk of my C and C++ experience originated. The simplest sensors expose plain analog or binary outputs; more complex sensors might employ communication protocols such as SPI, I2C, or CAN.
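To make this concrete, here is a minimal Arduino-style sketch showing both kinds of reads. The analog pin, the I2C address 0x48, and the two-byte register layout are hypothetical placeholders for whatever sensor you actually wire up.

```cpp
#include <Wire.h>

const int ANALOG_PIN = A0;            // simple analog sensor (e.g., a thermistor divider)
const uint8_t I2C_SENSOR_ADDR = 0x48; // hypothetical I2C sensor address

void setup() {
  Serial.begin(9600);
  Wire.begin();                       // join the I2C bus as master
}

void loop() {
  // Analog sensor: the ADC returns a raw 10-bit value (0-1023 on most AVR boards)
  int raw = analogRead(ANALOG_PIN);

  // I2C sensor: request two bytes and combine them into one 16-bit reading
  Wire.requestFrom(I2C_SENSOR_ADDR, (uint8_t)2);
  int16_t reading = 0;
  if (Wire.available() >= 2) {
    reading = (Wire.read() << 8) | Wire.read();
  }

  Serial.print("analog=");
  Serial.print(raw);
  Serial.print(" i2c=");
  Serial.println(reading);
  delay(1000);
}
```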

Most boards require a Wi-Fi module to interface with the internet. Some Arduino boards have enough computational resources to initiate HTTPS or TLS-secured MQTT connections, which ensures a level of security when communicating with the internet at large. At the scale of microcontrollers, the computational cost of encryption becomes significant. While the Arduino ecosystem usually has libraries to handle these application-layer protocols, more barebones microcontrollers will require you to implement the underlying protocols yourself.
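As a sketch of what that looks like in practice, the following assumes an ESP32-class board and the third-party PubSubClient MQTT library; the network credentials, broker hostname, and topic are all placeholders, and a real deployment would pin the broker's CA certificate rather than calling setInsecure().

```cpp
#include <WiFi.h>
#include <WiFiClientSecure.h>
#include <PubSubClient.h>

const char* WIFI_SSID = "my-network";         // placeholder credentials
const char* WIFI_PASS = "my-password";
const char* MQTT_HOST = "broker.example.com"; // hypothetical broker
const int   MQTT_PORT = 8883;                 // standard MQTT-over-TLS port

WiFiClientSecure tls;
PubSubClient mqtt(tls);

void setup() {
  Serial.begin(115200);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);        // wait for the Wi-Fi module to associate
  }
  tls.setInsecure();   // demo only: skips certificate verification
  mqtt.setServer(MQTT_HOST, MQTT_PORT);
}

void loop() {
  if (!mqtt.connected()) {
    mqtt.connect("sensor-node-01");            // client ID, placeholder
  }
  mqtt.publish("sensors/temperature", "23.5"); // topic and payload are examples
  mqtt.loop();                                 // service the MQTT connection
  delay(5000);
}
```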

Security-wise, we must worry about both the microcontroller and its connection to the network. Communication between the microcontroller and sensor could only be obfuscated with manufacturer coordination, and doing so would only slow down the microcontroller. Besides, if an attacker is able to access the physical link between your sensors and your microcontrollers, you have bigger problems than compromised information. The bigger threat is someone accessing these microcontrollers remotely.

Working with Servers

After setting up the microcontrollers and connecting them to the internet, you have to set up where the information is sent. Previously, I had relied on services like Google Firebase, where the implementation is handled for you: you send REST requests to a server, and it magically updates some value that you can access from the internet. Now, I am beginning to see some of the backend server implementations that make up this nebulous cloud. Thankfully, a lot of the physical server management is abstracted away by various services, Microsoft Azure among them.
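For reference, the device side of that Firebase-style workflow can be as small as one HTTP POST. Here is a rough sketch assuming an ESP32 and its HTTPClient library; the endpoint URL and JSON body are made up.

```cpp
#include <WiFi.h>
#include <WiFiClientSecure.h>
#include <HTTPClient.h>

// Assumes Wi-Fi is already connected, as in the earlier sketch.
void uploadReading(float temperature) {
  WiFiClientSecure tls;
  tls.setInsecure();  // demo only: skip certificate verification

  HTTPClient http;
  // Hypothetical endpoint; a Firebase Realtime Database URL looks similar.
  http.begin(tls, "https://example-project.firebaseio.com/readings.json");
  http.addHeader("Content-Type", "application/json");

  String body = String("{\"temperature\":") + String(temperature) + "}";
  int status = http.POST(body);  // HTTP status code, or a negative error

  Serial.print("POST returned ");
  Serial.println(status);
  http.end();                    // release the connection
}
```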

I am beginning to understand the features of cloud computing that make it attractive. The ability to scale processing power up and down is particularly significant when you are selling devices that will connect back to your network. Expanding physical servers is a costly process, so virtual machines have become more and more popular for deployment and consistent performance. Depending on the design of the server backend, you could even take advantage of containers, which require even less overhead. If nothing else, individual companies can now worry less about keeping clients' personal data secure on premises. Instead, the data is stored in data centers that are already focused on, and up to date with, security.

For me, it is interesting to see the other side of the picture, and to get some idea of what is going on behind the server when you upload to it.
