Taking into consideration BR 1, BR 2, BR 4, and BR 14, one of the collective goals is to set up a visualization and monitoring service for the project. In this scenario, the System Under Management would be the environment station together with all the sensors it encompasses: a temperature and humidity sensor, an air pressure sensor, an air quality sensor, a UV intensity sensor, and a light intensity sensor. An example of the environment station from the project is shown in the figure below.
In monitoring analytics, the data from the environment station would be passed via the gateway to the AWS Kinesis streams service, which would be responsible for transporting the collected data to the AWS broker on the backend. This would trigger an AWS Lambda function that duplicates the data and sends it to two separate Kinesis streams. One path would be responsible for real-time analytics, whereas the other would be responsible for storing the data for predictive and query analytics. In either scenario, the processed data would be passed on to Amazon QuickSight or Grafana for visualization and other trend-related predictions.
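The fan-out step described above can be sketched as a Kinesis-triggered Lambda handler. This is a minimal illustration, not the project's actual code: the stream names and the `device_id` field are assumptions, and the Kinesis client is passed in so the routing logic can be exercised without AWS access.

```python
import base64
import json

# Hypothetical downstream stream names -- placeholders, not project config.
REALTIME_STREAM = "env-realtime"
PREDICTIVE_STREAM = "env-predictive"

def decode_kinesis_records(event):
    """Decode the base64-encoded payloads a Kinesis-triggered Lambda receives."""
    return [
        json.loads(base64.b64decode(record["kinesis"]["data"]))
        for record in event["Records"]
    ]

def handler(event, kinesis_client):
    """Duplicate each incoming reading onto both downstream streams."""
    for reading in decode_kinesis_records(event):
        payload = json.dumps(reading).encode()
        for stream in (REALTIME_STREAM, PREDICTIVE_STREAM):
            kinesis_client.put_record(
                StreamName=stream,
                Data=payload,
                PartitionKey=reading["device_id"],  # assumed field name
            )
```

In production the client would be `boto3.client("kinesis")`; injecting it here keeps the duplication logic testable in isolation.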
Referring to the figure above, in the monitoring analytics portion of the model, information would be transported from the devices to the AWS environment through the Kinesis service. The predictive processing would happen in the “processing section” of the model, where the Lambda function would store the data in Amazon DynamoDB, from which the trend analysis would be run.
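Storing a reading in DynamoDB requires shaping it into an item first. The sketch below assumes an illustrative key schema (`device_id` as partition key, an ISO-8601 `timestamp` as sort key), which is not taken from the project's actual table definition; it also handles the boto3 resource API's requirement that numeric values be `Decimal` rather than `float`.

```python
from decimal import Decimal

def to_dynamodb_item(reading):
    """Shape one sensor reading into a DynamoDB item keyed by device and time.

    Attribute names are illustrative assumptions. The boto3 DynamoDB resource
    API rejects Python floats, so numbers are converted to Decimal.
    """
    item = {
        "device_id": reading["device_id"],  # assumed partition key
        "ts": reading["timestamp"],         # assumed sort key (ISO-8601 string)
    }
    for key, value in reading.items():
        if key in ("device_id", "timestamp"):
            continue
        item[key] = Decimal(str(value)) if isinstance(value, float) else value
    return item
```

The item would then be written with `table.put_item(Item=item)` inside the Lambda function.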
An example of the predictive analysis would use the air quality sensors deployed in the house. The network of sensors could be used to predict whether a particular room will reach a high concentration of a particular gas (carbon monoxide, carbon dioxide, ammonia, nitrogen dioxide, etc.). This would not mean simply checking the current values against a threshold, but predicting whether the rate at which a gas’s concentration is increasing would cause it to breach the threshold levels, and if so, presenting that prediction for the user to see.
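The rate-based check described above can be illustrated with a simple least-squares trend fit over recent samples. This is a sketch of the idea, not the project's model; the sample format `(seconds, concentration)` and the horizon parameter are assumptions.

```python
def predict_breach(samples, threshold, horizon):
    """Fit a least-squares line to (time_s, concentration) samples and report
    whether the trend crosses `threshold` within `horizon` seconds.

    Returns the predicted seconds until the breach, or None if the
    concentration is flat/falling or the breach lies beyond the horizon.
    """
    n = len(samples)
    if n < 2:
        return None
    mean_t = sum(t for t, _ in samples) / n
    mean_c = sum(c for _, c in samples) / n
    cov = sum((t - mean_t) * (c - mean_c) for t, c in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = cov / var
    if slope <= 0:
        return None          # concentration steady or falling: no breach
    latest_t, latest_c = samples[-1]
    if latest_c >= threshold:
        return 0.0           # already over the limit
    eta = (threshold - latest_c) / slope
    return eta if eta <= horizon else None
```

For example, CO₂ readings rising by 2 ppm per minute with the threshold 6 ppm away would predict a breach in roughly three minutes, which the dashboard could surface before any absolute threshold is actually crossed.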
In the acting analytics portion of the project, the analytics processed by the AWS Lambda function would result in signals being sent back to the devices in the home network. These devices would be subscribed to the topics to which the Lambda function publishes. The information would again be transported via the Kinesis service back to the devices in the network. The devices receiving these signals would be the ones with integrated actuators, for example, the irrigation system, smart switches, and smart lights.
In the figure below, we outline a simple AWS architecture for the project. It illustrates how the data would reach the cloud for processing. In this example, we consider the environment station, which would send values to the AWS environment.
The environment station would send data to the AWS environment with the help of the AWS Kinesis service. Once in the AWS environment, an AWS Lambda function would route the data along two different paths: processing data for real-time analytics, and for predictive analysis. For real-time analytics, the data would be stored in AWS S3 buckets and also sent to Amazon Kinesis Analytics for monitoring analytics. Once the analytics is processed, the processed data would be sent to either Grafana or Amazon QuickSight. For predictive analytics, the collected data would be stored in Amazon DynamoDB for query-based requests. Once the required data is collected, it would be sent to an AWS machine learning service for further predictive processing.
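The real-time path's aggregation step can be illustrated with a tumbling-window average, the kind of windowed computation Kinesis Analytics performs before the result reaches a dashboard. The window length and the `(timestamp, value)` record shape are assumptions made for the sketch.

```python
from collections import defaultdict

def tumbling_window_averages(readings, window_s=60):
    """Group (timestamp_s, value) readings into fixed tumbling windows and
    average each window -- a stand-in for the aggregation Kinesis Analytics
    would run before pushing results to Grafana or QuickSight.
    """
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[int(ts // window_s)].append(value)
    return {
        window * window_s: sum(values) / len(values)
        for window, values in sorted(buckets.items())
    }
```

Each key in the result is the window's start time, so the dashboard can plot one averaged point per minute instead of every raw sample.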
Here, we describe the two sets of computation that would take place in the project, depending on the use case of the device. The first set of computation would primarily happen at the edge, whereas the second would take place in the cloud.
While choosing the cloud service that would be optimal for the project, we examined the Service Level Agreements of AWS IoT Core, Google Cloud IoT Core, and Azure IoT Hub to make an informed decision. Regarding server downtime, each of the cloud services offered an equal amount of credit refunds for similar percentages of downtime; the only difference was the maximum credit, with Google offering the highest at 50% for less than 99% uptime.
We were attracted to the wide range of services offered by the AWS environment compared to those provided by Google and Azure. The pricing for the IoT Core services was also very competitive and was found to be more cost effective than the other two.