AI-Embedded Flexible Energy Grid: implementation deep dive
2023-12-19 | blogs.sap.com

This blog is the continuation of the previous blog post, where we explained how SAP BTP can support the transformation the Utilities sector is undergoing by implementing AI solutions that enhance communication between energy providers and prosumers. Here we move from theory to practice and discuss how the components of the solution can be implemented.

To follow the content of both blog posts, it helps to be familiar with some SAP products, in particular SAP BTP and SAP AI Core, to have some knowledge of machine learning and data science, and to have a basic understanding of the Internet of Things.

Please note that these blog posts are part of a series of technical enablement sessions on SAP BTP for Industries. Check the full calendar here to watch the recordings of past sessions and register for upcoming ones! The replay of the session related to these blog posts, and of all the other sessions, is available here.

Authors: Alice Magnani, Jacob Tan, Cesare Calabria

Implementation Deep Dive

In the previous blog post we explained how AI can help the energy and utilities sector solve one of its most critical challenges, namely the coordination between prosumers and energy operators. As explained, the full solution could in principle include a series of artificial intelligence components (see Fig. 1) to predict energy supply and demand and to optimize energy distribution and the use of household appliances on the prosumer side. In this use case we will not cover all of these pieces, but will focus on forecasting prosumer energy demand for the next day. This blog is therefore dedicated to discussing the technical implementation in SAP BTP of this small but crucial piece in more detail.


Figure 1: Solution architecture highlighting in-scope (green) and out-of-scope (violet) components.

Artificial Intelligence with IoT

Let’s focus on how Artificial Intelligence can be integrated into an IoT architecture. As already mentioned in the first blog post, there are two possible strategies: the traditional IoT architecture and the IoT Edge architecture. This holds not only for our energy grid scenario, but for any IoT application that requires artificial intelligence.

Let’s analyze the traditional approach first (Fig. 2). In this case, sensors take measurements and stream them to the cloud, to an IoT Hub: a message hub that securely handles the communication with the sensors. From the IoT Hub, the collected data can be stored in a data storage unit and used to develop a machine learning model. The trained model can then run inference on fresh data collected from the sensors and make predictions, whose results can be consumed by any downstream application.

Figure 2: AI with IoT, high level view of the traditional architecture.

The traditional IoT architecture works well in many scenarios, but it has some limitations. For instance, it might be challenging, costly and sometimes pointless to store all the data collected from the sensors in the cloud. It also does not work well where there is no stable internet connection allowing the sensors to continuously stream their data to the cloud. Moreover, it is not suitable when one needs to react very quickly to the inference results, because inferencing in the cloud introduces latency. Lastly, when the sensors collect sensitive data, streaming it to the cloud raises data privacy concerns, as is the case, for instance, with our granular smart meter data.

In scenarios where the limitations above become relevant, one can consider the edge architecture (Fig. 3). In this case, we assume there is a data storage in the cloud collecting historical data that can be used to develop a machine learning model. The trained model, instead of being used for inferencing on new data in the cloud, is deployed to the edge. The sensors are part of an IoT device with computing capabilities, so the model can run inference directly on the device. The inference results, or any relevant data collected from the sensors, can be communicated to the cloud via the IoT Hub, and from there persisted into the data storage or consumed by any downstream application.


Figure 3: AI with IoT, high level view of the edge architecture.

Let’s see what these two options look like in our energy grid scenario, where we want to implement an energy demand forecast. With the traditional architecture (Fig. 4), smart meters at the edge stream energy consumption measurements to SAP Cloud for Energy (C4E), which acts as an IoT hub to collect these readings. We can use SAP AI Core to develop our forecasting model, to run inference on new data, and to send the results, for instance, to the energy planner application.

Figure 4: Energy demand forecast with the traditional architecture.

Now let’s have a look at the edge architecture (Fig. 5). In this case we assume there is a database with some historical smart meter readings. For instance, we can imagine that a sample of prosumers has agreed to share their data with the energy provider for the purpose of developing a machine learning model. We can then use AI Core to train our model on these data, and the trained model can be deployed into the smart home device. In this version of the architecture, we need to borrow the IoT Hub from a third-party IoT platform, for instance Azure, AWS or Google Cloud. From the IoT Hub, we can consume the inference results in BTP.


Figure 5: Energy demand forecast with the edge architecture.

For our prototype, we want to leverage the high granularity of the smart meters. Looking in more detail at the sketch above, you can see that, besides the IoT Hub, we need to consume a couple of additional third-party services.

Let’s introduce these services. First of all, the trained model and the code to run inference with it need to be wrapped in a Docker image. This Docker image is uploaded into a Docker registry in the cloud; the IoT Hub then pulls it from the registry and injects it into the IoT device, where the Docker container runs to produce forecasts. For security reasons, we need to use the Docker container registry of the same platform as the IoT Hub.

Then, when the home device produces the energy demand forecast, it communicates it back to the cloud, to the IoT Hub, which, for security reasons, cannot be reached directly from BTP. We first need to route the data coming from the IoT Hub to a third service within the IoT platform, for instance a cloud storage service. From there, BTP can take the data and persist it in a data storage, or it can be consumed by an energy planner application. These are the three services we used in our prototype. We implemented it with Microsoft Azure, so we use the Azure Container Registry, the Azure IoT Hub and Azure Blob Storage. Microsoft Azure is also the platform we use to handle the runtime inside the smart home device, through the Azure IoT Edge runtime.

Model development

Let’s now talk about the sample model we built in SAP AI Core to produce the prosumer energy demand forecast. Whenever we talk about machine learning model development, everything starts from the data. For our example, we used a public dataset called AMPds2: The Almanac of Minutely Power dataset.

We imagine two input data sources in this scenario (Fig. 6). The first one is obviously the data coming from the smart meters. Below you can see a screenshot of how these raw data might look for one prosumer: each column represents the energy consumption of a particular appliance. The second data source, used to improve our prediction, is weather data; in our case, we used hourly temperature and humidity data for a location close to our prosumer’s place.


Figure 6: Raw data for the energy demand forecast model.

To develop our model, we need to process these data in a data preparation stage. The goal is to extract predictors, that is to say, variables that are meaningful for estimating the future energy demand of our prosumers. Below (Fig. 7) you can see a screenshot of the data set we created and feed to our machine learning model. There are different kinds of predictors:

  • a first group of predictors summarizes the prosumer’s energy consumption over the last days or weeks in an aggregated way, such as the total electrical load of the prosumer’s house in the last hour, the last two hours, the last day or the last week;
  • another group of predictors describes the activities that have been going on in the prosumer’s house in the last days or weeks, such as the number of times the prosumer has activated the washing machine or the dishwasher in the last 24 hours, or how often the heat pump has been switched on in the last days or weeks. These predictors are calculated from the granular smart meter readings and are sensitive information; the need to avoid sharing these data with the energy provider in the cloud is what justifies our choice to explore the edge architecture;
  • finally, there are predictors extracted from the weather data, for instance the average temperature and humidity over the last days or weeks.

The last columns in our data set are the target variables. They represent the energy consumption of our prosumer throughout the next day and are what we want to forecast with our model. Since we want to predict the energy consumption with hourly granularity, we have 24 target variables.


Figure 7: Data preparation for the energy demand forecast model.
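To make the data preparation stage more concrete, here is a minimal sketch, assuming hourly resampled and datetime-indexed readings, of how such predictors and the 24 hourly targets could be derived with pandas. The column names (total_load, washing_machine, temperature, humidity) are illustrative placeholders, not the actual AMPds2 schema.

```python
# Hedged sketch of the data preparation stage: derive lagged-load, appliance-
# activity and weather predictors plus 24 hourly targets from hourly smart
# meter readings and weather data. Column names are illustrative placeholders.
import pandas as pd

def build_features(readings: pd.DataFrame, weather: pd.DataFrame) -> pd.DataFrame:
    df = readings.join(weather, how="inner")  # both assumed hourly, datetime-indexed

    # Aggregated load history: total consumption over the last 1h, 24h and 7 days
    df["load_last_1h"] = df["total_load"].shift(1)
    df["load_last_24h"] = df["total_load"].rolling("24h").sum().shift(1)
    df["load_last_7d"] = df["total_load"].rolling("7d").sum().shift(1)

    # Appliance activity: how often the washing machine ran in the last 24 hours
    running = (df["washing_machine"] > 0).astype(int)
    df["wm_runs_last_24h"] = running.rolling("24h").sum().shift(1)

    # Weather predictors: average temperature and humidity over the last 24 hours
    df["temp_avg_24h"] = df["temperature"].rolling("24h").mean().shift(1)
    df["hum_avg_24h"] = df["humidity"].rolling("24h").mean().shift(1)

    # Targets: the prosumer's total load for each of the next 24 hours
    for h in range(1, 25):
        df[f"target_h{h:02d}"] = df["total_load"].shift(-h)

    return df.dropna()
```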

The tool we use to develop our machine learning model is SAP AI Core; Fig. 8 gives a very quick overview of the solution. SAP AI Core is a runtime to develop and deploy machine learning applications at scale. It comes with a user interface called SAP AI Launchpad and is based on Kubernetes under the hood.


Figure 8: AI Core workflow.

There are two kinds of objects one can create and run in SAP AI Core: executions and deployments. We talk about executions whenever we have some code that we want to run in our cloud runtime. Typically, executions are used to train machine learning models, but they can also serve other purposes: for instance, you can build an execution for the data preparation stages, or to send and retrieve messages to and from the smart home edge devices, as we will see later.

We create deployments, instead, when we want to spin up a server that keeps listening for input data, typically fresh data that needs to be inferenced by a trained machine learning model. We will not use deployments in SAP AI Core in our use case, because our model is going to be deployed on the edge device.

In any case, whatever we want to run in SAP AI Core, the code needs to be wrapped into a Docker image and uploaded into a Docker registry. Any input data should be uploaded into a cloud object store; in our case we use Microsoft Azure Blob Storage, but there are other choices. The object store is also used by SAP AI Core to store any output of our code. Additionally, we need a GitHub repository connected to SAP AI Core containing some “templates”. Templates instruct AI Core on exactly how we want to run our code; they are synchronized with AI Core and produce our executions or deployments.
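To give an idea of what ends up inside the Docker image for a training execution, here is a minimal, illustrative training script sketch. The artifact paths and the choice of a random forest regressor are assumptions made for the example; the real paths must match what the workflow template in the connected Git repository declares, and the actual model we built may differ.

```python
# Hedged sketch of the training code wrapped in the Docker image for an
# AI Core execution. The artifact paths and the random forest choice are
# assumptions; the real paths must match the workflow template.
import pickle

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

INPUT_PATH = "/app/data/training_set.csv"   # mounted by AI Core per the template
MODEL_PATH = "/app/model/demand_forecast.pkl"

def main() -> None:
    df = pd.read_csv(INPUT_PATH, index_col=0)
    target_cols = [c for c in df.columns if c.startswith("target_h")]
    X, y = df.drop(columns=target_cols), df[target_cols]

    # One multi-output regressor covering all 24 hourly targets (illustrative)
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X, y)

    with open(MODEL_PATH, "wb") as f:
        pickle.dump(model, f)

if __name__ == "__main__":
    main()
```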

In the video linked below you can see how we built the sample model for energy demand forecasting that we trained in AI Core. For more detailed information on how SAP AI Core works, and for step-by-step guides and tutorials, you can refer to the links at the end of this blog post.

Model deployment at the edge

Now that we’ve seen how to prepare an energy demand forecasting model, we are ready to delve into deploying it directly to edge devices connected to the Azure IoT platform. To do that, we made use of the following tools:

  • Visual Studio Code;
  • an Azure IoT Hub, the central hub for managing IoT devices;
  • Azure Container Registry, a secure repository for storing Docker container images.

For the deployment of the energy demand forecast model at the edge we need to develop at least two modules:

  1. the “predictor module”, a Flask application that serves as the gateway to the demand forecasting model;
  2. the “transporter module”, which takes charge of managing incoming messages and interacting with the model through the endpoint created by the deployment of the predictor.


Figure 9: Workflow of the model deployment at the edge.

Both modules run in the edge runtime as Docker containers, so they first need to be packaged as Docker images and made available through the Azure Container Registry (Fig. 9).
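As a rough idea of what the predictor module could look like, here is a minimal Flask sketch that loads the trained model baked into the image and exposes it behind an HTTP endpoint. The route name, port and model path are assumptions and may differ from the module shown in the video.

```python
# Hedged sketch of the "predictor" module: a Flask app that wraps the trained
# model behind an HTTP endpoint. Route name, port and model path are assumptions.
import pickle

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

with open("/app/model/demand_forecast.pkl", "rb") as f:  # model baked into the image
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON object with one value per predictor column
    payload = request.get_json()
    features = pd.DataFrame([payload])[model.feature_names_in_]  # enforce column order
    forecast = model.predict(features)[0]  # 24 hourly values
    return jsonify({"hourly_forecast": forecast.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```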

Module development can be done within Visual Studio Code, where you can take advantage of a specific Azure extension to create an IoT template project. From VS Code you can also build and upload the Docker images.

The actual deployment can also be started for each device connected to the IoT Hub from Visual Studio Code, thanks to the Azure extension already mentioned (NB: in real projects, where the number of IoT Hubs and devices can be very high, this procedure would also need to be automated). If you want to know more about the deployment procedure, you can check the video linked below.

Cloud 2 Edge communication

In the previous section we saw how to deploy our machine learning model for the energy demand forecast on each device, on each smart meter. Now we need to understand how to establish communication between the cloud and these edge devices. This is essential because we need to provide the data for the inference step, and we also need to retrieve the output of the inferencing and bring it back to the cloud for further processing. Let’s start with the cloud-to-edge communication (Fig. 10).

What we need is a pipeline that retrieves the needed data from external data sources, such as an external weather service. This pipeline should also be able to retrieve, for instance, technical specifications from tables stored in Datasphere. Moreover, it should prepare this data and ship it to the IoT Hub we created in the IoT platform. From there, the message with all this data is dispatched to all the devices attached to the IoT Hub.


Figure 10: C2E data flow.

We can develop this pipeline in Python, and once we are done we need a place to run it. One possible option is to use the AI Core runtime we already use to train our forecast algorithm: it is already available in the architecture, so there is no need to introduce other products. There we can run code written in the programming language of our choice, because AI Core, being an abstraction over Kubernetes, executes any piece of code in the form of a container. You can have a look at the code of this cloud-to-device pipeline in the video linked below.
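As a hedged sketch of such a cloud-to-edge pipeline, the snippet below retrieves a weather forecast and dispatches it as a cloud-to-device message through the Azure IoT Hub. The weather endpoint, device IDs and message properties are purely illustrative, and credentials would of course come from secure configuration rather than being hard-coded.

```python
# Hedged sketch of the cloud-to-edge pipeline run as an AI Core execution.
# Weather endpoint, device IDs and message properties are illustrative.
import json

import requests
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "<iot-hub-service-connection-string>"  # placeholder
DEVICE_IDS = ["smart-home-001", "smart-home-002"]  # hypothetical device ids

def fetch_weather_forecast() -> dict:
    # Hypothetical external weather service; replace with the provider you use
    resp = requests.get("https://example.com/api/forecast?location=prosumer-area", timeout=30)
    resp.raise_for_status()
    return resp.json()

def main() -> None:
    payload = json.dumps({"weather": fetch_weather_forecast()})
    registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
    for device_id in DEVICE_IDS:
        # Cloud-to-device message; the edge runtime delivers it to each device
        registry_manager.send_c2d_message(
            device_id, payload, properties={"type": "weather-forecast"}
        )

if __name__ == "__main__":
    main()
```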

But what happens to the data once it lands on the devices? As you may remember, on every device we have the transporter module running. This module is responsible for handling the weather forecast data and for triggering the inference step. The video linked below shows the lines of code that execute these operations in the transporter module.
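Conceptually, the message-handling part of the transporter module could look like the sketch below: a loop that waits for incoming messages on a module input and forwards the assembled features to the predictor endpoint. The input name, predictor URL and placeholder feature values are assumptions and do not reproduce the actual code shown in the video.

```python
# Hedged sketch of the message-handling loop in the "transporter" module.
# Input name, predictor URL and placeholder features are assumptions that
# must match the IoT Edge deployment manifest, routes and model schema.
import json

import requests
from azure.iot.device import IoTHubModuleClient

PREDICTOR_URL = "http://predictor:5000/predict"  # Flask module, reached by module name

def main() -> None:
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        while True:
            # Block until edgeHub routes a cloud-to-edge message to this input
            incoming = client.receive_message_on_input("input1")
            weather = json.loads(incoming.data)

            # Merge the weather forecast with locally computed smart meter
            # features (feature extraction itself is omitted in this sketch)
            features = {**weather, "load_last_24h": 0.0}  # placeholder local feature
            forecast = requests.post(PREDICTOR_URL, json=features, timeout=30).json()
            print("Next-day forecast:", forecast["hourly_forecast"])
    finally:
        client.disconnect()

if __name__ == "__main__":
    main()
```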

The next video shows how to run the entire cloud 2 edge data flow depicted in Fig. 10.

Edge 2 Cloud communication

We’ve seen how to send the data needed for the inference step to the edge and how this data is used for the inference. Now we need to see how to implement the reverse communication, because we have to retrieve the output of the inference step, that is, our energy demand forecasts from all the smart meters attached to the IoT Hub, and bring this data back to the cloud for further processing.

So let’s see how we can implement this data flow (Fig. 11). We can rely again on the transporter module, which can also prepare the output and send it back to the IoT Hub. Then, in the IoT platform, we can set up a route that saves all the energy demand forecasts in the object store, which in our case is Azure Blob Storage since we decided to use the Azure platform for our POC.


Figure 11: E2C data flow.
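On the device side, the edge-to-cloud leg can be as simple as emitting the forecast on a module output, as in the hedged sketch below; the output name "forecastOutput" is an assumption and must match the route defined in the deployment manifest that forwards these messages to the IoT Hub.

```python
# Hedged sketch of the edge-to-cloud leg inside the transporter module: the
# forecast is wrapped in a message and emitted on a module output. The output
# name "forecastOutput" is an assumption matching a route in the manifest.
import json

from azure.iot.device import IoTHubModuleClient, Message

def send_forecast(client: IoTHubModuleClient, device_id: str, forecast: list) -> None:
    payload = json.dumps({"device": device_id, "hourly_forecast": forecast})
    message = Message(payload, content_encoding="utf-8", content_type="application/json")
    client.send_message_to_output(message, "forecastOutput")
```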

The routing process saves the energy demand forecasts as JSON files. The data is then already in the cloud, but if we want to bring it back to BTP we have to develop an additional pipeline. We need to write another piece of code, for example again in Python, which we run in the AI Core runtime as well. This pipeline connects to the Azure Blob Storage, downloads the JSON files for the current day, and prepares and saves the data in Datasphere. Once that is done, the data can easily be consumed in SAP Analytics Cloud. The video linked below shows the lines of code in the transporter module that are responsible for sending the energy demand forecast back to the IoT Hub and explains how to set up the routing in the Azure IoT platform.
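Below is a minimal sketch of what such an edge-to-cloud pipeline could look like, assuming each forecast is saved as its own JSON blob named by date and that Datasphere is reached through its SQL (HANA) endpoint with the hdbcli client. Container name, blob naming convention, table name and credentials are all illustrative assumptions.

```python
# Hedged sketch of the edge-to-cloud pipeline run as an AI Core execution:
# download the current day's forecast JSON files from Azure Blob Storage and
# persist them in Datasphere via its SQL endpoint using hdbcli. Container,
# blob naming, table name and credentials are illustrative assumptions.
import json
from datetime import date

from azure.storage.blob import BlobServiceClient
from hdbcli import dbapi

BLOB_CONNECTION_STRING = "<storage-account-connection-string>"  # placeholder
CONTAINER = "demand-forecasts"  # assumed routing target container

def main() -> None:
    blob_service = BlobServiceClient.from_connection_string(BLOB_CONNECTION_STRING)
    container = blob_service.get_container_client(CONTAINER)

    # Datasphere exposes an SQL endpoint; host, user and table are placeholders
    conn = dbapi.connect(address="<datasphere-host>", port=443,
                         user="<user>", password="<password>", encrypt=True)
    cursor = conn.cursor()

    prefix = date.today().isoformat()  # assumes blobs are organized by day
    for blob in container.list_blobs(name_starts_with=prefix):
        forecast = json.loads(container.download_blob(blob.name).readall())
        for hour, value in enumerate(forecast["hourly_forecast"]):
            cursor.execute(
                'INSERT INTO "DEMAND_FORECAST" '
                "(DEVICE_ID, FORECAST_DATE, HOUR, LOAD_KWH) VALUES (?, ?, ?, ?)",
                (forecast["device"], prefix, hour, value),
            )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    main()
```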

In the video below you can also take a look at the Python code of our edge to cloud pipeline that will run in AI Core.

To complete the implementation deep dive, the last video linked below explains how to run the e2c pipeline in AI Core.

Conclusions

In this blog post we learned how to develop and train one of the AI algorithms needed to build a smart energy grid in SAP BTP. Furthermore, we demonstrated that SAP BTP can be integrated with any third-party IoT platform. There are many options for achieving this integration in BTP depending on your needs, but for our prototype we decided to showcase perhaps the simplest one, drawing on the versatility of the AI Core runtime, where we run a couple of simple pipelines.

What we have explained with this specific, real use case can be an inspiration for many other scenarios, especially when you need to integrate SAP BTP with an IoT platform. The prototype we developed can be found in the GitHub repository we created for our program. The material is highly reusable, so we hope it can be a helpful starting point for your future developments.

Links

Utilities industry

More about BTP integration with third-party IoT platforms

SAP AI Core and AI Launchpad

SAP Datasphere

SAP Analytics Cloud

SAP Cloud for Energy

Others


Source: https://blogs.sap.com/2023/12/19/ai-embedded-flexible-energy-grid-implementation-deep-dive/