Previously, I described how modern applications are developed. In short, Docker is about building individual containers, while Kubernetes is about managing and orchestrating large numbers of them. Microservices are packaged into individual containers and orchestrated into applications (e.g., a business workflow implementing business logic). The exact workflow depends on the business logic the enterprise needs to execute.
Docker provides a container registry that lets enterprises accelerate application development, since many existing functions/services can be reused. The order in which those functions/services are orchestrated can be controlled by another container that executes the top-level business logic. Consider a user logging in to access an application via a web browser: these days, that user is represented to the backend by a container, which authenticates and authorizes the user for the services to be presented. The orchestration and instantiation of those services happens in real time.
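To make that flow concrete, here is a minimal, purely illustrative Python sketch of the top-level business logic: authenticate the user, authorize the requested service, then "instantiate" it in real time. All names (the user table, service registry, and handler functions) are hypothetical stand-ins for the per-user container and backend services described above.

```python
# Hypothetical data stores standing in for the backend's identity
# provider and container/service registry.
USERS = {"alice": {"password": "s3cret", "roles": {"reports", "billing"}}}
SERVICES = {"reports": lambda: "report-data", "billing": lambda: "invoice-data"}

def authenticate(user, password):
    """Check the user's credentials against the identity store."""
    record = USERS.get(user)
    return record is not None and record["password"] == password

def authorize(user, service):
    """Check whether the authenticated user is entitled to the service."""
    return service in USERS[user]["roles"]

def handle_login(user, password, service):
    """Top-level business logic: authenticate, authorize, then
    instantiate (here: call) the requested service in real time."""
    if not authenticate(user, password):
        return "access denied"
    if not authorize(user, service):
        return "not entitled"
    return SERVICES[service]()
```

In a real deployment, each step would typically be its own container, and `SERVICES[service]()` would spin up or route to a service container rather than call a local function.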
Why is that interesting to us?
When we connect to Facebook, LinkedIn, or any other web application, we have an avatar/digital twin that represents us to the virtual backend, and we communicate with the web application through that construct.
Recently I had a conversation with people interested in robots as a service (RaaS) and I heard an interesting statement: “Until you work on robot automation, you don’t know how many problems arise that we humans solve on the fly without conscious thinking.”
Robots need lots of input and must be under full-time control (they can be autonomous, but still require full-time control). Robots can get lost due to connectivity issues or because they don't know how to get home; they can stop working when they lack the data about what to do, or when that data is stale/outdated. In short, humans adapt quickly to a changed context, and machines do not. When the system is mobile, the context is potentially changing all the time, so the data must be constantly updated.
How do we solve this problem?
Well, the Web 2.0 (containerized) software architecture showed us how to solve it for humans (see the first paragraph), but for machines it is a bit different. If a website responds a bit slower or doesn't load (a changing context), humans adapt: we wait a bit longer before making the next decision. Although this sounds simple, it is not simple for machines.
Here is where the avatar/digital twin comes into play: it represents the machine to the business logic, and it represents the business logic to the machine. The avatar/digital twin receives data from the machine and its sensors, and it receives data from the business logic in the cloud. Just as we use a display and keyboard to communicate with cloud applications, machines use the avatar/digital twin as a machine-to-machine (M2M) interface.
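The two-way role of the twin can be sketched as a small class that holds both views: the latest sensor state reported by the machine and the latest context pushed by the cloud, and derives the next instruction from both. This is a hypothetical sketch; the class, fields, and the halt/continue logic are assumptions for illustration, not an actual product API.

```python
class DigitalTwin:
    """Illustrative avatar/digital twin mediating machine <-> cloud."""

    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.sensor_state = {}    # latest data reported by the machine
        self.global_context = {}  # latest data pushed by the cloud logic

    def ingest_sensor_data(self, readings):
        """Machine -> twin: update the local view of the machine."""
        self.sensor_state.update(readings)

    def ingest_cloud_update(self, context):
        """Cloud -> twin: update the view of the global business logic."""
        self.global_context.update(context)

    def instruction_for_machine(self):
        """Twin -> machine: combine both views into the next instruction."""
        if self.global_context.get("halt"):
            return {"action": "stop"}
        return {"action": "continue", "speed": self.sensor_state.get("speed", 0)}
```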
It is important to keep in mind that machines are sensitive to data arriving on time; if it arrives too late, the data is stale. You can't send the whole data set back and forth, since that takes too long from both a throughput and a latency perspective. We need the avatar/digital twin to be placed precisely where it meets the application's requirements.
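A simple way to express the "too late means stale" rule in code is an age check against a latency budget. The 50 ms budget below is an illustrative assumption; real budgets depend entirely on the application and are part of deciding where the twin must be placed.

```python
import time

# Illustrative latency budget: data older than this is treated as stale.
MAX_AGE_S = 0.05  # e.g. a 50 ms budget for a mobile machine (assumption)

def is_stale(timestamp, now=None, max_age=MAX_AGE_S):
    """Return True if data captured at `timestamp` has exceeded the budget."""
    now = time.monotonic() if now is None else now
    return (now - timestamp) > max_age
```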
When a machine comes online, it connects to the enterprise network. Based on authentication and authorization, it connects to the business workflow the machine has to execute. If the machine is wired, the connectivity is reliable; but a mobile machine requires a wireless connection, and that is where the problem arises: its context changes often.
The cloud application can calculate the right logic for the machine to execute, but it is still questionable whether it will arrive on time. This is where the digital twin and the edge come in. Every industrial device will have a digital twin. The industrial device communicates with the cloud application via its digital twin, and vice versa. The global context is calculated by the far cloud; local decisions are made by the avatar/digital twin based on input from the global context (the god's-eye view) and the local conditions collected by sensors on the machine. The avatar/digital twin is highly dependent on connectivity, both mobile (to the device) and wired (to the cloud). This makes the Alef Edge platform the perfect place where connectivity and compute converge.
Our real-time, deterministic mobile connectivity is the foundation for the computer that hosts the avatar/digital twin, which in turn connects to the cloud. It sends and receives data to and from both ends, calculates in time the instruction data set for the machine to execute, and sends updates to the far end to refresh the global context. The business logic can change, but in most cases that will not require changing the full data set. Sending updates to be interpreted based on local conditions is much faster, from both a throughput and a compute perspective, than sending whole data sets back and forth.
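The "send updates, not whole data sets" idea is essentially delta encoding. Here is a minimal sketch of it, assuming the state on each side is a flat key/value dictionary; a real system would also have to handle deletions, versioning, and conflict resolution.

```python
def delta(old, new):
    """Return only the entries of `new` that differ from `old`."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(state, update):
    """Merge a received delta into the local state, returning the new state."""
    merged = dict(state)
    merged.update(update)
    return merged
```

For example, if only the speed setting in a large instruction set changes, the twin transmits one key instead of the whole set, and the far side reconstructs the new state with `apply_delta`.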