to provide adaptive application deployment platforms for the tenants and to offer reliable accessibility to the tenant-side clients.
In order to achieve high QoS for the fog servers, the providers need to address the following aspects.
1.5.4.1 Physical Placement
The physical placement concerns where the providers should deploy their fog servers. Commonly, in the case of iFog, the provider may enable fog servers on all the possible nodes (e.g. cellular base stations) and rely on the underlying communication technologies (see Section 1.4) to support accessibility. On the other hand, in the case of mFog [24, 63], providers need to identify the best geo-location to place the mobile fog nodes in order to provide the best QoS to the end-devices and also to support the cost-efficiency of the operation. For example, in UAV-Fog, the provider may choose the locations for the mobile fog nodes based on the density of the end-devices, the signal coverage of the fog node, the distance between the fog server and the end-devices, and the other context factors described earlier in the discussion of context awareness. In general, the primary goal of physical placement is to achieve the lowest latency in terms of request/response time, application service handover time, and application task migration time.
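To make the placement decision concrete, the following is a minimal sketch of a greedy placement scoring, assuming hypothetical candidate sites, device coordinates, and weights; a real provider would rely on measured coverage maps and a proper optimization model rather than this simple heuristic.

```python
# Minimal sketch: score candidate sites for a mobile fog node by device
# density and distance. All coordinates, radii, and weights are illustrative.
from dataclasses import dataclass
from math import dist

@dataclass
class Candidate:
    name: str
    position: tuple          # (x, y) in metres
    coverage_radius: float   # assumed signal coverage of the mobile fog node

def placement_score(cand, devices, w_density=1.0, w_distance=0.05):
    """Reward the number of covered devices, penalize their average distance."""
    covered = [d for d in devices if dist(cand.position, d) <= cand.coverage_radius]
    if not covered:
        return float("-inf")
    avg_distance = sum(dist(cand.position, d) for d in covered) / len(covered)
    return w_density * len(covered) - w_distance * avg_distance

def best_placement(candidates, devices):
    """Pick the candidate site with the highest score (lowest expected latency)."""
    return max(candidates, key=lambda c: placement_score(c, devices))

# Example: choose where to position a UAV-fog node among three candidate sites.
devices = [(10, 12), (14, 9), (80, 75), (82, 78), (85, 70)]
candidates = [Candidate("site-A", (12, 10), 30.0),
              Candidate("site-B", (82, 74), 30.0),
              Candidate("site-C", (50, 50), 30.0)]
print(best_placement(candidates, devices).name)   # site-B covers the denser cluster
```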
1.5.4.2 Server Discoverability and Connectivity
Server discoverability is a specific requirement for the multitenancy fog services, and it involves two phases.
Multitenancy fog service provider discovery. This is the phase in which the tenants intend to discover feasible fog service providers for deploying their applications. Commonly, based on the experience of the cloud service business model, it is likely that the tenants would discover the providers via indexing services (e.g. Google Search). Alternatively, the providers may establish a federated service registry for service discovery. Furthermore, the provider may follow an open standard-based service description mechanism or interface (e.g. the ETSI MEC standard) to describe their fog services toward helping the tenants discover the service that matches their requirements.
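As a rough illustration of registry-based provider discovery, the sketch below matches tenant requirements against published service descriptions; the descriptor fields and the registry contents are illustrative assumptions, not the actual ETSI MEC service description schema.

```python
# Minimal sketch of provider discovery against a federated registry.
# The descriptor fields and entries are hypothetical.
REGISTRY = [
    {"provider": "fog-op-1", "region": "city-east", "runtime": ["container"],
     "max_latency_ms": 20, "price_per_hour": 0.04},
    {"provider": "fog-op-2", "region": "city-east", "runtime": ["vm", "container"],
     "max_latency_ms": 10, "price_per_hour": 0.09},
]

def discover_providers(registry, region, runtime, latency_budget_ms):
    """Return providers whose published description matches the tenant's needs."""
    return [d for d in registry
            if d["region"] == region
            and runtime in d["runtime"]
            and d["max_latency_ms"] <= latency_budget_ms]

# A tenant looking for a container runtime in "city-east" within a 15 ms budget.
matches = discover_providers(REGISTRY, "city-east", "container", 15)
print([m["provider"] for m in matches])   # ['fog-op-2']
```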
Runtime fog server discovery. This is the runtime service discovery phase for the fog applications. In general, the fog applications hosted on the fog servers need to perform seamless interaction with the end-devices on the move. Besides the mobility schemes that help the tenants to identify the movement of the end-devices, tenants need a corresponding mechanism that can help the end-devices to continuously discover and connect to new fog servers automatically, without any interference from the end-users. Therefore, the fog servers need to support a corresponding API that allows the tenants to configure the application process/task handover and migration mechanism among the fog nodes. Commonly, if such API support is not available, tenants have to enable the corresponding mechanisms from a higher layer of the application, which may result in inefficiencies in both tenancy cost and operational performance.
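The sketch below illustrates such a runtime discovery and handover loop from the client side; the migrate_session call is a hypothetical placeholder for whatever handover/migration API the fog servers actually expose, and the hysteresis factor is an assumed tuning parameter.

```python
# Minimal sketch of runtime fog server discovery and handover for a moving
# end-device. Server positions and the migration API are hypothetical.
from math import dist

def nearest_server(device_pos, servers):
    """Discover the closest fog server for the device's current position."""
    return min(servers, key=lambda s: dist(device_pos, s["position"]))

def migrate_session(session, target):
    # Hypothetical call into the provider's task-migration API.
    print(f"migrating session {session['id']} to {target['name']}")
    session["server"] = target["name"]

def on_device_moved(session, device_pos, servers, hysteresis=1.2):
    """Re-discover the best server; hand over only if it is clearly closer."""
    current = next(s for s in servers if s["name"] == session["server"])
    best = nearest_server(device_pos, servers)
    if best["name"] != current["name"] and \
       dist(device_pos, current["position"]) > hysteresis * dist(device_pos, best["position"]):
        migrate_session(session, best)

servers = [{"name": "fog-A", "position": (0, 0)}, {"name": "fog-B", "position": (100, 0)}]
session = {"id": "s1", "server": "fog-A"}
on_device_moved(session, (90, 0), servers)   # device drifted toward fog-B -> handover
```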
1.5.4.3 Operation Management
In general, fog servers have limited resources, with which they can serve only a fairly limited number of tenant-side applications in each time slot. Therefore, fog servers require dynamic and optimal mechanisms to support their serviceability. Here, we list the basic elements involved in the operation management of fog servers.
Load balancing of request and traffic. Commonly, fog servers are connected with one another vertically or horizontally. Therefore, it is possible to establish a cluster computing group among the fog nodes connected in 0-hop range toward enhancing the overall computational capability. Besides the computation-related loads, since fog nodes are fundamentally Internet gateway devices, heavy network traffic can always affect their serviceability. In order to overcome the traffic-related issues of fog servers, the provider may configure a multilayered caching mechanism that utilizes the fog nodes in the hierarchy to reduce the burden [66].
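The following sketch illustrates the multilayered caching idea: a request is answered by the nearest tier that holds the content, so the traffic burden toward the cloud is reduced; the tier names and contents are illustrative assumptions.

```python
# Minimal sketch of a hierarchical (multilayered) cache across fog tiers.
class CacheTier:
    def __init__(self, name, store=None):
        self.name = name
        self.store = dict(store or {})

    def get(self, key):
        return self.store.get(key)

def fetch(key, tiers, origin):
    """Walk the tiers from edge to cloud; populate the tiers that missed on the way back."""
    missed = []
    for tier in tiers:                       # e.g. [edge_fog, regional_fog, cloud]
        value = tier.get(key)
        if value is not None:
            for m in missed:                 # fill the lower tiers for future requests
                m.store[key] = value
            return value, tier.name
        missed.append(tier)
    value = origin(key)                      # fall back to the origin service
    for m in missed:
        m.store[key] = value
    return value, "origin"

edge = CacheTier("edge-fog")
regional = CacheTier("regional-fog", {"video-42": b"..."})
cloud = CacheTier("cloud-cache")
print(fetch("video-42", [edge, regional, cloud], origin=lambda k: b"from-origin")[1])
# -> 'regional-fog'; the edge tier is now populated for subsequent requests
```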
Server allocation, server scheduling, and server migration. These are three correlated elements, especially for resource-constrained mobile fog nodes (constrained mFog), such as UE-fog and UAV-fog nodes. To explain, a provider may deploy a specific type of fog server on the constrained mFog device for a domain-specific application in a specific period of time, whereas for the rest of the time the device does not operate the fog server at all. For example, in an indie fog environment [17], the owner of a smartphone (UE)-fog may configure the device to serve the context reasoning–based fog server [36] only when the owner is carrying the device in outdoor areas and the battery level of the device is over 50%. Further, the owner can also configure that, when the battery level of the device is between 51% and 70%, it will redirect/migrate the request to another authorized fog node. Similarly, the notion described here is applicable to other MFC domains, such as LV-fog [24] and UAV-fog [63].
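A minimal sketch of the owner-defined scheduling policy in the UE-fog example above might look as follows; the function and the peer list are hypothetical, and the thresholds simply mirror the text.

```python
# Minimal sketch of an owner-configured serve/redirect/decline policy for a UE-fog node.
def handle_request(battery_pct, outdoors, peer_nodes):
    """Decide whether this UE-fog node serves, redirects, or declines a request."""
    if not outdoors or battery_pct <= 50:
        return "declined"                        # the fog server is not operated at all
    if 51 <= battery_pct <= 70 and peer_nodes:
        return f"redirected to {peer_nodes[0]}"  # migrate the request to an authorized peer
    return "served locally"                      # ample battery: handle the request here

print(handle_request(85, outdoors=True, peer_nodes=["ue-fog-7"]))   # served locally
print(handle_request(60, outdoors=True, peer_nodes=["ue-fog-7"]))   # redirected to ue-fog-7
print(handle_request(90, outdoors=False, peer_nodes=["ue-fog-7"]))  # declined
```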
1.5.4.4 Operation Cost
Operating fog servers can be costly for the providers, especially when the providers are unable to identify what tenants really demand. For example, stream data filtering is a common method used in fog computing, and the corresponding program can be quite simple in comparison to the scientific programs operated on the cloud. However, in the classic approach, the tenant may need to wrap the simple program into a package that runs on a resource-intensive VM environment because the provider was following the classic cloud service deployment approach to providing the fog server. Explicitly, the provider has inefficiently increased the burden of the fog node while providing excess service to tenants who demanded only a simple method. Therefore, the provider should consider which service types are the most cost-efficient for the fog servers to support. Besides the service type, the providers of battery-powered mobile fog nodes need to specifically address the energy efficiency of the fog servers in order to improve their sustainability. For instance, although providers can easily replace UAV-Fog nodes, considering the extra latency derived from process/task handover and migration during replacement, frequently replacing UAV-Fog nodes will reduce the QoE for the tenant-side clients.
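To illustrate how lightweight such a task can be, the following sketch shows a stream data filter of the kind discussed above, a few lines of code that hardly justify a full VM image; the threshold and field names are illustrative assumptions.

```python
# Minimal sketch of a lightweight stream-filtering task on a fog node.
def filter_readings(readings, threshold=40.0):
    """Forward only the sensor readings that exceed the alert threshold."""
    for reading in readings:
        if reading.get("temperature", 0.0) > threshold:
            yield reading

stream = [{"sensor": "s1", "temperature": 21.5},
          {"sensor": "s2", "temperature": 43.2}]
print(list(filter_readings(stream)))   # only the s2 reading is forwarded upstream
```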
1.5.5 Security
In large-scale MFC systems, the classic perimeter-based security approach will not suffice; security strategies in MFC must account for various factors: physical security, end-to-end security, and security monitoring and management.
1.5.5.1 Physical Security
Since the fog nodes will be deployed in the wild (e.g. road-side infrastructure in LV-Fog), physical exposure is a more serious threat than in conventional enterprise or cloud computing. The devices need antitamper mechanisms that prevent, detect, and respond to intrusions, while simultaneously considering how to allow maintenance operations without compromising these mechanisms [61].
1.5.5.2 End-to-End Security
End-to-end security is concerned with the security capabilities of each device within the MFC, spanning different layers of the fog architecture and devices therein.
Execution environments. The devices need to include capable software and hardware components solely dedicated to performing security functions (so-called roots of trust, RoT). These components should be isolated from the rest of the platform while also verifying the functions performed by the platform. Based on RoTs, the nodes must have the capability to provide trusted execution environments. In the case of virtualized environments, this can be achieved through virtual trusted platform modules.
Network security. According to the OpenFog security requirements, fog nodes should provide the security services defined by the ITU X.800 recommendation by using standard-based secure transport protocols. Some nodes in the MFC system can provide security services on the network through network function virtualization (NFV) and SDN, for example, deep packet inspection.
Data security. Protection of data must be ensured in all the media in which data may reside or move: in system memory, in persistent storage, or in data exchanged over the network.
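As a simple illustration of protecting data at rest and in transit, the sketch below encrypts a payload with a symmetric key using the third-party cryptography package; key provisioning and in-memory protection are out of scope here and would rely on the RoT-backed mechanisms described above.

```python
# Minimal sketch: encrypt a payload so the same ciphertext can be stored
# locally or transmitted over the network. Key handling is simplified.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, provisioned/protected via the node's RoT
cipher = Fernet(key)

payload = b'{"sensor": "s2", "temperature": 43.2}'
token = cipher.encrypt(payload)      # ciphertext for persistent storage or transmission
assert cipher.decrypt(token) == payload
```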