Vertical Scaling Scenario

Prerequisite for this scenario is the initial deployment of the Hadoop cluster. Thereafter, a MAPE-K loop is started that periodically checks whether the CPU utilization of the worker node reaches a critical level. If that is the case, a request against the OCCI API is performed, increasing the number of cores and the amount of memory available to the machine. This scenario serves as an example of how to work directly with the OCCI interface, requiring only the execution of REST requests.
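
Such OCCI requests are plain HTTP calls against the MartServer. As a hedged illustration, the following command queries all deployed compute resources of the running model, assuming the default MartServer endpoint on localhost:8080 and that the JSON rendering is accepted:

curl -X GET http://localhost:8080/compute/ -H "Accept: application/json"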

Starting the Adaptation Loop Script

In this scenario, a simple bash script is used to check the gathered monitoring data and perform corresponding actions. Before the adaptation script is started, make sure that the MartServer is running and the Hadoop cluster has been deployed. To start the adaptation loop, execute the vertical.sh script. In the getting started VM the script is located on the desktop. Open a terminal (Ctrl-Alt-T), navigate to the desktop (cd Desktop), and start the script with the following command:

./vertical.sh

Note: To perform other scenarios it is recommended to stop the self-adaptation loop of this scenario. To do so, press Ctrl-C in the terminal running the loop while it is waiting for a new cycle.
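
The actual vertical.sh is shipped with the getting started VM; the following is only a minimal sketch of how such an adaptation loop could be structured. The polling endpoint is taken from the script output shown below, while the state handling and the scaling request are simplified placeholders:

#!/bin/bash
# Minimal sketch of a MAPE-style adaptation loop (not the actual vertical.sh).
STATE="DownScaled"
while true; do
  # Monitor: query the simulated monitoring result for the worker node
  RESULT=$(curl -s "http://localhost:8080/monitorableproperty?attribute=monitoring.result&value=Critical")
  # Analyze: check whether a critical CPU utilization was reported
  if echo "$RESULT" | grep -q '"monitoring.result" : "Critical"'; then
    # Plan and Execute: scale the VM up if it is currently downscaled
    if [ "$STATE" = "DownScaled" ]; then
      echo "Plan: Scale up VM"
      # ... REST request adjusting occi.compute.cores would be issued here ...
      STATE="UpScaled"
    fi
  fi
  sleep 3
done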

Adaptation Loop Script - Output

The output of the script is separated into the individual steps of the MAPE loop: Monitor, Analyze, Plan, and Execute. Depending on the adaptive action to be performed, the output therefore looks similar to:

Starting MAPE script
Requesting http://localhost:8080/monitorableproperty?attribute=monitoring.result&value=Critical every 3 seconds!
Monitor
{
  "id" : "urn:uuid:ba16f4ee-1601-4192-a259-eae4274aed72",
  "kind" : "http://schemas.ugoe.cs.rwm/monitoring#monitorableproperty",
  "mixins" : [ ],
  "attributes" : {
    "monitoring.property" : "CPU",
    "monitoring.result" : "Critical"
  },
  "actions" : [ ],
  "location" : "/monitorableproperty/urn:uuid:ba16f4ee-1601-4192-a259-eae4274aed72",
  "source" : {
    "location" : "/sensor/urn:uuid:efb0f50a-7a7c-4153-b939-4846d6554dbb",
    "kind" : "http://schemas.ugoe.cs.rwm/monitoring#sensor"
  },
  "target" : {
    "location" : "/compute/urn:uuid:2e6a73d0-faaa-476a-bd25-ca461dd166cf",
    "kind" : "http://schemas.ogf.org/occi/infrastructure#compute"
  }
}
Analyze
Critical Compute Detected
Plan: Scale up VM
State: DownScaled
Execute

In this case the query for VMs detects a Critical CPU utilization, resulting in the "Scale up VM" plan. As the current state of the VM, which is stored by the script, is set to DownScaled, a REST request adjusting the number of cores of the VM is executed. Hereby, the occi.compute.cores attribute is updated from 2 to 8 cores. If the VM currently has 8 cores and a downscale plan is executed, the number of cores is set back to 2.
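
Such an update can be performed with a single REST request against the compute resource. The following is a hedged sketch of a partial update using the standard text/occi rendering; the UUID matches the worker node from the output above, and the exact rendering accepted by the MartServer may differ:

curl -X POST http://localhost:8080/compute/urn:uuid:2e6a73d0-faaa-476a-bd25-ca461dd166cf/ \
     -H "Content-Type: text/occi" \
     -H "X-OCCI-Attribute: occi.compute.cores=8"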

Independent of which plan gets executed, the REST response is printed in the terminal executing the script. The response shows the complete information about the updated VM, including its Links, and is therefore rather large. An example log file from the execution of the script can be found here.

Browser - Output

Again, the browser can be used to query the running OCCI model. As the self-adaptation script adjusts the number of cores of the worker node in the Hadoop cluster, the compute node can be queried directly to gain information about its current state. To do so, inspect the occi.compute.cores attribute of the worker node using the following query. The updated values can be checked by refreshing the browser:

http://localhost:8080/compute/urn:uuid:2e6a73d0-faaa-476a-bd25-ca461dd166cf/
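
Alternatively, the same information can be retrieved on the command line. A hedged example, assuming the MartServer accepts the JSON rendering:

curl -X GET http://localhost:8080/compute/urn:uuid:2e6a73d0-faaa-476a-bd25-ca461dd166cf/ \
     -H "Accept: application/json"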

Tuning the Scenario

As the adaptation only scales up on a Critical monitoring result, it may be interesting to tune the simulated monitoring results. To do so, follow the steps defined in the documentation of the dummy connector.

Note: This scenario mainly serves to get started with the OCCI API. Currently, there is no connector implementing the vertical adjustment as shown in this scenario.