Commit ff8774dc authored by erbel (parent 904c6e0f): Adjusted Readme. Pipeline #88751 passed.
# MOCCI
[![build status](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/badges/master/pipeline.svg)](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/commits/master)
MOCCI is an extension for the [Open Cloud Computing Interface (OCCI)](http://occi-wg.org/about/specification/) that enables model-driven management of monitoring devices in the cloud, as well as storing their results within an OCCI-compliant runtime model. Together with other tools from the OCCI ecosystem, it provides a testing and execution environment for self-adaptive systems. The tooling also allows obtaining a snapshot of the EMF-based architecture runtime model in [JSON format](./doc/browser.png). In the following you will find a getting started guide, in which a preconfigured VirtualBox image is downloaded to perform example scenarios, and a tutorial on how to enrich existing OCCI models with monitoring functionality.
Moreover, an introduction to MOCCI's components is provided, as well as links and descriptions on how to integrate MOCCI with other pre-existing tooling from the OCCI ecosystem. The paper submitted with this artifact and the VirtualBox image can be found [here](https://owncloud.gwdg.de/index.php/s/5u2ddnyyNlzecM5), with the password being mocci.
## Getting Started
In this scenario a hadoop cluster with one worker node is deployed and scaled accordingly.
Therefore, a MAPE-K loop is initialized that periodically checks the CPU utilization of all worker nodes.
Thereafter, it is decided whether additional worker nodes are required (scaleUp) or not (scaleDown).
When a scaleUp is performed, a compute node hosting a hadoop-worker component is added to the model,
which gets executed over [DOCCI](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.docci), a models at runtime engine.
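The scaleUp/scaleDown decision can be summarized in a few lines of shell. This is an illustrative sketch only; the counters and the rule shown here are made up, and the actual logic is implemented in MAPE.jar:

```shell
# Illustrative sketch of the scaleUp/scaleDown decision (not the MAPE.jar code).
critical_cpus=1   # worker VMs whose CPU utilization is Critical
none_cpus=0       # worker VMs whose CPU utilization is None
active_workers=1  # workers currently deployed in the cluster

if [ "$critical_cpus" -ge 1 ]; then
  decision="upScale"
elif [ "$none_cpus" -ge 1 ] && [ "$active_workers" -gt 1 ]; then
  decision="downScale"
else
  decision="none"
fi
echo "Plan: $decision"
```

With at least one Critical worker, the sketch plans an upscale; with spare workers showing None utilization, it plans a downscale.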
## Starting the MAPE-K loop
Before the MAPE-K loop is started, make sure that the MartServer is running and the hadoop cluster has been deployed.
In this scenario, a Java application is started that utilizes the schematized data format of the OCCI metamodel and its extensions.
This scenario serves as an example of how to work with the OCCI ecosystem, including the monitoring extension and the models at runtime engine.
To start the MAPE-K loop, execute MAPE.java as a Java application.
In the VM it is located on the desktop. Open a new terminal (Ctrl-Alt-T) and navigate to the desktop (cd Desktop). You can start the loop by executing the following command:
```
java -jar MAPE.jar
```
### MAPE-K loop - Output
The output of the script is separated into the individual steps of the MAPE loop: Monitor, Analyze, Plan, and Execute.
Thus, depending on the adaptive action to be performed, the output looks similar to:
```
Starting MAPE loop
--------------------Waiting for new MAPE-K Cycle: Sleeping 10000--------------------
Monitor: Monitored CPUs: 1| Critical CPUs: 1| None CPUs: 0
Analyze: Critical State Detected
Plan: upScale!
Adding Compute Node to Model
Ip in Hadoop Network: 10.254.1.12
Adding Worker Component to Model
Adding Sensor to Model
Adding Datagatherer to Model
Adding Dataprocessor to Model
Adding Resultprovider to Model
Execute: Deploying adjusted Model
```
In this case we queried for VMs with Critical and None CPU utilization and detected exactly one being Critical. Thus, a critical state is detected, for which an upscaling of the cluster is planned. A compute node with the IP 10.254.1.12 is added to the hadoop cluster, as well as the worker component hosted on this VM. Moreover, a Sensor, including its monitoring devices, is added to the model. These are responsible for monitoring the newly added worker in the cluster. Before the changes get executed by putting the adjusted model into DOCCI, a model transformation is performed on it to add provider-specific information to the model, e.g., an attachment of the new VM to the management network.
If more than one worker node is currently active in the cluster and at least one has a CPU utilization of None, the downscaling removes the VM with None utilization from the cluster. To investigate the expected behavior of this self-adaptive control loop you can check an example log [here]. This log also includes all REST requests performed against the OCCI interface.
*Note*: If you want to get the same information as in the full log, including requests performed against the OCCI interface, you can alternatively start the MAPE_Exec_Info.jar.
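The downscaling rule can be sketched as follows; the worker IPs and CPU states below are made up for illustration, and the real selection happens inside MAPE.jar based on the runtime model:

```shell
# Hypothetical worker list as "ip:state" pairs; in MOCCI these states come
# from the monitorableproperties in the runtime model.
workers="10.254.1.11:Low 10.254.1.12:None"
active=2   # number of currently active workers
victim=""

for w in $workers; do
  ip=${w%%:*}
  state=${w##*:}
  # remove a worker only if more than one is active and its utilization is None
  if [ "$active" -gt 1 ] && [ "$state" = "None" ] && [ -z "$victim" ]; then
    victim=$ip
  fi
done
echo "downScale: removing $victim"
```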
### MartServer - Output
During the execution of the MAPE-K loop, sensors are added to and released from the running cloud deployment. Thus, different numbers of monitorableproperties are filled with values by the dummy connector. An excerpt can be found beneath. Moreover, the MartServer logs how each single request adjusting the runtime model is handled.
```
INFO MonProp: CPU, set to: Low(ba16f4ee-1601-4192-a259-eae4274aed72)
INFO MonProp: CPU, set to: High(e0053a21-7349-4918-bb1d-ecfffb2d4efb)
INFO MonProp: CPU, set to: None(ba16f4ee-1601-4192-a259-eae4274aed72)
```
### Browser - Output
Again you can check the amount of deployed resources by using your browser. Depending on whether the system scales up or down, different numbers of worker nodes are present in the runtime model.
Additionally, on an upscale the MartServer should log multiple simulated monitoring results. The currently deployed sensors, applications, components, and compute nodes, as well as their monitorableproperties, can be accessed by the following example queries. The information depicted can be updated by **refreshing the browser**.
```
http://localhost:8080/compute/
http://localhost:8080/sensor/
http://localhost:8080/monitorableproperty/
http://localhost:8080/application/
http://localhost:8080/component/
```
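All of these collection queries follow the same pattern, `<server>/<kind>/`. A small sketch that builds them (8080 is the MartServer port used in the VM):

```shell
# Build the example query URLs for each OCCI kind exposed by the MartServer.
base="http://localhost:8080"
queries=""
for kind in compute sensor monitorableproperty application component; do
  queries="$queries $base/$kind/"
done
echo "$queries"
```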
## Tuning the Scenario
Again, the simulated monitoring results can be adjusted. To do so, follow the steps defined in the documentation of the [dummy connector](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/tree/master/de.ugoe.cs.rwm.mocci.connector.dummy).
Moreover, the behaviour of the models at runtime engine can be investigated in more detail by adjusting specific logger settings in the RegistryAndLoggerSetup.java file. To do so, open OCCI-Studio and execute the scenario from there.
The class is located at:
```
de.ugoe.cs.rwm.mocci/src/main/java/MAPE.java
```
To log specific information, the logger setup, found in RegistryAndLoggerSetup.java, can be manipulated.
Especially the Executor logger is interesting, as it shows the REST requests performed against the OCCI API.
*Note:* These requests can be copied and pasted in order to perform manual requests using the terminal, e.g.: curl -v -X PUT ...

*Note:* To perform this scenario in a running cloud, multiple adjustments have to be performed. In the code itself, only the kind of connector has to be adjusted from LocalhostConnector to MartConnector. Moreover, a MartServer without actual connectors has to be created. We used the connectors located [here](../src/test/resources/martserver-plugins/live) and the ansible roles located [here](../src/test/resources/roles). Please refer to this [documentation](doc/openstack) to get started with actual cloud deployments.
# Sensor Creation
In this scenario the hadoop cluster model gets enhanced to monitor Memory utilization in addition to CPU utilization of worker nodes.
Therefore, a tree editor in OCCI-Studio is used as described in the following.
## Enhancing the Model for Simulation Purposes
The following description shows how to adapt the initial deployment model using an editor within OCCIWare Studio:
### Starting OCCI-Studio
1. Open the folder OCCI-Studio located on the Desktop.
2. Double click on OCCI-Studio to open the IDE.
3. Now you should see a checked out version of MOCCI.

### Creating a new Modeling Project
1. Right click on the Project Explorer
2. Choose New->Project...
3. Search for Modeling and choose Sirius Modeling Project
4. Press Next and name the project MemMonitoring
5. Copy the src/main/resources/hadoopClusterCPU.occic file from the MOCCI project into the MemMonitoring project
6. Rename the file in your project to hadoopClusterCPUandMem.occic

### Utilizing the Tree Editor
1. Right click on hadoopClusterCPUandMem.occic
2. Choose Open With->OCCI Model Editor
3. Expand the first item, as well as the Configuration it contains
4. Now you can see the hadoop cluster model used for the initial deployment as shown below

![Tree](./tree.jpg "Tree")

### Adding a Memory Sensor to the Model
1. Right click on Configuration->New Child->Resource
    1. Open the properties view and set the kind of the new resource to Sensor
    2. Right click on the Sensor->New Child->Link
        1. Set the kind of the new link to MonitorableProperty
        2. Set the target of the link to the compute node it should monitor
    3. Right click on the MonitorableProperty->New Child->Attribute State
        1. Name the attribute monitoring.property
        2. Set the value of the attribute to Mem
2. Right click on Configuration->New Child->Resource
    1. Set the kind of the new resource to ResultProvider
3. Right click on Configuration->New Child->Resource
    1. Set the kind of the new resource to DataGatherer
4. Create two links within the Resource representing the recently created Sensor: right click New Child->Link
    1. Set the kind of both links to ComponentLink
    2. Set the target of one link to the Resource representing the ResultProvider
    3. Set the target of the other link to the Resource representing the DataGatherer
5. Save your changes to the model. It should look similar to the model shown below:

![AdjustedTree](./adjustedTree.jpg "AdjustedTree")
### Deploying the adjusted Model
Even though the model currently does not contain the placementlinks for the new monitoring sensor that are required for a deployment in the cloud, it is sufficient to perform the simulation.
To test the enhanced model follow the enumerated instructions:
1. Copy the hadoopClusterCPUandMem.occic file to the desktop
2. Open a terminal (Ctrl-Alt-T) and navigate to the desktop (cd Desktop)
3. Start the MartServer (./startMart.sh)
4. Open another terminal (Ctrl-Alt-T) and navigate to the desktop (cd Desktop)
5. Execute the following command
```
java -jar initialDeployment.jar hadoopClusterCPUandMem.occic
```
This command puts your adjusted model into the models at runtime engine, which is also used for the initial deployment.
To start the added sensor, execute the following REST request.
The string **sensorid has to be replaced** with the id of the Resource representing the Sensor you added.
For example, curl can be used by starting another terminal.
```
curl -v -X POST http://localhost:8080/sensor/sensorid/?action=start -H 'Content-Type: text/occi' -H 'Category: start; scheme="http://schemas.modmacao.org/occi/platform/application/action#"; class="action"'
```
Finally, the added Sensor's monitorable property should be filled with simulated monitoring data.
This can be checked by investigating the model through the browser or by having a look at the output of the MartServer.
However, this time the simulated data corresponds to the values that are simulated for Mem.
To adjust the simulation, follow the instructions given in the [dummy connector](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/tree/master/de.ugoe.cs.rwm.mocci.connector.dummy) documentation.
### Visualizing the Model
To visualize the model in a graphical editor, the following steps have to be performed:
1. Double click on representation.aird
2. Press Add... in the Models section
3. Choose Browse Workspace...
4. Select MemMonitoring->hadoopClusterCPUandMem.occic
5. Now the Model and its possible representations should be loaded (transparent)
6. Under Representations double click on OCCI Configuration diagram
7. Next choose Configuration diagram (0) (non-transparent)
8. Select the Configuration element and press Finish

To reduce the size of the model visualization, collapse the User_Data and SSH information of each Compute node, as these are rather large strings.
Thereafter, press arrange all in the editor.
Now the cloud deployment is ready to be explored.
## Adjusting the Model for an actual deployment
To make the model ready for an actual deployment, placementlinks have to be added to each component of the sensor.
Moreover, the individual components have to be tagged with user mixins referring to their configuration management scripts.
Please refer to the [connector documentation](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/tree/master/de.ugoe.cs.rwm.mocci.connector) of the monitoring extension for more information.
The model for a concrete deployment of a memory monitoring sensor, including its configuration management script, can be found [here](../src/main/resources/hadoopClusterNewExtWithMem.occic).
Thereafter, a MAPE-K loop is initialized that periodically checks whether the CPU utilization of the machine is critical.
If that is the case, a request against the OCCI API is performed, increasing the number of cores and memory available to the machine.
This scenario serves as an example showing how to directly work with the OCCI interface, only requiring the execution of REST requests.
## Starting the Adaptation Loop Script
In this scenario, a simple bash script is used to check the gathered monitoring data and perform corresponding actions.
Before the adaptation script is started, make sure that the MartServer is running and the hadoop cluster has been deployed.
To start the script, execute the [vertical.sh](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/blob/master/src/main/resources/vertical.sh) script.
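Conceptually, such an adaptation script reads the monitored CPU state and, when it is critical, issues a resize request against the OCCI API. The following sketch only prints the request it would perform; the `<vmid>` placeholder and the concrete attribute value are illustrative, see vertical.sh for the real requests:

```shell
# Hedged sketch: react to a critical CPU state by requesting more cores.
# occi.compute.cores is the standard OCCI compute attribute; <vmid> is a placeholder.
cpu_state="High"   # would normally be parsed from the monitorableproperty
request=""
if [ "$cpu_state" = "High" ]; then
  request="curl -v -X PUT http://localhost:8080/compute/<vmid> -H 'Content-Type: text/occi' -H 'X-OCCI-Attribute: occi.compute.cores=4'"
fi
echo "$request"
```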
The commit also adjusts DownScaler.java:
```
@@ -13,6 +13,7 @@ package de.ugoe.cs.rwm.mocci;
 import java.nio.file.Path;
 import java.util.List;
+import org.eclipse.cmf.occi.core.AttributeState;
 import org.eclipse.cmf.occi.core.Configuration;
 import org.eclipse.cmf.occi.core.Link;
 import org.eclipse.cmf.occi.core.MixinBase;
@@ -117,6 +118,14 @@ public class DownScaler extends AbsScaler {
 		for(Link link: comp.getLinks()) {
 			if(link instanceof Networkinterface) {
 				Networkinterface nwi = (Networkinterface) link;
+				for(AttributeState attr: nwi.getAttributes()) {
+					if(attr.getName().equals("occi.networkinterface.address")) {
+						if(attr.getValue().startsWith("10.254.1")) {
+							interfaces.add(attr.getValue());
+						}
+					}
+				}
+				/*
 				for(MixinBase mixB: nwi.getParts()) {
 					if(mixB instanceof Ipnetworkinterface) {
 						Ipnetworkinterface ipnwi = (Ipnetworkinterface) mixB;
@@ -125,6 +134,7 @@ public class DownScaler extends AbsScaler {
 					}
 				}
+				*/
 			}
 		}
 	}
```
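The loop added in this commit keeps only addresses that belong to the hadoop network (prefix 10.254.1). The same filter, expressed as a quick shell check with made-up sample addresses:

```shell
# Keep only addresses in the 10.254.1 hadoop network (sample data).
addresses="10.254.1.12 192.168.0.5 10.254.1.13"
matched=""
for a in $addresses; do
  case $a in
    10.254.1.*) matched="$matched $a" ;;
  esac
done
echo "$matched"
```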