Commit 13a1d061 authored by erbel

Updated documentation
# Horizontal Scaling Scenario
In this scenario, a Hadoop cluster with one worker node is deployed and scaled according to the gathered CPU utilization.
To this end, a MAPE-K loop is initialized that periodically checks the CPU utilization of all worker nodes.
Based on these values, it is decided whether additional worker nodes are required (scaleUp) or not (scaleDown).
When a scaleUp is performed, a compute node hosting a hadoop-worker component is added to the model,
which gets executed over a [models at runtime engine](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.docci).
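The scaling decision at the core of this loop can be sketched as follows. This is a conceptual sketch only: the CPU value and worker count are hypothetical stand-ins for what MAPE.java actually reads from the monitorable properties of the runtime model.

```shell
#!/bin/sh
# Conceptual sketch of the analyze/plan step of the MAPE-K loop.
# All values are hypothetical; the real implementation lives in MAPE.java
# and operates on the OCCI runtime model, not on shell variables.
cpu="Critical"   # simulated reading of a worker's CPU monitorable property
workers=1        # current number of hadoop-worker nodes

if [ "$cpu" = "Critical" ]; then
  workers=$((workers + 1))   # scaleUp: add a compute node hosting a hadoop-worker
elif [ "$workers" -gt 1 ]; then
  workers=$((workers - 1))   # scaleDown: remove a no-longer-needed worker
fi
echo "workers=$workers"
```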
## Deploying the Cluster
First, the Hadoop cluster has to be deployed. To do so, start the MartServer.
If the getting-started VM is used, the following script can be executed:
```
./startMART.sh
```
*Note:* If this scenario is not performed in a running cloud environment, consider executing the resetMart.sh script first.
Thereafter, start the InitialDeployment.java file as a Java application.
If the VM is used: open a terminal, navigate to the VM's desktop, and execute the initialDeployment.jar:
```
java -jar initialDeployment.jar
```
After the deployment has been performed, you can investigate the deployed OCCI model by opening your browser and querying for OCCI entities:
```
http://localhost:8080/compute
http://localhost:8080/sensor
http://localhost:8080/monitorableproperty
```
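The same queries can also be issued from a terminal. The following is a sketch, assuming the MartServer is reachable on its default port; the Accept header for a JSON rendering is an assumption and may need adjusting to the renderings your server supports.

```shell
#!/bin/sh
# Query the deployed OCCI entities from the command line.
BASE="http://localhost:8080"   # MartServer default port, as used above
for kind in compute sensor monitorableproperty; do
  echo "GET $BASE/$kind"
  # Uncomment with a running MartServer:
  # curl -s -H 'Accept: application/json' "$BASE/$kind"
done
```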
## Starting the MAPE-K Loop
To start the MAPE-K loop, execute MAPE.java as a Java application.
In the VM it is located on the desktop.
```
java -jar MAPE.jar
```
In this scenario, a Java application is started that utilizes the schematized data format of the OCCI metamodel and its extensions.
This scenario serves as an example of how to work with the OCCI ecosystem, including the monitoring extension and the models at runtime engine.
*Note:* To perform this scenario in a running cloud, multiple adjustments have to be made. Please refer to this [documentation](doc/openstack.md) to get started with actual cloud deployments.
## Tuning the Scenario
Again, the simulated monitoring results can be adjusted as described in Section "Tuning the Scenario" of the [first scenario](doc/vertical.md).
Moreover, to investigate what is happening in this scenario, it is recommended to open the OCCI-Studio and execute the scenario from there.
The class is located at:
```
de.ugoe.cs.rwm.mocci/src/main/java/MAPE.java
```
To log specific information, the logger setup, found in RegistryAndLoggerSetup.java, can be manipulated.
The Executor logger is especially interesting, as it shows the REST requests performed against the OCCI API.
*Note:* These can be copied and pasted in order to perform manual requests using the terminal. E.g.: curl -v -X PUT ...
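For illustration, a manual request rebuilt from such a log line might look like the following. The resource path, id, and payload file are hypothetical placeholders, not taken from an actual log:

```shell
#!/bin/sh
# Rebuild a logged REST request by hand (all identifiers are placeholders).
BASE="http://localhost:8080"
ID="placeholder-id"   # take the real resource id from the Executor log output
echo "PUT $BASE/compute/$ID"
# With a running MartServer, the pasted request would look something like:
# curl -v -X PUT "$BASE/compute/$ID" -H 'Content-Type: application/json' -d @payload.json
```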
# Setting up the MartServer
The MartServer implements the OCCI API used to orchestrate the cloud deployments.
It is a major component, serving as the entry point for our application.
In the following, a description is given of how a Docker container for the MartServer can be created, stored, loaded, and started.
## Creating a Docker Container
In order to provide an easy use of the MartServer, a Docker container embedding the server and all required plugins is provided.
However, to use specialized plugins, the Docker image currently has to be recreated. Thus, a brief explanation of the individual steps is given in the following:
1. Install [docker](https://docs.docker.com/install/)
2. Clone the [MOCCI repository](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci)
3. Optional: Add martserver-plugins and roles to be used by the server. Adjust the authorized_keys file for ssh access.
4. Navigate to src/test/resources/
5. Create the Docker image: `sudo docker build -t mart-server .`
6. Test the Docker image: `sudo docker run -p 8080:8080 -p 22:22 mart-server`
7. Store the Docker image: `sudo docker save mart-server > mart-server.tar`
8. To access the container, you can use an ssh connection: `ssh -i $key root@localhost`
To build this container, a fatjar of the MartServer is used. To use newer versions, please refer to the [documentation of the MartServer](https://github.com/occiware/MartServer/blob/master/doc/server.md) on how to create a Docker container.
## Loading a Docker Container
To start the MartServer from a previously stored image, the following steps have to be performed:
1. Download/navigate to the archive containing the Docker image
2. Load the Docker image: `docker load < mart-server.tar`
3. Start the image: `sudo docker run -p 8080:8080 -p 22:22 mart-server`
4. Alternatively, start it with a bash shell: `sudo docker run -p 8080:8080 -p 22:22 -i -t mart-server /bin/bash`
5. To access the container, you can use an ssh connection: `ssh -i $key root@localhost`
## Configuring the MartServer to be used in OpenStack
[Documentation on how to set up and configure the MartServer for an OpenStack Cloud](doc/openstack.md)
# Vertical Scaling Scenario
In this scenario, a Hadoop cluster with one monitored worker node is deployed.
Thereafter, a MAPE-K loop is initialized that periodically checks whether the CPU utilization of the worker node reaches a critical level.
If that is the case, a request against the OCCI API is performed, increasing the number of cores and the memory available to the machine.
## Deploying the Cluster
First, the Hadoop cluster has to be deployed. To do so, start the MartServer.
If the getting-started VM is used, the following script can be executed:
```
./startMART.sh
```
*Note:* If this scenario is not performed in a running cloud environment, consider executing the resetMart.sh script first.
Thereafter, start the InitialDeployment.java file as a Java application.
If the VM is used: open a terminal, navigate to the VM's desktop, and execute the initialDeployment.jar:
```
java -jar initialDeployment.jar
```
After the deployment has been performed, you can investigate the deployed OCCI model by opening your browser and querying for OCCI entities:
```
http://localhost:8080/compute
http://localhost:8080/sensor
http://localhost:8080/monitorableproperty
```
## Starting the Adaptation Script
To start the adaptation, execute the vertical.sh script.
In the VM, it is located on the desktop; otherwise, you can find it [here](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/blob/master/src/main/resources/vertical.sh).
```
./vertical.sh
```
In this scenario, a simple bash script is used to check the gathered monitoring data and perform corresponding actions.
This scenario serves as an example of how to directly work with the OCCI API, including the monitoring extension, by writing small bash scripts.
*Note:* This scenario mainly serves to get started with the OCCI API. Currently, there is no connector implementing the vertical adjustment shown in this scenario.
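The core idea of such a script can be sketched as follows. The attribute name comes from the standard OCCI infrastructure extension; the values and the way the monitoring state is read are hypothetical simplifications, not the actual contents of vertical.sh:

```shell
#!/bin/sh
# Sketch of a vertical adaptation step (not the actual vertical.sh).
cpu="Critical"   # simulated monitoring result; the real script reads it from the OCCI API
cores=2          # current number of cores of the worker's compute resource

if [ "$cpu" = "Critical" ]; then
  cores=$((cores * 2))
  echo "request: set occi.compute.cores=$cores on the worker"
  # The real script would issue a request against the worker's compute resource,
  # along the lines of: curl -v -X PUT http://localhost:8080/compute/<worker-id> ...
fi
```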
## Tuning the Scenario
As the adaptation only reacts to a Critical value, it may be interesting to tune the simulated monitoring results.
To do so, the following steps have to be performed:
1. Stop the MartServer (CTRL-C)
2. Navigate to ~/martserver-plugins
3. Open the de.ugoe.cs.rwm.mocci.connecter.dummy.jar with the archive manager.
4. Double-click on the resultprovider.properties file
5. Adjust the values to your liking
The file contains the following:
```
CPU = None,Low,Medium,High,Critical,5000
```
* CPU: Represents the monitorable.property to be adjusted.
* None,Low,Medium,High,Critical: Represents the simulated monitoring.results, written in the given order.
* 5000: Represents the interval in which monitoring.results are written.
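For example, to reach the Critical level more quickly, the sequence of simulated values and the interval could be shortened. This is an illustrative adjustment following the format above, assuming the provider accepts shorter value sequences:

```
CPU = Low,Critical,2000
```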
*Note:* If you want to execute the second scenario, please restore the resultprovider.properties file to its original state.