diff --git a/README.md b/README.md
index 681500c9b0ac387723470efbf00d9e9603ec2eca..b6c4015f41791eb95ed48f8277c3e3c125708d65 100644
--- a/README.md
+++ b/README.md
@@ -37,10 +37,10 @@ Next start the virtual machine:
 ## Step-by-step example scenarios
 To learn the ropes of MOCCI, we provide step-by-step instructions for three example scenarios, which are based on the same initial deployment model.
 
-1. [Initial Deployment] (doc/intial.md): This tutorial demonstrates how to deploy an initial cloud application getting monitored by MOCCI.
-1. [Vertical Scaling](doc/vertical.md): This scenario scales a VM in the initial deployment up and down according to its CPU utilization.
-2. [Horizontal Scaling](doc/horizontal.md): This scenario dynamically adds and releases worker nodes in the hadoop cluster of the initial deployment.
-3. [Sensor Creation](doc/own.md): This scenario shows how to add sensors to an OCCI model.
+1. [Initial Deployment](doc/initial.md): This tutorial demonstrates how to deploy an initial cloud application getting monitored by MOCCI.
+2. [Vertical Scaling](doc/vertical.md): This scenario scales a VM in the initial deployment up and down according to its CPU utilization.
+3. [Horizontal Scaling](doc/horizontal.md): This scenario dynamically adds and releases worker nodes in the hadoop cluster of the initial deployment.
+4. [Sensor Creation](doc/own.md): This scenario shows how to add sensors to an OCCI model.
 
 *Note:* Please note that executing the example scenarios in a distributed environment requires access to that environment as well as connectors implementing how the different requests should be handled.
 Thus, the provided scenarios are based on monitoring data simulated by the [MOCCI connector dummy](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/tree/master/de.ugoe.cs.rwm.mocci.connector.dummy).
diff --git a/de.ugoe.cs.rwm.mocci.connector/README.md b/de.ugoe.cs.rwm.mocci.connector/README.md
index 814a8be5d0f298d77476d31903cdf5755cab2ed9..f63d8cd40336f24a7fa84e8aa737ea984fe39753 100644
--- a/de.ugoe.cs.rwm.mocci.connector/README.md
+++ b/de.ugoe.cs.rwm.mocci.connector/README.md
@@ -1,4 +1,4 @@
-# Monitoring Extension Connector Dummy
+# Monitoring Extension Connector
 This component represents the connector for the OCCI monitoring extension. The skeleton for this connector is generated using [OCCI-Studio](https://github.com/occiware/OCCI-Studio).
 For each element in the monitoring extension, a single connector file is present, implementing how to react to different REST requests addressing the corresponding OCCI element.
 As the elements of the monitoring extension mainly inherit from elements of the enhanced platform extension provided by [MoDMaCAO](https://github.com/occiware/MoDMaCAO), the implementation of the lifecycle actions is quite similar. To handle the management of each individual component of a sensor, configuration management scripts have to be attached to them.
diff --git a/doc/horizontal.md b/doc/horizontal.md
index d71b89535401e6560611b42e86eef43882700aa2..1204c236eaeb1b182ee1746e9df47cbd9a108d3b 100644
--- a/doc/horizontal.md
+++ b/doc/horizontal.md
@@ -1,5 +1,5 @@
 # Horizontal Scaling Scenario
-Prerequisite for this scenario is the initial deployment of the [hadoop cluster](./initial.md).
+Prerequisite for this scenario is the [initial deployment](./initial.md) of the hadoop cluster.
 In this scenario a hadoop cluster with one worker node is deployed and scaled according to gathered CPU utilization.
 Therefore, a MAPE-K loop is initialized that periodically checks the CPU utilization of all worker nodes.
 Thereafter, it is decided whether additional worker nodes are required (scaleUp) or not (scaleDown).
@@ -8,15 +8,24 @@ which gets executed over a [models at runtime engine](https://gitlab.gwdg.de/rwm
 
 
 ## Starting the MAPE-K loop
+Before the MAPE-K loop is started, make sure that the MartServer is running and the hadoop cluster has been deployed.
+In this scenario, a Java application is started that utilizes the schematized data format of the OCCI metamodel and its extensions.
+This scenario serves as an example of how to work with the OCCI ecosystem, including the monitoring extension and the models at runtime engine.
 To start the MAPE-K loop, execute MAPE.java as a Java application.
-In the VM it is located on the desktop.
+In the VM it is located on the desktop. Open a new terminal, navigate to the desktop, and start the loop by executing the following command:
 ```
 java -jar MAPE.jar
 ```
-In this scenario, a Java application is started that utilizes the schematized data format of the OCCI metamodel and its extensions.
-This scenario serves as an example on how to work with the OCCI ecosystem, including the monitoring extension and the models at runtime engine.
+If you want more information about the requests performed by the models at runtime engine, you can alternatively start MAPE_Exec_Info.jar.
+Again, you can check the number of deployed resources using your browser. Depending on whether the system scales up or down, a different number of worker nodes is present in the runtime model.
+Additionally, on an upscale, the MartServer should log multiple simulated monitoring results.
+The compute nodes as well as their monitorableproperty links can be accessed via the following example queries:
+```
+http://localhost:8080/compute/
+http://localhost:8080/monitorableproperty/
+```
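The periodic check performed by the MAPE-K loop can be sketched in a few lines of shell. This is a minimal sketch only: the 70% threshold and the scaleUp/scaleDown decision names are assumptions for illustration, not the exact logic implemented in MAPE.jar.

```shell
#!/bin/sh
# Minimal sketch of the analyze/plan step of a MAPE-K loop.
# The 70% threshold and the scaleUp/scaleDown names are assumptions
# for illustration; the actual logic is implemented in MAPE.jar.
decide_scaling() {
  for cpu in "$@"; do
    if [ "$cpu" -gt 70 ]; then
      echo "scaleUp"     # at least one worker is overloaded
      return 0
    fi
  done
  echo "scaleDown"       # all workers are below the threshold
}

decide_scaling 35 90   # one overloaded worker -> scaleUp
decide_scaling 20 30   # all workers idle -> scaleDown
```

In the real loop, the CPU values would be read from the monitorableproperty links in the runtime model, and the decision would trigger the deployment or release of a worker node.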
 
-*Note:* To perform this scenario in a running cloud multiple adjustments have to be performed. Please refer to this [documentation](doc/openstack) to get started with actual cloud deployments.
+*Note:* To perform this scenario in a running cloud, multiple adjustments have to be performed. In the code itself, only the kind of connector has to be changed from LocalhostConnector to MartConnector. Moreover, a MartServer with actual connectors has to be created. We used the connectors located [here](../src/test/resources/martserver-plugins/live). Please refer to this [documentation](doc/openstack) to get started with actual cloud deployments.
 
 ## Tuning the Scenario
 Again, the simulated monitoring results can be adjusted as described in the "Tuning the Scenario" section of the [first scenario](doc/vertical.md).
diff --git a/doc/own.md b/doc/own.md
index 4ee40ed28c07fe5efd849aa5094e693e0b508200..236af363f0569c157ad24f5f18775a909520c1d1 100644
--- a/doc/own.md
+++ b/doc/own.md
@@ -2,7 +2,7 @@
 In this scenario the hadoop cluster model gets enhanced to monitor memory utilization in addition to CPU utilization of worker nodes.
 
 ## Enhancing the Model for Simulation Purposes
-The following description shows how to adapt the initial deployment using an editor within OCCIWare Studio:
+The following description shows how to adapt the initial deployment model using an editor within OCCIWare Studio:
 
 1. Start OCCIWare Studio.
 2. Create a new Sirius Modeling Project
@@ -10,17 +10,23 @@ The following description shows how to adapt the initial deployment using an edi
 4. Open the file with the OCCI Model Editor
 5. Expand the first item, as well as the Configuration it contains
 6. Right click on Configuration->New Child->Resource
-   1. Set the kind of the new resource to Sensor
+   1. Open the properties view and set the kind of the new resource to Sensor
    2. Right click on the Sensor to add a new link
-      1. Set the kind of the link to monitorableproperty
+      1. Set the kind of the link to Monitorableproperty
       2. Set the target of the link to the compute node it should monitor
-      3. Add a new Attribute to the monitorableproperty link
-          1.  Name it monitoring.property with value Mem
-7. Create a ResultProvider and a DataGatherer in the same way
-8. Create a ComponentLink linking the Sensor to the ResultProvider and DataGatherer
+      3. Add a new attribute to the Monitorableproperty link
+          1. Name the attribute monitoring.property
+          2. Set the value of the attribute to Mem
+7. Add two new resources to the Configuration
+    1. Set the kind of one resource to ResultProvider 
+    2. Set the kind of the other resource to DataGatherer
+8. Create two ComponentLinks within the Sensor
+     1. Link one to the ResultProvider
+     2. Link the other to the DataGatherer
 9. Save your changes
 
-Even though the model currently does not consist of all monitoring instruments, and placementlinks, which are required for an actual deployment, it is sufficient to execute first tests. To test the enhanced model follow the enumerated instructions:
+Even though the model currently does not contain the placementlinks required for an actual deployment, it is sufficient to perform the simulation.
+To test the enhanced model follow the enumerated instructions:
 
 1. Copy the enhanced model to the desktop
 2. Open a terminal and navigate to the desktop
@@ -30,8 +36,8 @@ Even though the model currently does not consist of all monitoring instruments,
 ```
 java -jar initialDeployment.jar hadoopClusterCPU.occic
 ```
-Now your enhanced model got deployed.
-Next start the added sensor by executing the following REST request.
+This command puts your adjusted model into the models at runtime engine.
+To start the added sensor execute the following REST request.
 For example, curl can be used from another terminal.
 
 ```
@@ -39,12 +45,17 @@ curl -v -X POST http://localhost:8080/sensor/sensorid/?action=undeploy -H 'Conte
 ```
 
 Finally, the added Sensor's monitorable property should be filled with simulated monitoring data.
+However, this time the simulated data corresponds to the values simulated for Mem.
+To adjust the simulation, follow the instructions given in the "Tuning the Scenario" section of the [vertical scaling scenario](./vertical.md).
 
 
 ## Adjusting the Model for an Actual Deployment
-To finish the model you can add DataProcessor and DataGatherer the same way as the ResultProvider was created. 
+To make the model ready for an actual deployment, a placementlink has to be added to each component of the sensor.
+Moreover, the individual components have to be tagged with user mixins referring to their configuration management scripts.
+Please refer to the [connector documentation](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/tree/master/de.ugoe.cs.rwm.mocci.connector) of the monitoring extension for more information.
+The model for a concrete deployment of a memory monitoring sensor, including its configuration management script, can be found [here](../src/main/resources/hadoopClusterNewExtWithMem.occic).
 
 ## Visualizing the Model
-1. Doubleclick on representations.aird
-2. Choose configuration 
-3. Collapse all elements and reorder them
\ No newline at end of file
+To visualize the model, the following steps can be performed.
+In the Sirius modeling project, double-click on the representations.aird file, then
+double-click on Configuration. It is recommended to collapse all elements and reorder them to make the model more accessible.
\ No newline at end of file
diff --git a/doc/vertical.md b/doc/vertical.md
index e21f3878c7a26d9a657ba51043c27770a5e6ea5c..411a533c55f7b8f6f3e3d7726349c445a5270247 100644
--- a/doc/vertical.md
+++ b/doc/vertical.md
@@ -1,21 +1,28 @@
 # Vertical Scaling Scenario
-Prerequisite for this scenario is the initial deployment of the [hadoop cluster](./initial.md).
+Prerequisite for this scenario is the [initial deployment](./initial.md) of the hadoop cluster.
 Thereafter, a MAPE-K loop is initialized that periodically checks whether the CPU utilization of the worker node reaches a critical level.
 If that is the case a request against the OCCI API is performed, increasing the number of cores and memory available to the machine.
 
 ## Starting the Adaptation Script
-To start the script execute the vertical.sh script.
-In the VM it is located on the desktop, otherwise you can find it [here](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/blob/master/src/main/resources/vertical.sh).
+In this scenario, a simple bash script is used to check the gathered monitoring data and perform corresponding actions.
+This scenario serves as an example of how to directly work with the OCCI API, including the monitoring extension, by writing small bash scripts.
+Before the adaptation script is started, make sure that the MartServer is running and the hadoop cluster has been deployed.
+To start the script execute the [vertical.sh](https://gitlab.gwdg.de/rwm/de.ugoe.cs.rwm.mocci/blob/master/src/main/resources/vertical.sh) script.
+In the getting started VM, the script is located on the desktop. Open a terminal, navigate to the desktop, and start the script with the following command:
 ```
 ./vertical.sh
 ```
-In this scenario, a simple bash script is used to check the gathered monitoring data and perform corresponding actions.
-This scenario serves as an example on how to directly work with the OCCI API, including the monitoring extension, by simply writing small bash scripts.
+
+When a critical CPU utilization is present in the model, the monitored VM is scaled up from 2 to 8 cores.
+Otherwise, it is scaled down to 2 cores. The current number of cores is most easily checked by querying the compute node in a browser:
+```
+http://localhost:8080/compute/urn:uuid:2e6a73d0-faaa-476a-bd25-ca461dd166cf/
+```
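The core decision of the adaptation script can be sketched as follows. The core counts (2 and 8) come from the scenario description above; the "Critical" severity label and the shape of the OCCI request are assumptions for illustration, not necessarily the exact contents of vertical.sh.

```shell
#!/bin/sh
# Sketch of the vertical adaptation decision. The core counts (2 and 8)
# follow the scenario description; the "Critical" severity label and the
# shape of the OCCI request are assumptions for illustration.
choose_cores() {
  if [ "$1" = "Critical" ]; then
    echo 8   # scale the VM up
  else
    echo 2   # scale the VM back down
  fi
}

cores=$(choose_cores "Critical")
echo "setting occi.compute.cores to $cores"
# The actual adjustment would be a request against the compute node, e.g.:
# curl -X PUT http://localhost:8080/compute/<uuid>/ ... (not executed here)
```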
 
 *Note:* This scenario mainly serves to get started with the OCCI API. Currently, there is no connector implementing the vertical adjustment as shown in this scenario.
 
 ## Tuning the Scenario
-As the adaptation only reacts on a Critical behavior, it may be interesting to tune the simulated monitoring results.
+As the adaptation only scales up on a Critical behavior, it may be interesting to tune the simulated monitoring results.
 Therefore, the following steps have to be performed:
 1. Stop the MartServer (CTRL-C)
 2. Navigate to ~/martserver-plugins