Mocci is a toolchain that uses the Open Cloud Computing Interface (OCCI) to enable model-driven orchestration of cloud deployments and their monitoring instruments.
Moreover, a runtime model manager is provided that grants access to gathered monitoring data.
The following instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
## OCCI-Studio and MoDMaCAO
[OCCI-Studio](https://github.com/occiware/OCCI-Studio) is an IDE providing many convenient tools to develop around OCCI. For example, it provides a graphical as well as a textual model editor. Moreover, it allows designing OCCI extensions and automatically generating code from them.
In addition to OCCI-Studio, the [Model-Driven Configuration Management of Cloud Applications with OCCI (MoDMaCAO)](https://github.com/occiware/MoDMaCAO) extension suite is needed.
[Documentation on how to setup and configure OCCI-Studio](doc/studio.md)
## MartServer
The [MartServer](https://github.com/occiware/MartServer) represents the OCCI interface to which requests are sent in order to build and monitor the cloud application.
[Documentation on how to setup and configure the MartServer](doc/api.md)
The MART Server implements the OCCI API used to orchestrate the cloud deployments.
It is the major component serving as the entry point for our application.
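For illustration, such a request can, for example, create a compute resource via the OCCI infrastructure extension. The following is only a minimal sketch, assuming the server listens on localhost:8080 and uses the default resource locations; the attribute value is a placeholder:
```
# Create a compute resource using the OCCI text rendering (values are placeholders)
curl -v -X POST http://localhost:8080/compute/ \
  -H 'Content-Type: text/occi' \
  -H 'Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"' \
  -H 'X-OCCI-Attribute: occi.core.title="example-vm"'
```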
In the following, it is described how a Docker container for the MART Server can be created, stored, loaded, and started.
## Creating a Docker Container
To allow for an easy use of the MART Server, a docker container embedding the server and all required plugins is provided.
However, to use specialized plugins, the docker image currently has to be recreated. Thus, a brief explanation of the single steps is given in the following:
1. Test the docker image: `sudo docker run -p 8080:8080 -p 22:22 mart-server`
2. Store the docker image: `sudo docker save mart-server > mart-server.tar`
3. To access the container, you can use an ssh connection: `ssh -i $key root@localhost`
To build this container, a fatjar of the MartServer is used. To use newer versions, please refer to the [documentation of the MartServer](https://github.com/occiware/MartServer/blob/master/doc/server.md) on how to create a docker container.
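Once the container is running, a quick check is to query the OCCI query interface; a minimal sketch, assuming the default port 8080:
```
# The query interface lists all kinds, mixins, and actions registered at the server
curl -v -X GET http://localhost:8080/-/ -H "Accept: application/json"
```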
## Loading a Docker Container
To load and start a previously stored Docker image of the MART Server, the following steps have to be performed:
1. Download/Navigate to the archive containing the docker image
2. Load the docker image: `docker load < mart-server.tar`
3. Start the image: `sudo docker run -p 8080:8080 -p 22:22 mart-server`
4. Alternatively, start it with a bash shell: `sudo docker run -p 8080:8080 -p 22:22 -i -t mart-server /bin/bash`
5. To access the container, you can use an ssh connection: `ssh -i $key root@localhost`
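To check that the image was loaded and the container is running, the standard Docker commands can be used:
```
# List locally available images; mart-server should appear after the load
sudo docker images
# Show running containers and the ports they expose
sudo docker ps
```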
## Configuring the MartServer to be used in OpenStack
[Documentation on how to setup and configure the MartServer for an OpenStack Cloud](doc/openstack.md)
# Setting up the MartServer to be used in OpenStack
To manage a cloud deployment using the MartServer, it has to be deployed on a virtual machine within the cloud that is connected to a management network.
TODO insert example picture.
## Configuring the MartServer in the Cloud
First you need to set up a Virtual Machine hosting the MartServer (a shell sketch for steps 2 and 3 follows the list):
1. Start a Virtual Machine (we used an Ubuntu 16.04 image)
2. Insert the OpenStack Controller IP in /etc/hosts. For example: 192.168.34.1 controller.
3. Install [ansible](https://docs.ansible.com/) on the machine running the MART Server.
4. Deploy either the Docker container or export the MartServer project
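A minimal shell sketch for steps 2 and 3; the controller IP is only an example and has to be replaced with the address of your OpenStack controller:
```
# Make the OpenStack controller resolvable from the MartServer VM (example IP)
echo "192.168.34.1 controller" | sudo tee -a /etc/hosts

# Install Ansible on the machine running the MART Server
sudo apt-get update
sudo apt-get install -y ansible
```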
Moreover, a management network is required over which the MartServer connects to spawned virtual machines (see the sketch after this list):
1. Create a network
2. Attach the MartServer VM to the network
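A sketch of these steps using the OpenStack CLI; the network name, subnet range, and server name are placeholders:
```
# Create the management network and a subnet for it (names and range are placeholders)
openstack network create management
openstack subnet create --network management --subnet-range 10.0.10.0/24 management-subnet

# Attach the VM hosting the MartServer to the management network
openstack server add network martserver-vm management
```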
TODO insert conceptual figure.
## Extension adjustments
When applying the proposed approach to a specific cloud, some extensions have to be configured, as they, for example, need access to the user name and its tenant.
Moreover, each cloud provider has different entry points for their API and may offer different kinds of computing capabilities.
### Configuration Tool Adjustments
To use configuration management tools, some adjustments have to be performed.
Currently, only ansible is supported.
Hereby, the path to the ansible roles on the MART server has to be defined, as well as the location of the playbook, the ansible user, and the location of the private key to be used for the configuration.
The settings themselves can be found in the ansible.properties file within martserver-plugins/org.modmacao.cm.jar.
An example of this configuration is shown in the following Listing:
```
ansible_rolespath = /home/ubuntu/roles
ansible_bin = /usr/bin/ansible-playbook
ansible_user = ubuntu
private_key_path = /home/ubuntu/.ssh/key.pem
```
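It is advisable to check that the configured locations actually exist on the MART server; the paths below are taken from the example above:
```
# Verify the ansible-playbook binary and the roles directory
which ansible-playbook
ls /home/ubuntu/roles
# The private key has to be readable by the configured ansible user only
chmod 600 /home/ubuntu/.ssh/key.pem
```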
### Provider specifics
In order to translate the REST requests to requests of the cloud API, the MART server requires a connector for the corresponding cloud.
In order to use a connector, the connector's jar has to be placed in the martserver-plugins folder before the server is started.
Currently, we only provide a prototypical connector for OpenStack. However, this connector also has to be configured.
The settings themselves can be found in the openstack.properties file within martserver-plugins/org.modmacao.openstack.connector.jar.
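The exact property names depend on the connector version; the following listing only sketches the kind of information that has to be provided, with illustrative key names:
```
# Illustrative keys only - check the openstack.properties shipped with the connector
endpoint = http://controller:5000/v3
username = myuser
password = secret
tenant = mytenant
default_image = ubuntu-16.04
default_flavor = m1.small
```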
Each VM possesses a flavor and an image. While the image describes the OS of the compute instance, the flavor describes its size.
Both attributes depend on the used cloud provider, as providers may, e.g., offer different kinds of OS. To adjust your model to these different offers, two approaches can be chosen:
#### Set default values in the connector
As already shown, a standard image and flavor can be configured within the OpenStack Connector.
This means, if no templates regarding its size and OS are attached to the compute instance, the default values are used.
Thus, this configuration is rather convenient, but limits the user to one kind of image and flavor for all VMs used.
#### Create a new OCCI Extension fitting to the provider
If these settings are not enough, an OCCI Extension representing the offer of the cloud provider has to be created.
An example of such an extension is given in martserver-plugins/org.modmacao.openstack.swe.runtime.jar.
In order to create your own extension, we recommend using OCCI-Studio, as it not only provides a Sirius designer for the Extension, but also allows the required code to be generated completely from the modeled Extension.
Then, the Extension only has to be packed and put into the martserver-plugins folder to be ready for use.
Please note: When creating a new Extension, it has to be registered within the IDE, model, and code.
### User Data Adjustments
User data describes a specific behaviour which shall be executed when starting a VM for the first time.
In the OCCI model, the user data is described as a Mixin attached to each compute resource and encoded as Base64.
The Base64-encoded string represents a cloud-init script.
It has to be configured in such a manner that hot-plugging of network interfaces is allowed in each VM.
The required configuration may differ according to the network management tool of the VM.
Furthermore, Python 2.7 has to be installed on the used VM.
If it is not already part of the image, it can also be installed via cloud-init at startup.
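A minimal sketch of such a user data script and how to encode it; the package list and the hot-plug handling are assumptions that depend on your image and its network management tool:
```
# Write an illustrative cloud-init script (contents are an assumption)
cat > user-data.yaml <<'EOF'
#cloud-config
packages:
  - python2.7
# Example for an ifupdown-based image: bring up hot-plugged interfaces (ens4 is a placeholder)
write_files:
  - path: /etc/network/interfaces.d/hotplug.cfg
    content: |
      allow-hotplug ens4
      iface ens4 inet dhcp
EOF

# Base64-encode the script so it can be set in the user data Mixin of the OCCI model
base64 -w0 user-data.yaml
```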
To check whether the installation and configuration of the IDE was successful, the model can be opened in the textual editor:
1. Right click the hadoopClusterNewExt.occic file
2. Choose Open with... -> OCCI Model Editor
3. If no problem occurs, all extensions are correctly loaded
### Sirius Visualization
To visualize the model in a graphical editor, it is most convenient to create a separate Sirius Modeling project. To this end, the following steps have to be performed: