Commit 6ef77af6 authored by Rita Sousa

Update README with NuvlaBox approach and other configuration files

parent f7bfdc8f
@@ -97,6 +97,5 @@ app/.idea/
app/*.txt
app/pidsToMonitor.txt
dataclay
app/fakeWorkers/you
app/pidsToMonitor.txt
[submodule "dataclay"]
path = dataclay
url = https://gitlab.bsc.es/elastic-h2020/elastic-sa/nfr-tool/dataclay.git
url = https://gitlab.bsc.es/elastic-h2020/elastic-sa/nfr-tool/dataclay
@@ -20,24 +20,26 @@ git submodule update
# SETUP
There are two options:
## 1. In the device/machine
To use this demo, a connection to dataClay is mandatory. Before establishing this connection, run the **setupForDataclay.sh** script to configure some necessary settings.
You can check usage by executing:
```
./setupForDataclay.sh -h
```
In this demo, **Global Resource Manager (GRM)** (https://gitlab.bsc.es/elastic-h2020/elastic-sa/nfr-tool/grm-global_resource_manager) must be running to create a fake ElasticSystem. Then, to have several fake Workers (Docker containers) available, you should run:
```
docker pull rita09sousa/fake_docker_workers:2.0
docker image tag rita09sousa/fake_docker_workers:2.0 fake_docker_workers
```
OR
```
cd app/fakeWorkers/
docker build -t fake_docker_workers .
```
Then, start some fake Workers with constant resource usage:
```
docker run -d --rm -it -e FAKE_WORKER=<NUM_FAKEWORKER> --name fakeworker<NUM_FAKEWORKER> fake_docker_workers
```
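If several fake Workers are needed, the `docker run` line above can be generated in a loop. This is a sketch, not part of the repository; the 1-to-6 ID range follows the check in fakeworker.py:

```shell
# Sketch: emit one `docker run` command per fake Worker ID (1..N, N <= 6,
# matching the ID check in fakeworker.py). Pipe the output to `sh` to run them.
start_fake_workers() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    echo docker run -d --rm -it -e "FAKE_WORKER=$i" --name "fakeworker$i" fake_docker_workers
    i=$((i + 1))
  done
}
```

For example, `start_fake_workers 3 | sh` would start fakeworker1 through fakeworker3.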
@@ -50,30 +52,73 @@ cd ..
```
to update some files with the necessary configurations to create the fake Workers.
**NOTE:** This demo uses NuvlaBox telemetry. This video https://bit.ly/3pyekR4 shows how to install the NuvlaBox on a Node (Edge device), and this one https://bit.ly/3cyJRPd shows how to use the NuvlaBox's Data Gateway, which provides the telemetry/resource metrics.
However, you only need to install NuvlaBox. Watch the first video or follow the steps below:
1. Log in to Nuvla.
2. In the left bar, select **Nuvlabox**, and then **+ Add**.
3. Fill in the name and description, and under INSTALLATION METHOD, choose Compose file bundle.
4. Click **create** and then follow the steps presented under Quick Installation.
Go to the nfrtool-time-and-energy/ directory.
Start dataClay, first destroying any dataClay instance that may still be active:
```
docker-compose -f master-dataclay.yml -f backend-dataclay.yml down -v --remove-orphans
```
If this Node is the **"master"** node, run:
```
docker-compose -f master-dataclay.yml up --build
```
If this Node is one of the **backend** nodes, execute:
```
docker-compose -f backend-dataclay.yml up --build
```
**NOTE:** dcinitializer should only start when all dataClay nodes are up!
Then start the Global Resource Manager (from the respective repositories), **wait for the dcinitializer to finish**, and start the NFR Tools on the different Nodes by executing:
```
docker-compose down -v
docker-compose up --build
```
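One way to honour the "wait for the dcinitializer to finish" step is a small polling helper. This is a sketch; the `dcinitializer` container name and the `docker inspect` probe are assumptions based on the compose file, so that line is left as a comment:

```shell
# Retry a command up to $1 times, one second apart; return 1 on timeout.
# Intended use: block until the dcinitializer container has finished.
wait_for() {
  max=$1; shift
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max" ] && return 1
    sleep 1
  done
}
# Example (assumes the compose service/container is named dcinitializer):
# wait_for 60 sh -c 'docker inspect -f "{{.State.Status}}" dcinitializer | grep -qx exited'
```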
## 2. Through Nuvla
Log in to Nuvla.
In the left bar, select Apps, and launch the **dataClay standalone** application.
1. Select the infrastructure service where you want to deploy dataClay (see **NOTE A**)
2. Modify the Environment variables needed:
| Environment variable | Value |
| --- | --- |
| HOSTNAME | < IP address of the device > |
| USER | NFRtoolUser |
| GIT_JAVA_MODELS_URLS | https://gitlab.bsc.es/elastic-h2020/elastic-sa/nfr-tool/dataclay |
| GIT_JAVA_MODELS_PATHS | model |
| JAVA_NAMESPACES | ElasticNFR |
3. Accept the license agreement
4. Launch
In the left bar, select Apps, and launch the **NFR Tool** application.
1. Select the infrastructure service where you want to deploy the NFR Tool (see **NOTE A**)
2. Modify the Environment variables needed:
| Environment variable | Value |
| --- | --- |
| HOSTNAME | < IP address of the device with the dataClay logic module > |
3. Launch
**NOTE A:** If you don't have an infrastructure service, watch this video https://bit.ly/3pyekR4
# Demo behavior
The NFR Tool searches for a fake Elastic System with the alias "system".
Then it pulls data from the Nuvla telemetry and the Docker API, evaluates resource consumption, and, when necessary, publishes NFRViolations to the "violations" queue (topic) for the GRM's evaluation.
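The evaluation step can be pictured with a minimal sketch; the function name, the averaging rule, and the 80% threshold are illustrative assumptions, not the NFR Tool's actual logic:

```python
# Hypothetical sketch of the violation check: average a window of CPU-usage
# samples (percent) pulled from telemetry and flag a violation when the
# average crosses a limit. The real tool would then publish an NFRViolation
# to the "violations" topic for the GRM.
def cpu_violation(samples, limit_pct=80.0):
    if not samples:
        return False  # no telemetry yet: nothing to report
    avg = sum(samples) / len(samples)
    return avg > limit_pct
```

For example, `cpu_violation([70.0, 95.0, 90.0])` flags a violation at the default 80% limit.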
# Acknowledgements
This work has been supported by the EU H2020 project ELASTIC, contract #825473.
@@ -3,7 +3,7 @@ FROM bscdataclay/client:dev20210312-alpine
ENV WORKING_DIR=/demo
ARG DC_SHARED_VOLUME=/srv/dataclay/shared
ARG DEFAULT_NAMESPACE=defaultNS
ARG DEFAULT_USER=xavier-rit
ARG USER=xavier-rit
ARG DEFAULT_PASS=defaultPass
ARG DEFAULT_STUBS_JAR=/demo/stubs.jar
ARG DEFAULT_STUBS_PATH=/demo/stubs
@@ -13,7 +13,7 @@ ENV DC_SHARED_VOLUME=${DC_SHARED_VOLUME} \
DATACLAYGLOBALCONFIG=${WORKING_DIR}/global.properties \
DATACLAYSESSIONCONFIG=${WORKING_DIR}/session.properties \
NAMESPACE=${DEFAULT_NAMESPACE} \
USER=${DEFAULT_USER} \
USER=${USER} \
PASS=${DEFAULT_PASS} \
STUBSPATH=${DEFAULT_STUBS_PATH} \
STUBS_JAR=${DEFAULT_STUBS_JAR}
......
Account=xavier-rit
Account=NFRtoolUser
Password=defaultPass
DataSets=defaultDS
DataSetForStore=defaultDS
......
@@ -5,10 +5,11 @@ ENV FAKE_WORKER 3
WORKDIR /dockerFakeWorkers/
COPY ./fakeworker.py .
RUN chmod +x ./fakeworker.py
ENTRYPOINT ["./fakeworker.py"]
CMD ["${FAKE_WORKER}"]
#ENTRYPOINT ["./fakeworker.py"]
#CMD ["${FAKE_WORKER}"]
CMD python3 fakeworker.py ${FAKE_WORKER}
########################## OLD AND HEAVY FAKEWORKERS ##########################
# FROM gcc:4.9
......
@@ -42,7 +42,7 @@ try:
print("Fakeworker ID wrong. It should be a number between 1 and 6.")
exit(1)
except ValueError:
print("Argument is not an int")
print(f"Argument {sys.argv[1]} is not an int")
exit(1)
if idFakeWorker == 1:
......
version: '3.5'
services:
dsjava:
image: "bscdataclay/dsjava:dev20210312-alpine"
ports:
- "2127:2127"
volumes:
- /opt/dataclay/storage:/dataclay/storage:rw
environment:
- DATASERVICE_HOST=${DATASERVICE_HOST:-192.168.60.68}
- DATASERVICE_NAME=${BACKEND_NAME:-DS2}
- DATASERVICE_JAVA_PORT_TCP=${DSJAVA_PORT:-2127}
- LOGICMODULE_PORT_TCP=${LOGICMODULE_PORT:-11034}
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
dspython:
image: "bscdataclay/dspython:dev20210312-alpine"
ports:
- "6867:6867"
volumes:
- /opt/dataclay/storage:/dataclay/storage:rw
depends_on:
- dsjava
environment:
- DATASERVICE_HOST=${DATASERVICE_HOST:-192.168.60.68}
- DATASERVICE_NAME=${BACKEND_NAME:-DS2}
- DATASERVICE_PYTHON_PORT_TCP=${DSPYTHON_PORT:-6867}
- LOGICMODULE_PORT_TCP=${LOGICMODULE_PORT:-11034}
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
version: '3.5'
volumes:
dataclay-init:
driver: local
networks:
default:
external:
name: nuvlabox-shared-network
services:
dsjava:
image: "bscdataclay/dsjava:alpine"
ports:
- "2127:2127"
environment:
- DATASERVICE_HOST=192.168.60.68
- DATASERVICE_NAME=DS1
- DATASERVICE_JAVA_PORT_TCP=2127
- LOGICMODULE_PORT_TCP=11034
- LOGICMODULE_HOST=192.168.60.18
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
\ No newline at end of file
version: '3.5'
volumes:
dataclay-init:
driver: local
networks:
default:
external:
name: nuvlabox-shared-network
services:
logicmodule:
image: "bscdataclay/logicmodule:alpine"
ports:
- "11034:11034"
environment:
- LOGICMODULE_PORT_TCP=11034
- LOGICMODULE_HOST=192.168.60.18
- DATACLAY_ADMIN_USER=admin
- DATACLAY_ADMIN_PASSWORD=admin
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
dsjava:
image: "bscdataclay/dsjava:alpine"
ports:
- "3127:3127"
environment:
- DATASERVICE_HOST=192.168.60.68
- DATASERVICE_NAME=DS2
- DATASERVICE_JAVA_PORT_TCP=3127
- LOGICMODULE_PORT_TCP=11034
- LOGICMODULE_HOST=192.168.60.18
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
@@ -17,8 +17,8 @@ services:
build: ./app
environment:
- LOGICMODULE_PORT_TCP=11034
- LOGICMODULE_HOST=192.168.60.18
- USER=NFRtoolUser
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
- USER=${USER:-NFRtoolUser}
- PASS=${PASS:-defaultPass}
- DATASET=${DATASET:-defaultDS}
- NAMESPACE=ElasticNFR
......
version: '3.5'
services:
dcinitializer:
image: "bscdataclay/initializer:dev20210312-alpine"
depends_on:
- logicmodule
volumes:
- /opt/dataclay/shared:/srv/dataclay/shared:rw
- /opt/dataclay/model:/model/:rw
environment:
- LOGICMODULE_PORT_TCP=${LOGICMODULE_PORT:-11034}
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
- USER=${USER:-NFRtoolUser}
- PASS=${PASS:-defaultPass}
- DATASET=${DATASET:-defaultDS}
- PYTHON_MODELS_PATH=$PYTHON_MODELS_PATH
- PYTHON_NAMESPACES=$PYTHON_NAMESPACES
- JAVA_MODELS_PATH=$JAVA_MODELS_PATH
- JAVA_NAMESPACES=${JAVA_NAMESPACES:-ElasticNFR}
- GIT_JAVA_MODELS_URLS=${GIT_JAVA_MODELS_URLS:-https://gitlab.bsc.es/elastic-h2020/elastic-sa/nfr-tool/dataclay}
- GIT_JAVA_MODELS_PATHS=${GIT_JAVA_MODELS_PATHS:-model}
- GIT_PYTHON_MODELS_URLS=$GIT_PYTHON_MODELS_URLS
- GIT_PYTHON_MODELS_PATHS=$GIT_PYTHON_MODELS_PATHS
- IMPORT_MODELS_FROM_EXTERNAL_DC_HOSTS=$IMPORT_MODELS_FROM_EXTERNAL_DC_HOSTS
- IMPORT_MODELS_FROM_EXTERNAL_DC_PORTS=$IMPORT_MODELS_FROM_EXTERNAL_DC_PORTS
- IMPORT_MODELS_FROM_EXTERNAL_DC_NAMESPACES=$IMPORT_MODELS_FROM_EXTERNAL_DC_NAMESPACES
healthcheck:
interval: 5s
retries: 10
test: [ "CMD-SHELL", "/dataclay-initializer/health_check.sh" ]
logicmodule:
image: "bscdataclay/logicmodule:dev20210312-alpine"
ports:
- "11034:11034"
volumes:
- /opt/dataclay/storage:/dataclay/storage:rw
environment:
- LOGICMODULE_PORT_TCP=${LOGICMODULE_PORT:-11034}
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
- DATACLAY_ADMIN_USER=admin
- DATACLAY_ADMIN_PASSWORD=admin
- EXPOSED_IP_FOR_CLIENT=$EXPOSED_IP_FOR_CLIENT
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
dsjava:
image: "bscdataclay/dsjava:dev20210312-alpine"
ports:
- "2127:2127"
depends_on:
- logicmodule
volumes:
- /opt/dataclay/storage:/dataclay/storage:rw
environment:
- DATASERVICE_HOST=${DATASERVICE_HOST:-192.168.60.68}
- DATASERVICE_NAME=${BACKEND_NAME:-DS1}
- DATASERVICE_JAVA_PORT_TCP=${DSJAVA_PORT:-2127}
- LOGICMODULE_PORT_TCP=${LOGICMODULE_PORT:-11034}
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
dspython:
image: "bscdataclay/dspython:dev20210312-alpine"
ports:
- "6867:6867"
depends_on:
- logicmodule
- dsjava
volumes:
- /opt/dataclay/storage:/dataclay/storage:rw
environment:
- DATASERVICE_HOST=${DATASERVICE_HOST:-192.168.60.68}
- DATASERVICE_NAME=${BACKEND_NAME:-DS1}
- DATASERVICE_PYTHON_PORT_TCP=${DSPYTHON_PORT:-6867}
- LOGICMODULE_PORT_TCP=${LOGICMODULE_PORT:-11034}
- LOGICMODULE_HOST=${LOGICMODULE_HOST:-192.168.60.28}
stop_grace_period: 5m
healthcheck:
interval: 5s
retries: 10
test: ["CMD-SHELL", "/home/dataclayusr/dataclay/health/health_check.sh"]
#!/bin/sh
if [ "$1" = '-h' ]
then
printf "If this Node has dataClay's logicModule, use:\n\t $0 LM <user>\nIf this Node is a dataClay Node backend, use:\n\t $0 <ipLogicModule> <user>\n"
exit 0
fi
if [ $# -ne 2 ]
then
printf "Illegal number of parameters supplied!\nRun $0 -h to check usage.\n"
exit 1
fi
# Search for Ethernet IP address
@@ -29,7 +29,7 @@ then
fi
# Insert IP address of machine in config files
sed -i -e "s/- DATASERVICE_HOST=.*/- DATASERVICE_HOST=$ip/" *.yml
sed -i -e "s/- DATASERVICE_HOST=.*/- DATASERVICE_HOST=\${DATASERVICE_HOST:-$ip}/" *.yml
if [ $1 != 'LM' ] && [ $1 != 'lm' ]
then
@@ -37,13 +37,13 @@ then
ip=$1
fi
echo "Account=$USER
echo "Account=$2
Password=defaultPass
DataSets=defaultDS
DataSetForStore=defaultDS
StubsClasspath=./stubs" | tee ./dataclay/cfgfiles/session.properties > /dev/null
echo "Account=$USER
echo "Account=$2
Password=defaultPass
DataSets=defaultDS
DataSetForStore=defaultDS
@@ -54,4 +54,7 @@ StubsClasspath=../dataclay/stubs" | tee ./app/cfgfiles/session.properties > /dev
echo "HOST=$ip
TCPPORT=11034" > ./dataclay/cfgfiles/client.properties
sed -i -e "s/- LOGICMODULE_HOST=.*/- LOGICMODULE_HOST=$ip/" *.yml
sed -i -e "s/- LOGICMODULE_HOST=.*/- LOGICMODULE_HOST=\${LOGICMODULE_HOST:-$ip}/" *.yml
sed -i -e "s/- USER=.*/- USER=\${USER:-$2}/" *.yml
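The sed lines above rewrite every compose file so a hard-coded value becomes an overridable `${VAR:-default}` expansion. The effect can be checked on a sample line in isolation (the IP here is a made-up example, not one of the repository's addresses):

```shell
# Demonstrate the substitution applied to the compose files: a fixed
# LOGICMODULE_HOST entry becomes a default that an env variable can override.
line='      - LOGICMODULE_HOST=192.168.60.18'
ip=10.0.0.5   # hypothetical machine IP, normally detected by the script
result=$(printf '%s\n' "$line" | sed -e "s/- LOGICMODULE_HOST=.*/- LOGICMODULE_HOST=\${LOGICMODULE_HOST:-$ip}/")
echo "$result"
```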