Issue with container size

Good morning everyone, I’m having issues with one of my production environments.
After a reboot the site didn’t come back up; I found in the logs that the problem was disk space.
This machine has 120 GB of disk and runs the following containers:

CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS                             PORTS                                                              NAMES
bfe60017f838   openremote/proxy:latest        "/entrypoint.sh run"     16 months ago   Up 2 days (healthy)                0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8883->8883/tcp   openremote_proxy_1
2fc09ccc342a   openremote/manager:latest      "/bin/sh -c 'java $O…"   16 months ago   Up 3 minutes (health: starting)    1883/tcp, 8080/tcp, 8443/tcp                                       openremote_manager_1
7d2a13629a6f   openremote/keycloak:latest     "/bin/sh -c '/opt/ke…"   16 months ago   Restarting (1) 3 seconds ago                                                                          openremote_keycloak_1
3dfdf6f68cdf   openremote/postgresql:latest   "docker-entrypoint.s…"   16 months ago   Restarting (1) 50 seconds ago                                                                         openremote_postgresql_1
fe2c54b356ed   openremote/keycloak:latest     "/bin/sh -c '/opt/ke…"   20 months ago   Up 21 seconds (health: starting)   8080/tcp, 8443/tcp                                                 openremoted_keycloak_1
7f3751577bb5   openremote/postgresql:latest   "docker-entrypoint.s…"   20 months ago   Restarting (1) 51 seconds ago

I checked the file sizes and found this log file (?) of 77 GB. How can I fix this?

 ls -l --block-size=M
total 77294M
-rw-r----- 1 root root 77294M Jul 29 07:27 2fc09ccc342a4358982771f91a5766b580fe510baa2add2102004030ba82c4af-json.log
drwx------ 2 root root     1M Mar 24  2023 checkpoints
-rw------- 1 root root     1M Jul 29 07:32 config.v2.json
-rw-r--r-- 1 root root     1M Jul 29 07:32 hostconfig.json
-rw-r--r-- 1 root root     1M Jul 29 07:27 hostname
-rw-r--r-- 1 root root     1M Jul 29 07:27 hosts
drwx--x--- 3 root root     1M Mar 24  2023 mounts
-rw-r--r-- 1 root root     1M Jul 29 07:27 resolv.conf
-rw-r--r-- 1 root root     1M Jul 29 07:27 resolv.conf.hash

The container in question is the manager.

Thank you for your help!
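
A minimal sketch of the usual mitigation, assuming the default json-file logging driver (which is what writes these container-ID-json.log files), a systemd-managed Docker daemon, and no existing /etc/docker/daemon.json; the 10m / 3-file rotation values are illustrative, not OpenRemote defaults:

# Confirm which container owns the file (the file name is the full container ID):
docker inspect --format '{{.Name}} logs to {{.LogPath}}' 2fc09ccc342a

# Reclaim the space without deleting a file the daemon still holds open:
truncate -s 0 \
  /var/lib/docker/containers/2fc09ccc342a4358982771f91a5766b580fe510baa2add2102004030ba82c4af/2fc09ccc342a4358982771f91a5766b580fe510baa2add2102004030ba82c4af-json.log

# Cap future growth for all containers via the daemon's default log options:
cat > /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
systemctl restart docker
# Note: the new limits only apply to containers created after the change, so the
# stack has to be recreated (e.g. docker-compose up -d --force-recreate).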

Update: after a snapshot restore I manually deleted that file.
Now the same kind of file is growing for the proxy container:

/var/lib/docker/containers/bfe60017f8384899139ce299d0ef295ca66094a89b1b74535f96450c0df0c59b# ls -l --block-size=M
total 2992M
-rw-r----- 1 root root 2992M Jul 29 15:13 bfe60017f8384899139ce299d0ef295ca66094a89b1b74535f96450c0df0c59b-json.log
drwx------ 2 root root    1M Mar 24  2023 checkpoints
-rw------- 1 root root    1M Jul 29 15:13 config.v2.json
-rw-r--r-- 1 root root    1M Jul 29 15:13 hostconfig.json
-rw-r--r-- 1 root root    1M Jul 29 14:50 hostname
-rw-r--r-- 1 root root    1M Jul 29 14:50 hosts
drwx--x--- 3 root root    1M Mar 24  2023 mounts
-rw-r--r-- 1 root root    1M Jul 29 14:50 resolv.conf
-rw-r--r-- 1 root root    1M Jul 29 14:50 resolv.conf.hash

The file is full of entries like the following:

{"log":"29/Jul/2024:15:16:26 +0000 http 127.0.0.1:50058 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:26.15443121Z"}
{"log":"29/Jul/2024:15:16:29 +0000 http 127.0.0.1:50072 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:29.241142941Z"}
{"log":"29/Jul/2024:15:16:32 +0000 http 127.0.0.1:50518 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:32.333968169Z"}
{"log":"29/Jul/2024:15:16:35 +0000 http 127.0.0.1:50520 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:35.431524708Z"}
{"log":"29/Jul/2024:15:16:38 +0000 http 127.0.0.1:50528 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:38.510722245Z"}
{"log":"29/Jul/2024:15:16:41 +0000 http 127.0.0.1:50540 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:41.589517686Z"}
{"log":"29/Jul/2024:15:16:44 +0000 http 127.0.0.1:53380 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:44.666830364Z"}
{"log":"29/Jul/2024:15:16:47 +0000 http 127.0.0.1:53396 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:47.752710049Z"}
{"log":"29/Jul/2024:15:16:50 +0000 http 127.0.0.1:53398 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:50.832424318Z"}
{"log":"29/Jul/2024:15:16:53 +0000 http 127.0.0.1:35188 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:53.905501043Z"}
{"log":"29/Jul/2024:15:16:56 +0000 http 127.0.0.1:35202 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:16:56.982445265Z"}
{"log":"29/Jul/2024:15:17:00 +0000 http 127.0.0.1:35210 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:00.062099119Z"}
{"log":"29/Jul/2024:15:17:03 +0000 http 127.0.0.1:33014 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:03.143624976Z"}
{"log":"29/Jul/2024:15:17:06 +0000 http 127.0.0.1:33016 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:06.238741064Z"}
{"log":"29/Jul/2024:15:17:09 +0000 http 127.0.0.1:33024 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:09.329004562Z"}
{"log":"29/Jul/2024:15:17:12 +0000 http 127.0.0.1:42782 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:12.443470632Z"}
{"log":"29/Jul/2024:15:17:15 +0000 http 127.0.0.1:42796 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:15.52656722Z"}
{"log":"29/Jul/2024:15:17:18 +0000 http 127.0.0.1:42802 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:18.609003915Z"}
{"log":"29/Jul/2024:15:17:21 +0000 http 127.0.0.1:42808 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:21.707147444Z"}
{"log":"29/Jul/2024:15:17:24 +0000 http 127.0.0.1:37388 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:24.782215474Z"}
{"log":"29/Jul/2024:15:17:27 +0000 https~ 34.246.131.0:14153 manager 0/0/1/2/3 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:27.861243947Z"}
{"log":"29/Jul/2024:15:17:27 +0000 http 127.0.0.1:37398 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:27.867514436Z"}
{"log":"29/Jul/2024:15:17:27 +0000 http 34.246.131.0:11341 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET /manager HTTP/1.1\" 302 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:27.901494748Z"}
{"log":"29/Jul/2024:15:17:27 +0000 https~ 34.246.131.0:14153 manager 0/0/0/1/1 \"GET /manager HTTP/1.1\" 302 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:27.923232445Z"}
{"log":"29/Jul/2024:15:17:27 +0000 https~ 34.246.131.0:14153 manager 0/0/0/1/1 \"GET /manager/ HTTP/1.1\" 200 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:27.945232248Z"}
{"log":"29/Jul/2024:15:17:30 +0000 http 127.0.0.1:37410 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:30.953412638Z"}
{"log":"29/Jul/2024:15:17:34 +0000 http 127.0.0.1:38712 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:34.032878291Z"}
{"log":"29/Jul/2024:15:17:37 +0000 http 127.0.0.1:38724 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:37.112599906Z"}
{"log":"29/Jul/2024:15:17:38 +0000 https~ 34.246.131.0:42116 manager 0/0/0/8/8 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:38.379254738Z"}
{"log":"29/Jul/2024:15:17:38 +0000 http 34.246.131.0:46465 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET /manager HTTP/1.1\" 302 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:38.418359188Z"}
{"log":"29/Jul/2024:15:17:38 +0000 https~ 34.246.131.0:42116 manager 0/0/0/0/0 \"GET /manager HTTP/1.1\" 302 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:38.441766747Z"}
{"log":"29/Jul/2024:15:17:38 +0000 https~ 34.246.131.0:42116 manager 0/0/0/5/5 \"GET /manager/ HTTP/1.1\" 200 6/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:38.46866882Z"}
{"log":"29/Jul/2024:15:17:40 +0000 http 127.0.0.1:38738 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:40.199538178Z"}
{"log":"29/Jul/2024:15:17:43 +0000 http 127.0.0.1:53894 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:43.318877093Z"}
{"log":"29/Jul/2024:15:17:46 +0000 http 127.0.0.1:53904 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:46.404367732Z"}
{"log":"29/Jul/2024:15:17:49 +0000 http 127.0.0.1:53910 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:49.501264717Z"}
{"log":"29/Jul/2024:15:17:52 +0000 http 127.0.0.1:55920 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:52.577090448Z"}
{"log":"29/Jul/2024:15:17:55 +0000 http 127.0.0.1:55926 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:55.652792702Z"}
{"log":"29/Jul/2024:15:17:58 +0000 http 127.0.0.1:55940 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:17:58.747411937Z"}
{"log":"29/Jul/2024:15:18:01 +0000 http 127.0.0.1:55944 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:01.838803719Z"}
{"log":"29/Jul/2024:15:18:04 +0000 http 127.0.0.1:35634 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:04.924525551Z"}
{"log":"29/Jul/2024:15:18:08 +0000 http 127.0.0.1:35638 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:08.005899219Z"}
{"log":"29/Jul/2024:15:18:11 +0000 http 127.0.0.1:35652 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:11.079536625Z"}
{"log":"29/Jul/2024:15:18:14 +0000 http 127.0.0.1:58948 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:14.155598592Z"}
{"log":"29/Jul/2024:15:18:17 +0000 http 127.0.0.1:58962 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:17.245717958Z"}
{"log":"29/Jul/2024:15:18:20 +0000 http 127.0.0.1:58966 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:20.324409338Z"}
{"log":"29/Jul/2024:15:18:23 +0000 http 127.0.0.1:35472 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:23.405300492Z"}
{"log":"29/Jul/2024:15:18:26 +0000 http 127.0.0.1:35486 \u003cNOSRV\u003e 0/-1/-1/-1/0 \"GET / HTTP/1.1\" 302 5/1/0/0/0 0/0\n","stream":"stdout","time":"2024-07-29T15:18:26.491679165Z"}

Any ideas?

Next update: the file didn’t grow much in the last 20 minutes.
Unfortunately I didn’t save the content of the 80 GB file before deleting it, but I’d like to know what could have happened and what is stored in this file, because it seems strange to me that a raw text file can become so large.

I’ll post further updates while I keep an eye on the size.
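
For reference, the container-ID-json.log file is where Docker’s default json-file logging driver stores everything the container writes to stdout/stderr, wrapped in the JSON records shown above; a small read-only sketch to confirm that and to keep an eye on growth:

# The logging driver in use and the file it writes to:
docker inspect --format '{{.HostConfig.LogConfig.Type}} -> {{.LogPath}}' openremote_proxy_1

# `docker logs` reads back from that same file:
docker logs --tail 20 openremote_proxy_1

# Which container logs are growing (sizes only, nothing is modified):
du -h /var/lib/docker/containers/*/*-json.log | sort -h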

Pinging @Rich here.

Btw, I imagine that you’ve removed the duplicate containers by now, right?
Also worth sharing more information on your deployment and its configuration. :wink:

Hi martin, thanks for your answer
I absolutely didn’t touch anything :joy:

When it first happened I tried a few things, like restarting, pulling the newer version, and I also did a prune.
Well, that deleted everything in my instance, which wasn’t great, so I rolled back.

The configuration is just the custom deployment from the guide; I only added some images/maps etc.
Would you like to see the yml / the manager_config?

Hi martin, quick follow-up:
My disk usage increased by 0.6% in 2 days. The disk is 120 GB, so it grew roughly 360 MB per day.

I still have no idea why I have duplicate containers; here is my yml:

# OpenRemote v3
#
# Profile that runs the stack by default on https://localhost using a self-signed SSL certificate,
# but optionally on https://$OR_HOSTNAME with an auto generated SSL certificate from Letsencrypt.
#
# It is configured to use the AWS logging driver.
#
version: '2.4'

volumes:
  proxy-data:
  temp-data:
  postgresql-data:
#  btmesh-data:

services:

  proxy:
    image: openremote/proxy:${PROXY_VERSION:-latest}
    restart: always
    depends_on:
      manager:
        condition: service_healthy
    ports:
      - "80:80"
      - "${OR_SSL_PORT:-443}:443"
      - "8883:8883"
    volumes:
      - proxy-data:/deployment
      
    environment:
      LE_EMAIL: ${OR_EMAIL_ADMIN:-}
      DOMAINNAME: ${OR_HOSTNAME:-mydomain.com}
      DOMAINNAMES: ${OR_ADDITIONAL_HOSTNAMES:-}
      # USE A CUSTOM PROXY CONFIG - COPY FROM https://raw.githubusercontent.com/openremote/proxy/main/haproxy.cfg
      #HAPROXY_CONFIG: '/data/proxy/haproxy.cfg'

  postgresql:
    restart: always
    image: openremote/postgresql:${POSTGRESQL_VERSION:-latest}
    volumes:
      - postgresql-data:/var/lib/postgresql/data
      - temp-data:/tmp

  keycloak:
    restart: always
    image: openremote/keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      postgresql:
        condition: service_healthy
    volumes:
      - ./deployment:/deployment
    environment:
      KEYCLOAK_ADMIN_PASSWORD: ${OR_ADMIN_PASSWORD:-secret}
      KC_HOSTNAME: ${OR_HOSTNAME:-mydomain.com}
      KC_HOSTNAME_PORT: ${OR_SSL_PORT:--1}


  manager:
#    privileged: true
    restart: always
    image: openremote/manager:${MANAGER_VERSION:-latest}
    depends_on:
      keycloak:
        condition: service_healthy
    environment:
      OR_SETUP_TYPE:
      OR_ADMIN_PASSWORD: 
      OR_SETUP_RUN_ON_RESTART:
      OR_EMAIL_HOST: ${OR_EMAIL_HOST:-mymail}
      OR_EMAIL_USER: ${OR_EMAIL_USER:-mymail}
      OR_EMAIL_PASSWORD: ${OR_EMAIL_PASSWORD:-mypass}
      OR_EMAIL_PORT: ${OR_EMAIL_PORT:-587}
      OR_EMAIL_TLS: ${OR_EMAIL_TLS:-STARTTLS}
      OR_EMAIL_X_HEADERS:
      OR_EMAIL_FROM: ${OR_EMAIL_FROM:-mymail}
      OR_EMAIL_ADMIN: ${OR_EMAIL_ADMIN:-mymail}
      OR_HOSTNAME: ${OR_HOSTNAME:-mydomain.com}
      OR_ADDITIONAL_HOSTNAMES: ${OR_ADDITIONAL_HOSTNAMES:-}
      OR_SSL_PORT: ${OR_SSL_PORT:--1}
      OR_DEV_MODE: ${OR_DEV_MODE:-false}

      # The following variables will configure the demo
      OR_FORECAST_SOLAR_API_KEY:
      OR_OPEN_WEATHER_API_APP_ID:
      OR_SETUP_IMPORT_DEMO_AGENT_KNX:
      OR_SETUP_IMPORT_DEMO_AGENT_VELBUS:
    volumes:
      - temp-data:/tmp
      - ./deployment:/deployment
#      - /var/run/dbus:/var/run/dbus
#      # Bluetooth mesh volume
#      - btmesh-data:/btmesh
#   devices:
#     - /dev/ttyACM0:/dev/ttyS0

Hey,

Sorry this one slipped through my email pile :wink:

Looks like something is making lots of strange calls to the proxy on your loopback interface (so something else running on your host). You can edit the haproxy.cfg to change HAProxy’s logging (volume map a custom haproxy.cfg over the built-in one).

I would also look into what is making all those calls.
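
A sketch of what that could look like, building on the commented-out HAPROXY_CONFIG hint already present in the compose file above; the container-side mount path is taken from that comment and may need adjusting for your proxy image version:

# Fetch the stock config referenced in the compose file comment and tune its
# logging directives locally:
curl -fsSL https://raw.githubusercontent.com/openremote/proxy/main/haproxy.cfg \
  -o deployment/haproxy.cfg

# Then, in the proxy service of docker-compose.yml (YAML shown here as comments):
#   volumes:
#     - ./deployment/haproxy.cfg:/data/proxy/haproxy.cfg
#   environment:
#     HAPROXY_CONFIG: '/data/proxy/haproxy.cfg'

# Recreate the proxy so it picks up the custom config:
docker-compose up -d --force-recreate proxy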

Hi Rich, don’t worry, for now it seems better.
On 30 July disk usage was at 31.43%; now it’s at 50%. It’s still growing, but more slowly, and something gets cleaned up by my weekly scheduled reboot.

I still have these 2 duplicate containers and I don’t really know where they come from - as you can see from my yml above, it should be fine.

But the last time I deleted one of them I couldn’t log in anymore and had to restore a backup. Unfortunately I’m not skilled in Docker; how can I tell which is the correct postgresql container and which is the correct keycloak?

The prefix on the container names is key:

Looks like at some point you started the dev-testing profile with project name openremoted (i.e. you used -p openremoted).

You should be able to remove this by doing:

docker compose -p openremoted -f profile/dev-testing.yml down
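
Before removing anything, you can check which project each container belongs to from the labels Compose sets on every container it creates; the ones to keep should report openremote, the strays openremoted. A small sketch using the IDs from the docker ps output above:

docker inspect --format '{{.Name}}: project={{index .Config.Labels "com.docker.compose.project"}}' \
  2fc09ccc342a 7d2a13629a6f 3dfdf6f68cdf fe2c54b356ed 7f3751577bb5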

Thanks for the reply. I know nothing about the dev-testing profile, so I have no idea how that appeared :sweat_smile:

I tried your command but I get the following (note that I still use docker-compose; I’m not sure what the difference is):

docker compose -p openremoted -f profile/dev-testing.yml down

unknown shorthand flag: 'p' in -p
See 'docker --help'.

If I do it with docker-compose:

 docker-compose -p openremoted -f profile/dev-testing.yml down
ERROR: .FileNotFoundError: [Errno 2] No such file or directory: './profile/dev-testing.yml'
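
For context: docker compose is the newer Compose V2 CLI plugin, while docker-compose is the standalone V1 binary; the subcommands and flags are largely the same, but the plugin has to be installed for docker compose to work at all. A quick check of which one a host actually has:

docker compose version 2>/dev/null || echo "Compose V2 plugin not installed"
docker-compose version 2>/dev/null || echo "standalone docker-compose (V1) not installed"

The FileNotFoundError just means profile/dev-testing.yml isn’t present on this host; since the duplicate services also exist in the local docker-compose.yml, tearing the stray project down by name alone (as suggested further down the thread) works with either binary.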

Here is my tree, if needed:

root@openremote:~# ls
deployment docker-compose.yml node_modules package-lock.json package.json
root@openremote:~# tree deployment/
deployment/
├── keycloak
│   └── themes
│       └── openremote
│           ├── account
│           │   ├── account.ftl
│           │   ├── applications.ftl
│           │   ├── federatedIdentity.ftl
│           │   ├── log.ftl
│           │   ├── password.ftl
│           │   ├── resources
│           │   │   ├── css
│           │   │   │   ├── MaterialIcons-Regular.eot
│           │   │   │   ├── MaterialIcons-Regular.ijmap
│           │   │   │   ├── MaterialIcons-Regular.svg
│           │   │   │   ├── MaterialIcons-Regular.ttf
│           │   │   │   ├── MaterialIcons-Regular.woff
│           │   │   │   ├── MaterialIcons-Regular.woff2
│           │   │   │   ├── materialize.min.css
│           │   │   │   └── styles.css
│           │   │   ├── img
│           │   │   │   ├── favicon.png
│           │   │   │   └── logo.png
│           │   │   └── js
│           │   │       └── materialize.min.js
│           │   ├── sessions.ftl
│           │   ├── template.ftl
│           │   ├── theme.properties
│           │   └── totp.ftl
│           ├── email
│           │   ├── html
│           │   │   └── password-reset.ftl
│           │   └── theme.properties
│           └── login
│               ├── error.ftl
│               ├── login-reset-password.ftl
│               ├── login-update-password.ftl
│               ├── login.ftl
│               ├── messages
│               │   └── messages_en.properties
│               ├── register.ftl
│               ├── resources
│               │   ├── css
│               │   │   ├── MaterialIcons-Regular.eot
│               │   │   ├── MaterialIcons-Regular.ijmap
│               │   │   ├── MaterialIcons-Regular.svg
│               │   │   ├── MaterialIcons-Regular.ttf
│               │   │   ├── MaterialIcons-Regular.woff
│               │   │   ├── MaterialIcons-Regular.woff2
│               │   │   ├── materialize.min.css
│               │   │   └── styles.css
│               │   ├── img
│               │   │   ├── favicon.png
│               │   │   └── logo.png
│               │   └── js
│               │       └── materialize.min.js
│               ├── template.ftl
│               └── theme.properties
├── manager
│   ├── app
│   │   ├── images
│   │   │   ├── realm1
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   ├── default
│   │   │   │   └── logo.png
│   │   │   ├── demo
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   ├── realm2
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   ├── favicon.png
│   │   │   ├── logoMobile.png
│   │   │   ├── logo.png
│   │   │   ├── realm3
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   ├── realm4
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   ├── realm5
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   ├── realm6
│   │   │   │   ├── favicon.png
│   │   │   │   ├── logo.png
│   │   │   │   └── logoMobile.png
│   │   │   └── realm7
│   │   │       ├── favicon.png
│   │   │       ├── logo.png
│   │   │       └── logoMobile.png
│   │   └── manager_config.json
│   ├── consoleappconfig
│   │   ├── realm1.json
│   │   ├── realm2.json
│   │   ├── master.json
│   │   ├── realm3.json
│   │   ├── realm4.json
│   │   ├── realm5.json
│   │   └── realm6.json
│   └── keycloak.json
├── map
│   ├── italy.mbtiles
│   ├── mapdata.mbtiles
│   └── mapsettings.json.save
├── openremote.log.0
├── openremote.log.0.lck
├── openremote.log.1
├── openremote.log.2
├── openremote.log.3
├── openremote.log.4
├── openremote.log.5
├── openremote.log.6
├── openremote.log.7
├── openremote.log.8
└── openremote.log.9

30 directories, 92 files

docker compose -p openremoted down

Thanks Rich,
I did it with docker-compose (I suppose I don’t have the docker compose plugin?).
I also tested with a reboot and the duplicate containers don’t come back.
Thank you!