Deploying OpenRemote on a GCloud VM

Everything works fine locally, but when I deploy OpenRemote on the VM I run into some issues:

  • I am able to see the login page
  • After entering the username [admin] and password [secret], some API calls fail.
Request URL: https://31.221.220.169/api/master/console/register
Request Method: POST
Status Code: 403 Forbidden

Error: Origin not allowed

I am pasting my docker-compose.yml file here for reference.

# OpenRemote v3
#
# Profile that runs the stack by default on https://localhost using a self-signed SSL certificate,
# but optionally on https://$OR_HOSTNAME with an auto generated SSL certificate from Letsencrypt.
#
# It is configured to use the AWS logging driver.
#
version: '2.4'

services:

  proxy:
    image: openremote/proxy:${PROXY_VERSION:-latest}
    restart: always
    depends_on:
      manager:
        condition: service_healthy
    ports:
      - "80:80"
      - "${OR_SSL_PORT:-443}:443"
      - "8883:8883"

  postgresql:
    restart: always
    image: openremote/postgresql:${POSTGRESQL_VERSION:-latest}

  keycloak:
    restart: always
    image: openremote/keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      postgresql:
        condition: service_healthy
    environment:
      KEYCLOAK_PASSWORD: ${OR_ADMIN_PASSWORD:-secret}
      KEYCLOAK_FRONTEND_URL: https://${OR_HOSTNAME:-localhost}/auth
      # Use the following if OR_SSL_PORT is not the default 443
      # KEYCLOAK_FRONTEND_URL: https://${OR_HOSTNAME:-localhost}:${OR_SSL_PORT:-443}/auth

  manager:
#    privileged: true
    restart: always
    image: openremote/manager:${MANAGER_VERSION:-latest}
    depends_on:
      keycloak:
        condition: service_healthy
    environment:
      OR_DEV_MODE: ${OR_DEV_MODE:-false}
    volumes:
      - ./deployment:/deployment
#      - /var/run/dbus:/var/run/dbus
#      - btmesh-data:/btmesh
#   devices:
#     - /dev/ttyACM0:/dev/ttyS0

Apart from this, I am seeing a WebSocket error in the browser console:

WebSocket connection to 'wss://31.221.220.169/websocket/events?Realm=master&Authorization=Bearer%20eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJ6ajNwOTlySEFUc3RyYXNud2r0PpZRlhzTb0KjKrVagLpmuNqg1EMhTeqvN1EsfmvcsRhpqa46Dbzr3p5soB1Y38ZAY1LmFerlNDlrK_6WoYL_u22lt3XL9Gys-EVk1Ut2-YIlQBz_BDI3-TRt9dGci_CR2faf3_XIasPE8rBrm-7zCsWV8vBQoQkKjSoLjV1uEW7A8sUP8SThOz-ZabQcFh-K22SAMx82jiF30_iFQ2HFxa7_Rs34lelAKug' failed: 

Hi,

You need to specify the OR_HOSTNAME environment variable for the manager service and KEYCLOAK_FRONTEND_URL for the keycloak service.

Also set the DOMAINNAME environment variable for the proxy service (if you want automatic SSL certificate generation rather than a self-signed SSL certificate).
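For example, on the VM you could export the variables before bringing the stack up (the IP below is the one from your question; substitute your own hostname or DNS name):

```shell
# The compose file substitutes ${OR_HOSTNAME:-localhost}; when the
# variable is unset, everything is configured for "localhost" and
# requests from the VM's public IP are rejected with 403 "Origin not allowed".
export OR_HOSTNAME=31.221.220.169   # or your DNS name
export OR_ADMIN_PASSWORD=secret     # optional override

# Verify what compose will substitute for the Keycloak frontend URL:
echo "https://${OR_HOSTNAME:-localhost}/auth"

# Then recreate the containers so the new values take effect:
#   docker-compose down && docker-compose up -d
```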

Please refer to the default docker-compose.yml and/or refer to the deploy.yml which provides some explanation of the different environment variables.

# OpenRemote v3
#
# Profile that runs the stack by default on https://localhost using a self-signed SSL certificate,
# but optionally on https://$OR_HOSTNAME with an auto generated SSL certificate from Letsencrypt.
#
# It is configured to use the AWS logging driver.
#
version: '2.4'

volumes:
  proxy-data:
  temp-data:
  postgresql-data:
  btmesh-data:
  deployment:

services:

  proxy:
    image: openremote/proxy:${PROXY_VERSION:-latest}
    restart: always
    depends_on:
      manager:
        condition: service_healthy
    ports:
      - "80:80"
      - "${OR_SSL_PORT:-443}:443"
      - "8883:8883"
    volumes:
      - proxy-data:/deployment
    environment:
      LE_EMAIL: ${OR_EMAIL_ADMIN:-}
      DOMAINNAME: ${OR_HOSTNAME:-localhost}
      DOMAINNAMES: ${OR_ADDITIONAL_HOSTNAMES:-}
      # USE A CUSTOM PROXY CONFIG - COPY FROM https://github.com/openremote/proxy/blob/main/haproxy.cfg
      #HAPROXY_CONFIG: '/data/proxy/haproxy.cfg'

  postgresql:
    restart: always
    image: openremote/postgresql:${POSTGRESQL_VERSION:-latest}
    volumes:
      - postgresql-data:/var/lib/postgresql/data
      - temp-data:/tmp

  keycloak:
    restart: always
    image: openremote/keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      postgresql:
        condition: service_healthy
    volumes:
      - ./deployment:/deployment
    environment:
      KEYCLOAK_PASSWORD: ${OR_ADMIN_PASSWORD:-secret}
      KEYCLOAK_FRONTEND_URL: https://${OR_HOSTNAME:-localhost}/auth
      # Use the following if OR_SSL_PORT is not the default 443
      #KEYCLOAK_FRONTEND_URL: https://${OR_HOSTNAME:-localhost}:${OR_SSL_PORT:-443}/auth

  manager:
#    privileged: true
    restart: always
    image: openremote/manager:${MANAGER_VERSION:-latest}
    depends_on:
      keycloak:
        condition: service_healthy
    environment:
      OR_SETUP_TYPE:
      OR_ADMIN_PASSWORD:
      OR_SETUP_RUN_ON_RESTART:
      OR_EMAIL_HOST:
      OR_EMAIL_USER:
      OR_EMAIL_PASSWORD:
      OR_EMAIL_X_HEADERS:
      OR_EMAIL_FROM:
      OR_EMAIL_ADMIN:
      OR_HOSTNAME: ${OR_HOSTNAME:-localhost}
      OR_ADDITIONAL_HOSTNAMES: ${OR_ADDITIONAL_HOSTNAMES:-}
      OR_SSL_PORT: ${OR_SSL_PORT:-443}
      OR_DEV_MODE: ${OR_DEV_MODE:-false}
      KEYCLOAK_FRONTEND_URL: https://${OR_HOSTNAME:-localhost}/auth
      # Use the following if OR_SSL_PORT is not the default 443
      #KEYCLOAK_FRONTEND_URL: https://${OR_HOSTNAME:-localhost}:${OR_SSL_PORT:-443}/auth

      # The following variables will configure the demo
      OR_FORECAST_SOLAR_API_KEY:
      OR_OPEN_WEATHER_API_APP_ID:
      OR_SETUP_IMPORT_DEMO_AGENT_KNX:
      OR_SETUP_IMPORT_DEMO_AGENT_VELBUS:
    volumes:
      - ./deployment:/deployment
#      - /var/run/dbus:/var/run/dbus
#      - btmesh-data:/btmesh
#   devices:
#     - /dev/ttyACM0:/dev/ttyS0

I updated it but at first had no success. The issue mentioned above has since been resolved, but now the map API returns 204 No Content.

Manager container log on local:

org.openremote.manager.map.MapService : Starting map service with tile data: /deployment/map/mapdata.mbtiles

Manager container log on the VM [GCP]:

org.openremote.manager.map.MapService : Map meta data could not be loaded, map functionality will not work

@rich, should I buy the maptiles for production use?

2022-04-29 15:08:47.048  INFO    [main                          ] org.openremote.manager.map.MapService    : Starting map service with tile data: /deployment/map/mapdata.mbtiles
2022-04-29 15:08:47.136  SEVERE  [main                          ] org.openremote.manager.map.MapService    : Failed to get metadata from mbtiles DB
org.sqlite.SQLiteException: [SQLITE_NOTADB]  File opened that is not a database file (file is not a database)
        at org.sqlite.core.DB.newSQLException(DB.java:1012)
        at org.sqlite.core.DB.newSQLException(DB.java:1024)
        at org.sqlite.core.DB.throwex(DB.java:989)
        at org.sqlite.core.NativeDB.prepare_utf8(Native Method)
        at org.sqlite.core.NativeDB.prepare(NativeDB.java:134)
        at org.sqlite.core.DB.prepare(DB.java:257)
        at org.sqlite.core.CorePreparedStatement.<init>(CorePreparedStatement.java:45)
        at org.sqlite.jdbc3.JDBC3PreparedStatement.<init>(JDBC3PreparedStatement.java:30)
        at org.sqlite.jdbc4.JDBC4PreparedStatement.<init>(JDBC4PreparedStatement.java:25)
        at org.sqlite.jdbc4.JDBC4Connection.prepareStatement(JDBC4Connection.java:35)
        at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:241)
        at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:205)
        at org.openremote.manager.map.MapService.getMetadata(MapService.java:88)
        at org.openremote.manager.map.MapService.start(MapService.java:235)
        at org.openremote.container.Container.start(Container.java:168)
        at org.openremote.container.Container.startBackground(Container.java:209)
        at org.openremote.manager.Main.main(Main.java:31)
2022-04-29 15:08:47.137  WARNING [main                          ] org.openremote.manager.map.MapService    : Map meta data could not be loaded, map functionality will not work
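For anyone hitting this SQLITE_NOTADB error: a quick way to check what the manager actually opened (a generic check, not an OpenRemote tool) is to look at the file's first bytes. A valid MBTiles file is a SQLite database, so its first 15 bytes are the magic string "SQLite format 3":

```shell
# Path taken from the manager log above; adjust to your layout.
FILE=${FILE:-deployment/map/mapdata.mbtiles}

if [ -f "$FILE" ]; then
  # A real MBTiles/SQLite file prints "SQLite format 3" here;
  # anything else explains the SQLITE_NOTADB error.
  head -c 15 "$FILE"; echo
  ls -lh "$FILE"   # a broken/placeholder file is usually far smaller than the real tiles
else
  echo "not found: $FILE"
fi
```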

@rich, I found the solution.

Maptiles won’t work if we commit them using Git LFS.

GitHub blocks pushes that exceed 100 MB.
https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-large-files-on-github

Generally, the maptiles files we use are around 500-700 MB [it depends on the area]. For large files we have to use Git LFS [Large File Storage], and if we use Git LFS the OpenRemote map won’t work; it gives the error File opened that is not a database file (file is not a database).
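This happens because a checkout made without git-lfs leaves a small text pointer file in place of the real database. A quick check whether the file on the VM is such a pointer (path as in the logs above; a sketch, not an OpenRemote tool):

```shell
# An un-fetched Git LFS checkout leaves a small text pointer whose first
# line names the LFS spec instead of containing actual tile data.
FILE=${FILE:-deployment/map/mapdata.mbtiles}

if [ -f "$FILE" ] && head -n 1 "$FILE" | grep -q '^version https://git-lfs'; then
  echo "LFS pointer - run 'git lfs install && git lfs pull' in the checkout to fetch the real file"
elif [ -f "$FILE" ]; then
  echo "looks like real data ($(wc -c < "$FILE") bytes)"
else
  echo "not found: $FILE"
fi
```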


We don’t store mbtiles files in repos, as repos aren’t designed for large binary files. We manage these files outside the docker images as well and volume-map them into the manager container.

I got it.
But this should be mentioned in the troubleshooting section of the documentation.
It will save a lot of time for other developers.

We are open source and PRs improving documentation are most welcome.
