Publishing OpenRemote to a server using PuTTY

Hello, I am trying to publish OpenRemote to a server so that other users will be able to access it via its IP address.

I followed the same steps I used to install OpenRemote on my local machine, and I was able to access it through the server's IP address.

However, the login page looked different than usual.

And on the assets page, I was unable to add new assets. The more-options tab also does not have as many selections as the one on my localhost.

There were no errors during 'docker-compose pull' or 'docker compose -p openremote up'. The only part I had changed in the .yml file was OR_HOSTNAME:, set to my IP address under the manager service's environment section.

I am quite lost on what to change in order to get the same 'version' as the one on my machine, as I would like to be able to add assets etc.

Also, how should I go about it if I want to have my own Docker images with my own assets added to them? From my understanding, I have to push the image to a Docker registry, pull the image on the machine I want, then create a container based on that image? Please correct my understanding if I am mistaken. I apologise for my lack of Docker knowledge, as I am very inexperienced.
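To put that understanding into commands: the general build/push/pull workflow looks roughly like the sketch below. The registry address, image name, and tag here are placeholders I made up, not real OpenRemote names.

```shell
# Build an image from a Dockerfile in the current directory
# (the Dockerfile would bake in the custom deployment files)
docker build -t myregistry.example.com/my-openremote-deployment:1.0 .

# Push it to the registry (after authenticating with 'docker login')
docker push myregistry.example.com/my-openremote-deployment:1.0

# On the target machine: pull the image and start a container from it
docker pull myregistry.example.com/my-openremote-deployment:1.0
docker run -d --name my-deployment myregistry.example.com/my-openremote-deployment:1.0
```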

Thank you for your patience and help.


Hi,

That login page is a fallback shown when the manager UI cannot reach Keycloak, which tells me that you have an issue with your container configuration. You will see OR_HOSTNAME referenced multiple times in the default docker-compose.yml file; you should start the stack providing a value for that variable rather than replacing one of the values in the compose file.

e.g. OR_HOSTNAME=10.0.0.0 docker-compose up -d
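If you'd rather not type the variable on every start, docker-compose also substitutes values from a .env file placed next to docker-compose.yml. A minimal sketch, using a placeholder IP:

```shell
# Create a .env file alongside docker-compose.yml;
# docker-compose reads it and substitutes the values into the compose file
echo "OR_HOSTNAME=10.0.0.0" > .env

# The variable no longer needs to be passed on the command line
docker-compose -p openremote up -d
```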

Hi Rich, thank you for the reply. I have heeded your advice and provided the IP address at the start of the command, and this is the outcome.

The login page was also back to normal.

However, after clicking sign in, a 503 error occurs.

I have tried removing the containers and starting again, but the results were still the same. Any further guidance would be very much appreciated. Thank you once again.

I assume the 503 error is being displayed when the URL ends with /manager; this suggests your manager container is unhealthy. Check the docker logs for this service.
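For example, assuming the default service names and the 'openremote' project name used above, something along these lines shows each container's health status and the manager's recent logs:

```shell
# List containers with their names and (health) status
docker ps --format "table {{.Names}}\t{{.Status}}"

# Tail the last 100 log lines of the manager service
docker-compose -p openremote logs --tail 100 manager
```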

Hi Rich,

Apologies for the late reply and thank you for your patience. The URL is https://localhost/auth/realms/master/login-actions/authenticate?session_code=d77lp18LxoudJbc65XUI5Nb0xwtfuow_BkY6CaLjjuE&execution=349e76db-9133-412e-93b1-7f2d7eea54bb&client_id=openremote&tab_id=_NPw0GzlBkU.

My containers all appear to be healthy:

Most of the manager container's log entries were INFO, except for a few warnings and severe errors:

2022-07-27 11:25:18.083 WARNING [main ] org.openremote.manager.map.MapService : Map tiles data file not found '/deployment.local/mapdata/mapdata.mbtiles', falling back to built in map

2022-07-27 11:25:18.108 WARNING [main ] ger.notification.PushNotificationHandler : OR_FIREBASE_CONFIG_FILE invalid path or file not readable: /deployment/manager/fcm.json

2022-07-27 11:25:40.838 WARNING [main ] ger.notification.PushNotificationHandler : FCM configuration invalid so cannot start

2022-07-27 23:44:23.304 SEVERE [nioEventLoopGroup-3-1 ] io.moquette.broker.NewNettyMQTTHandler : Unexpected exception while processing MQTT message. Closing Netty channel. CId=null

2022-07-27 23:44:31.020 SEVERE [nioEventLoopGroup-3-3 ] io.moquette.broker.NewNettyMQTTHandler : Unexpected exception while processing MQTT message. Closing Netty channel. CId=null

2022-07-28 04:04:29.374 WARNING [nioEventLoopGroup-3-4 ] io.moquette.broker.Authorizator : Client does not have read permissions on the topic username: null, messageId: 4386, topic: #

2022-07-28 14:23:02.032 WARNING [nioEventLoopGroup-3-2 ] io.moquette.broker.Authorizator : Client does not have read permissions on the topic username: null, messageId: 4386, topic: #

These were gathered using the command 'docker logs 7cd9bebe2983', where 7cd9bebe2983 is my container ID. I hope that this is useful information and I hope to hear from you soon. Thanks once again.

Have you changed the service names in the docker compose file? The proxy container's config file haproxy.cfg forwards /manager to manager:8080; if you change the service names then you need to change the proxy environment variables:

MANAGER_HOST
KEYCLOAK_HOST
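For example, if the manager service had been renamed to 'my-manager' in the compose file (a hypothetical name, just for illustration), the proxy would need to be pointed at it when starting the stack:

```shell
# Tell the proxy where to find the renamed manager service;
# KEYCLOAK_HOST keeps its default since that service name is unchanged
MANAGER_HOST=my-manager KEYCLOAK_HOST=keycloak OR_HOSTNAME=10.0.0.0 \
  docker-compose -p openremote up -d
```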

Hi Rich,

No, none of the service names in the docker compose file have been changed.

Following your earlier advice, I have changed the docker compose file back to its original state and used the command you suggested to start my containers. I am still getting the 503 error.

Another thing: every time I start up the containers, a deployment folder is created, but there is never anything inside it. I am not sure if this is relevant or useful, but I thought I should mention it too.

I have tried deleting all images and containers and starting again, including deleting this folder, but nothing has worked so far. I know that the deployment folder is supposed to contain the manager and map folders, but they are never created.
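If it helps, I can also check how the deployment volume is mounted into the manager container, e.g.:

```shell
# Show the mounts of the manager container
# (7cd9bebe2983 is my container ID from earlier)
docker inspect -f '{{ json .Mounts }}' 7cd9bebe2983
```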