OR_HOSTNAME="fleet.cro…dfd…ct.eu" OR_EMAIL_ADMIN="webmaster@co…t.me" OR_ADMIN_PASSWORD="ch…somesceret9*" docker-compose -p fleet-management up -d
The machine is connected to the internet, so Let's Encrypt issues the certificates, which I copy from the proxy container as described in the docs (the only difference being that my fullchain contains 2 certificates instead of 3).
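For reference, this is the docker cp pattern for pulling them out of the proxy container; the container name assumes the -p fleet-management project name from above, and the in-container path is a placeholder you would take from the docs:

# Container name and in-container path are assumptions - check `docker ps` and the docs
docker cp fleet-management-proxy-1:<path-to-certs-from-docs>/fullchain.pem .
docker cp fleet-management-proxy-1:<path-to-certs-from-docs>/privkey.pem .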
These creds aren't working?
Username: admin
Password: secret
The docker logs are clean and not showing any issue. The whole setup process was a snap actually - my compliments for such a well-prepared launchpad - so it's certainly my own fault that I can't get any further here:
When logging in with the admin user into OpenRemote, you should use the OR_ADMIN_PASSWORD that you set above. So on the login screen, use username admin and password ch…somesceret9*. Also make sure that you don't use quotes around your environment variables if they don't contain any spaces (and the ones you mention don't), since the quotes could also be interfering with the values you're entering.
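For example, the same start command without quotes (values elided here exactly as in your post) would look like:

OR_HOSTNAME=fleet.cro…dfd…ct.eu OR_EMAIL_ADMIN=webmaster@co…t.me OR_ADMIN_PASSWORD=ch…somesceret9* docker-compose -p fleet-management up -d

Note that if the real password contains shell-special characters such as *, you may still need to escape them on the command line.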
Yep, the project indeed is remarkable, and given the little experience I have I think I got forward quite neatly, with the dockerized services running flawlessly. To me it seems that in some parts the docs are not quite systematic enough to e.g. enlighten me (and maybe it's just me) on the technical details needed to make the puzzle pieces really fit together…
E.g. …creating the certificate for uploading to the Teltonika device. While I think I do understand that you need the privkey, the cert and the CA (or the chain) to really make a cert trustworthy, I see that the frontend's WebGUI is NOT considered safe by the browser (Edge), and - to use the metaphor of the jigsaw puzzle again - I'm struggling with putting the pieces together so the Teltonika tracker likes it (for the tracker, suffix .pem is ALWAYS a CA, suffix .pem.crt is ALWAYS a cert and .pem.key is ALWAYS a private key). I tried to put the dual fullchain together with the cert to make it a really full chain, but I'm still getting this log:

manager_1 | WARNING [Thread-2 (ActiveMQ-serve..ost)] org.apache.activemq.artemis.core.server : AMQ222216: Security problem while authenticating: AMQ229031: Unable to validate user from 172.18.0.5:39716. Username: fleet.cropsterconnect.eu/864636066749652; SSL certificate subject DN: unavailable
The good news is that the username shown is the device's IMEI, so I'm reaching the proxy's 8883 port and being routed to the OpenRemote manager already.
E.g. systematically setting up the device as an asset is something I'd like to learn a bit more about and understand.
Maybe you can find the time to send me some links to look things up? Cheers,
There are a couple of issues with what you've described so far. From what I understand, you are including a client key and certificate in the security configuration of your Teltonika device. That's not something you should do, as we currently only use the CA certificate for SSL. Please make sure that you only use the reversed certificate chain (the quick-start may be helpful for you here). The issue comes up because OpenRemote thinks you're trying to authenticate as an MQTT user based on those certificates.
If you have reversed the certificate chain correctly (which I believe you did, since the connection request passed through the reverse proxy and reached the manager), then you only need to remove the client certificate and client key on the security tab of the Teltonika configurator. You should then be able to connect to OpenRemote properly.
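In case it helps, here is a rough sketch of reversing the order of the certificates in a Let's Encrypt fullchain.pem (file names are just examples; the exact format the device expects is described in the quick-start):

# GNU csplit: split fullchain.pem into one file per certificate (cert-00, cert-01, ...)
csplit -z -f cert- fullchain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
# Concatenate the pieces in reverse order into the CA file for the tracker
cat $(ls -r cert-*) > ca.pem

The resulting ca.pem would then be the only file the tracker needs on its security tab, with the client certificate and client key fields left empty.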
Hi @willi @panos, can you please help me set it up?
I am constantly getting a "manager container unhealthy" error when I do docker-compose up:
“dependency failed to start: container openremote-fleet-manager-1 is unhealthy”
This is my docker-compose file:
# OpenRemote v3
#
# Profile for deploying the custom stack; uses the deployment-data named volume
# to expose customisations to the manager and keycloak images. To run this profile you need to specify the following
# environment variables:
#
# OR_ADMIN_PASSWORD - Initial admin user password
# OR_HOSTNAME - FQDN hostname of where this instance will be exposed (localhost, IP address or public domain)
# DEPLOYMENT_VERSION - Tag to use for the deployment image (must match the tag used when building the deployment image)
#
# Please see openremote/profile/deploy.yml for configuration details for each service.
#
# To perform updates, build code and prepare Docker images:
#
#   ./gradlew clean installDist
#
# Then recreate the deployment image:
#
#   DEPLOYMENT_VERSION=$(git rev-parse --short HEAD)
#   MANAGER_VERSION=$(cd openremote; git rev-parse --short HEAD; cd ..)
#   docker build -t openremote/manager:$MANAGER_VERSION ./openremote/manager/build/install/manager/
#   docker build -t openremote/custom-deployment:$DEPLOYMENT_VERSION ./deployment/build/
#   docker-compose -p custom down
#   docker volume rm custom_deployment-data
#   # Run the following volume rm command only if you want a clean install (wipes all existing data)
#   docker volume rm custom_postgresql-data
#   OR_ADMIN_PASSWORD=secret OR_HOSTNAME=my.domain.com docker-compose -p custom up -d
#
# All data is kept in volumes. Create a backup of the volumes to preserve data.
#
version: '2.4'

volumes:
  proxy-data:
  deployment-data:
  postgresql-data:
  manager-data:

services:

  # Populate deployment-data on startup (only on an empty volume)
  deployment:
    image: pankalog/fleet-deployment:${DEPLOYMENT_VERSION:-latest}
    volumes:
      - deployment-data:/deployment
    networks:
      - openremote_net

  proxy:
    image: openremote/proxy:${PROXY_VERSION:-latest}
    restart: always
    depends_on:
      manager:
        condition: service_healthy
    # Proxy ports exposed on the host; prefix with 127.0.0.1: (e.g. "127.0.0.1:8080:80")
    # if a host nginx should terminate TLS and you want to reduce direct public exposure.
    ports:
      - "8080:80"
      - "8443:443"
      - "8883:8883"
    volumes:
      - proxy-data:/deployment
      - deployment-data:/data
    environment:
      LE_EMAIL: ${OR_EMAIL_ADMIN}
      DOMAINNAME: ${OR_HOSTNAME?OR_HOSTNAME must be set}
      DOMAINNAMES: ${OR_ADDITIONAL_HOSTNAMES:-}
      # USE A CUSTOM PROXY CONFIG - COPY FROM https://github.com/openremote/proxy/blob/main/haproxy.cfg
      #HAPROXY_CONFIG: '/data/proxy/haproxy.cfg'
    networks:
      - openremote_net

  postgresql:
    image: openremote/postgresql:${POSTGRESQL_VERSION:-latest}
    restart: always
    volumes:
      - postgresql-data:/var/lib/postgresql/data
      - manager-data:/storage
    networks:
      - openremote_net

  keycloak:
    image: openremote/keycloak:${KEYCLOAK_VERSION:-latest}
    restart: always
    depends_on:
      postgresql:
        condition: service_healthy
    volumes:
      - deployment-data:/deployment
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: ${OR_ADMIN_PASSWORD:?OR_ADMIN_PASSWORD must be set}
      KC_HOSTNAME: ${OR_HOSTNAME:-localhost}
      #KC_HOSTNAME_PORT: ${OR_SSL_PORT:--1}
    networks:
      - openremote_net

  manager:
    image: openremote/manager:${MANAGER_VERSION:-latest}
    restart: always
    depends_on:
      keycloak:
        condition: service_healthy
    volumes:
      - manager-data:/storage
      - deployment-data:/deployment
      # Map data should be accessed from a volume mount, e.g.:
      # 1) Host filesystem - /deployment.local:/deployment.local
      # 2) NFS/EFS network mount - efs-data:/efs
    environment:
      # Here are some typical environment variables you may want to set;
      # see openremote/profile/deploy.yml for details
      OR_ADMIN_PASSWORD: ${OR_ADMIN_PASSWORD:?OR_ADMIN_PASSWORD must be set}
      OR_SETUP_TYPE: # Typical values to support are staging and production
      OR_SETUP_RUN_ON_RESTART:
      OR_EMAIL_HOST:
      OR_EMAIL_USER:
      OR_EMAIL_PASSWORD:
      OR_EMAIL_X_HEADERS:
      OR_EMAIL_FROM:
      OR_EMAIL_ADMIN:
      OR_HOSTNAME: ${OR_HOSTNAME?OR_HOSTNAME must be set}
      OR_ADDITIONAL_HOSTNAMES: ${OR_ADDITIONAL_HOSTNAMES:-}
      #OR_SSL_PORT: ${OR_SSL_PORT:--1}
      OR_DEV_MODE: ${OR_DEV_MODE:-false}
      OR_MAP_TILES_PATH: '/efs/europe.mbtiles'
    networks:
      - openremote_net

# Example: NFS volume commented out (leave as-is if you later enable it)
# efs-data:
#   driver: local
#   driver_opts:
#     type: nfs
#     o: "addr=${EFS_DNS?DNS must be set to mount NFS volume},rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
#     device: ":/"

# Add any logging driver overrides here if you need centralized logging
# x-logging: &awslogs
#   logging:
#     driver: awslogs
#     options:
#       awslogs-region: ${AWS_REGION:-eu-west-1}
#       awslogs-group: ${OR_HOSTNAME}
#       awslogs-create-group: 'true'
#       tag: "{{.Name}}/{{.ID}}"

networks:
  # Reuse an external network so other stacks (e.g. smart-city, custom dashboards) can join
  openremote_net:
    external: true
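One thing worth double-checking with this file: openremote_net is declared as external, so Compose will not create it automatically; it has to exist before the stack is started, e.g.:

docker network create openremote_net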
I'll need to see the Docker container logs to understand what the issue is, as I told you in the GitHub issue you created. I also see you're using a custom external network for connectivity, and I'm not sure whether that could interfere with the inter-container communication of OpenRemote's services.
As I told you yesterday, I can only help if you provide the Docker container logs, so I hope you can send them over and we can get your issue fixed!
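Something like the following should capture what's needed (the container and project names are taken from the error message you quoted; adjust them to whatever docker ps -a shows):

docker ps -a                               # confirm names and states of all containers
docker logs openremote-fleet-manager-1     # the manager container from your error message
docker-compose -p openremote-fleet logs manager keycloak postgresql proxy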
Great support, @panos - sorry for the delay in getting back to you. Yes, server-side-only TLS with the CA cert installed on the device works. I was just irritated by the fullchain. You don't even need a CA-signed cert from Let's Encrypt; a self-signed one (created with openssl) will do the trick, of course then … Again, a great thanks! Keep on doing such a great job with such a complex beast of an app that ties in all sorts of devices!
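For completeness, the kind of self-signed certificate meant here can be created with openssl roughly like this (hostname, file names and validity period are just placeholders):

# Self-signed key and certificate in one step; CN should match the hostname the proxy serves
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout selfsigned.key -out selfsigned.crt \
  -subj "/CN=fleet.example.com"

The resulting .crt is what goes onto the tracker as the CA file, provided the proxy is actually serving that same certificate.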