Manager: "java.lang.RuntimeException: Credentials don't work so cannot continue"

This is in a Docker deployment. It was working yesterday, but today when I try to start it, the Manager cannot start; it says the credentials don’t work.

Are these the admin credentials? If so, it’s perhaps because I changed them earlier, although I’d like to think it recorded the new ones! (I originally thought this was in containers created yesterday, but actually it was only this morning, when it was all working fine!)

Is there anything I can do to fix this, or shall I discard my containers and start again?

(I vaguely recall mention of something like this in another thread, but that was in passing, so I thought it deserved its own thread.)

Kindly share a supporting screenshot and the error from the manager login page.

I didn’t even get as far as the manager login page because the manager wouldn’t start.

The error was in the manager log. Here’s a bit of context…

rohitestdeploy-manager-1     | 2023-04-05 09:32:10.845  INFO    [main                          ] curity.keycloak.KeycloakIdentityProvider : No stored credentials so using OR_ADMIN_PASSWORD
rohitestdeploy-manager-1     | 2023-04-05 09:32:10.848  INFO    [main                          ] curity.keycloak.KeycloakIdentityProvider : Keycloak proxy URI set to: http://keycloak:8080/auth
rohitestdeploy-manager-1     | 2023-04-05 09:32:10.848  INFO    [main                          ] curity.keycloak.KeycloakIdentityProvider : Validating keycloak credentials
rohitestdeploy-keycloak-1    | 2023-04-05 09:32:11,014 WARN  [] (executor-thread-0) type=LOGIN_ERROR, realmId=2af41d57-863e-4aef-abd5-1deca6dd8bfa, clientId=admin-cli, userId=5ba399fb-0459-452a-8147-269a82cb3cba, ipAddress=, error=invalid_user_credentials, auth_method=openid-connect, grant_type=password, client_auth_method=client-secret, username=admin, authSessionParentId=dddcc71a-86ab-4829-b068-f3b30712c68c, authSessionTabId=mhZhSxOAUic
rohitestdeploy-manager-1     | 2023-04-05 09:32:11.017  WARNING [main                          ] emote.container.web.OAuthFilter.PROTOCOL : OAuth server response error: 401
rohitestdeploy-manager-1     | 2023-04-05 09:32:11.051  INFO    [main                          ] curity.keycloak.KeycloakIdentityProvider : Credentials are invalid
rohitestdeploy-manager-1     | 2023-04-05 09:32:11.052  WARNING [main                          ] curity.keycloak.KeycloakIdentityProvider : Credentials don't work so cannot continue
rohitestdeploy-manager-1     | 2023-04-05 09:32:11.052  SEVERE  [main                          ] org.openremote.container.Container       : >>> Runtime container startup failed
rohitestdeploy-manager-1     | java.lang.RuntimeException: Credentials don't work so cannot continue
rohitestdeploy-manager-1     | 	at
rohitestdeploy-manager-1     | 	at
rohitestdeploy-manager-1     | 	at
rohitestdeploy-manager-1     | 	at org.openremote.container.Container.start(
rohitestdeploy-manager-1     | 	at org.openremote.container.Container.startBackground(
rohitestdeploy-manager-1     | 	at org.openremote.manager.Main.main(
dependency failed to start: container for service "manager" is unhealthy

Does that help at all?

Your manager is not healthy; kindly share your yml file.

The docker-compose.yml file, you mean?

I can do, if you need it, but as mentioned, the configuration was working earlier today so the file did work…

“dependency failed to start: container for service “manager” is unhealthy”
This is from your log; please see the error. That’s why I’m asking you to share your docker-compose.yml file.

I’m happy to share the file; I was merely pointing out that the file was working but apparently now isn’t!

Evidently this forum’s configuration needs to be changed as it won’t let me upload my YAML file. :roll_eyes:

I’ve renamed it to be a .txt file and uploaded it:
docker-compose.yml.txt (2.8 KB)

It’s based upon the one from Github, but I’ve changed a few things, like allowing image versions to be specified using SHA256 sums (since as far as I can tell, the only tag ever used on Docker Hub for OpenRemote images is latest, so you can’t use specific versions any other way).
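
For reference, the digest pinning looks something like this in the compose file (the digest below is a placeholder, not a real one):

```yaml
services:
  manager:
    # Pin by content digest rather than the mutable "latest" tag.
    # The digest here is a placeholder for illustration only.
    image: openremote/manager@sha256:0000000000000000000000000000000000000000000000000000000000000000
```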

version: '2.4'

postgresql:
  restart: always
  image: openremote/postgresql:${POSTGRESQL_VERSION:-latest}
  volumes:
    - postgresql-data:/var/lib/postgresql/data
    - manager-data:/storage

In the manager service volumes:

  - manager-data:/storage
  - ./deployment:/deployment

You need to change this option.

Thanks. Presumably I have to discard my existing containers and start from scratch, since the data that needs to be shared between postgresql and the manager currently isn’t being shared?

Hmm, I’ve made those changes and deployed a brand new instance with a different project name. I stopped it and started it again, and it too is failing in the same way.

Is the manager-data bit the only required change?

(UPDATE: I’ve repeated the exercise with a new project name, and the result is again the same. I created the containers, logged in, changed the admin password, waited a bit, stopped the containers, started the containers, and it failed again with the error that’s the subject of this thread.)

I’ve worked it out.

The problem was (as I feared) the fact that I’d changed the password.

I’ll skip over that it seems rather odd to use the user’s admin account for internal communication.

Apparently when I changed my password, it didn’t share and/or store the new password properly, so when it was restarted, it tried to use the old one, panicked, and exited. It must have stored the new password somewhere, though, since this problem wouldn’t exist if it had completely failed to set it!

So now whenever I start the containers, I first have to set the OR_ADMIN_PASSWORD environment variable. I am certain that this wasn’t previously the case, as I’ve been using the same admin password for weeks now.


Hi Richard,

Thanks for your report. Rich will be able to clarify this. He is off for a bit, but I will ask him to respond here when he is back.


Thanks @Don. After some further tests (I’m relatively new to docker compose), it turns out I only have to provide OR_ADMIN_PASSWORD when using docker compose up; if I simply restart the containers with docker compose start, I don’t need it, so it’s less painful than I feared. I am still curious about the shared use of usernames, though!
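
In case it helps anyone else, the difference looks roughly like this (project name and password are placeholders):

```shell
# Creating (or re-creating) the containers: the manager needs the
# current admin password so it can talk to Keycloak.
OR_ADMIN_PASSWORD='my-changed-password' docker compose -p myproject up -d

# A plain stop/start of the existing containers keeps their original
# environment, so the variable isn't needed again.
docker compose -p myproject stop
docker compose -p myproject start
```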

Hi @Don, any update on this?

Usernames are stored in keycloak (the identity provider). The admin user is the default super user created by keycloak when it first initialises with a clean DB; the OR_ADMIN_PASSWORD environment variable tells the manager what this password is so it can communicate with keycloak (this is how you can create users, realms and set roles in the manager UI).

When the manager starts it actually looks for credentials stored in a file defined by OR_KEYCLOAK_GRANT_FILE which defaults to /deployment/manager/keycloak.json and it will try and use these credentials to communicate with keycloak.

If that fails it will try communicating using the admin user and OR_ADMIN_PASSWORD; if this succeeds then a new set of credentials is generated with a username of manager-keycloak, and these are stored in the OR_KEYCLOAK_GRANT_FILE path for the next startup.

The idea behind OR_KEYCLOAK_GRANT_FILE is to allow you to change the admin user password in the UI without completely breaking the system (docker OR_ADMIN_PASSWORD environment variable would not be updated when you change the admin user password so there would be a mismatch).

So at startup, provided you haven’t deleted the postgres (DB) docker volume, the OR_KEYCLOAK_GRANT_FILE is still present, and the credentials within are still valid (i.e. the user hasn’t been deleted from Keycloak), the fact that OR_ADMIN_PASSWORD no longer matches the actual keycloak admin password should not matter.
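
To make that concrete, the relevant manager settings in a compose file would look something like this (the path is the default mentioned above; the password value is a placeholder):

```yaml
services:
  manager:
    environment:
      # Bootstrap password for the Keycloak admin user; only used
      # when no stored credentials file is found.
      OR_ADMIN_PASSWORD: 'placeholder-password'
      # Where the generated service credentials are persisted between
      # startups (this is the default value).
      OR_KEYCLOAK_GRANT_FILE: '/deployment/manager/keycloak.json'
```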

Are you using a deployment docker volume and mapping it to `/deployment` within the manager container? If so, this is likely the reason the `OR_KEYCLOAK_GRANT_FILE` gets removed, and then falling back to `OR_ADMIN_PASSWORD` fails.

I need to look at moving the OR_KEYCLOAK_GRANT_FILE into the new OR_STORAGE_DIR so the deployment can be replaced without losing the credentials.

Thanks @Rich. I am using a docker configuration where /deployment is mapped to a local directory. I think the problem was that the manager directory didn’t exist inside there! If it’s not there, it doesn’t create it and so can’t put the JSON file there. This time, I created it before starting the containers and the keycloak.json file magically appeared!
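
For anyone else hitting this, the fix on my side boiled down to (run from the directory containing docker-compose.yml):

```shell
# The manager won't create this directory itself, but it needs it to
# exist so it can write the generated keycloak.json credentials file.
mkdir -p ./deployment/manager
docker compose up -d
```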

Is the password encoded somehow in the JSON? It is printable text but is not the same as the admin password, yet it (the configuration) seems to work!

Seems we have got to the bottom of the problem then; most likely our code that stores the keycloak.json file expects the path to exist (it always does for our own deployments, which follow the custom project deployment image structure).

Will make a note to look at improving that or at least move it to the OR_STORAGE_DIR as already mentioned.

FYI: The password in keycloak.json is random and auto-generated; the credentials aren’t meant to be used by any user and have no correlation with OR_ADMIN_PASSWORD.

Ah OK, thanks. I must have misunderstood what the JSON file was for! I thought you meant it’s there to allow the credentials for the admin user to be available for inter-container communication. I’ll re-read what you wrote! :slight_smile:

EDIT: Ah yes, my bad, sorry! I see now the username manager-keycloak is in that JSON file!