How to use the /asset/attributes API?

Hi all,
I’m interested in sending data to OR via your API. I’ve been able to do this successfully using /asset/{assetId}/attribute/{attributeName}, but I’d like to send data to more than one attribute at once. I don’t know if this is possible. I thought that maybe /asset/attributes could do it, but I can’t get it to work: it returns "failure": "INSUFFICIENT_ACCESS".
The documentation for the request body says:
[
  {
    "ref": {
      "id": "string",
      "name": "string"
    },
    "value": {}
  }
]

What are "id" and "name" exactly? And can someone provide an example of what should go in the "value" field?
Thanks in advance

Hi @pacogam!
For "id" you can use the asset ID, and "name" is the attribute name.
To fix the INSUFFICIENT_ACCESS error, make sure you also set the "Authorization" header in your request.

So a request like this can be used to update two attributes of the same asset:

curl -X 'PUT' \
  'http://localhost:8080/api/master/asset/attributes' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer ...' \
  -H 'Content-Type: application/json' \
  -d '[
  {
    "ref": {
      "id": "6P7JYfXWpy66Vchm64RKAx",
      "name": "notes"
    },
    "value": "note 1"
  },
  {
    "ref": {
      "id": "6P7JYfXWpy66Vchm64RKAx",
      "name": "test"
    },
    "value": "test 1"
  }
]'
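For reference, the same request can be assembled programmatically. Here is a minimal Python sketch using only the standard library; the asset ID, attribute names, and bearer token are just the placeholders from the curl example above:

```python
import json
import urllib.request

def build_payload(asset_id, updates):
    """Build the /asset/attributes body: one {ref, value} entry per attribute.

    updates: dict mapping attribute name -> new value.
    """
    return [{"ref": {"id": asset_id, "name": name}, "value": value}
            for name, value in updates.items()]

payload = build_payload("6P7JYfXWpy66Vchm64RKAx",
                        {"notes": "note 1", "test": "test 1"})
body = json.dumps(payload).encode()

# The actual PUT, commented out so the sketch runs offline:
# req = urllib.request.Request(
#     "http://localhost:8080/api/master/asset/attributes",
#     data=body, method="PUT",
#     headers={"Content-Type": "application/json",
#              "Accept": "application/json",
#              "Authorization": "Bearer ..."})
# urllib.request.urlopen(req)
```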

There is now also a PR to improve the documentation for these fields.

Thanks for your response. Unfortunately I’ve not been able to use the API appropriately yet. If you’ll allow me, I’ll give a concrete example of my problem:
The agent with ID “62qwBxrRNBOGPAttv0XQqF” contains a custom attribute (type number) called “energy”.
Externally, I calculate an energy forecast (or whatever), and I want to store that time series in the attribute mentioned above.
So I have as input a time series like this:

[
    {
        "x": 1727564400000,
        "y": 8656.573
    },
    {
        "x": 1727568000000,
        "y": 7500.424
    },
    {
        "x": 1727571600000,
        "y": 7294.939
    }
....
]

I made some attempts using the following endpoints:

  • /asset/:asset_id/attribute/:attribute_id (where :asset_id=62qwBxrRNBOGPAttv0XQqF and :attribute_id=energy)
  • /asset/attributes

But I always get the same result:

[
	{
		"ref": {
			"id": "62qwBxrRNBOGPAttv0XQqF",
			"name": "energy"
		},
		"failure": "INVALID_VALUE"
	}
]

Although with the second endpoint the request body is different, something like this:

[
  {
    "ref": {
      "id": "62qwBxrRNBOGPAttv0XQqF",
      "name": "energy"
    },
    "value": {
      "x": 1727683200000,
      "y": 8656.573
    }
  }
]

I’m wondering whether I’m formatting the payload incorrectly, or whether there is simply no endpoint that does what I need.
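My guess (just a guess, based on the attribute being of type number) is that INVALID_VALUE comes from sending the whole {"x", "y"} object as the value, when the endpoint expects a bare number. A Python sketch of what the payload would then have to look like; note the timestamp is simply dropped, since this endpoint has no field for it:

```python
def to_attribute_updates(asset_id, attribute_name, series):
    """Turn {"x": ts, "y": value} datapoints into /asset/attributes entries.

    Only the scalar y survives; the endpoint has no timestamp field, so the
    server stamps every write with "now" -- which is why it cannot backfill
    a forecast as history.
    """
    return [{"ref": {"id": asset_id, "name": attribute_name}, "value": p["y"]}
            for p in series]

series = [{"x": 1727564400000, "y": 8656.573},
          {"x": 1727568000000, "y": 7500.424}]
updates = to_attribute_updates("62qwBxrRNBOGPAttv0XQqF", "energy", series)
```

Sending several entries with the same ref in one request presumably just overwrites the value repeatedly, so this still only covers the “latest value” case, not a history.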

Well, I’m afraid that none of these endpoints can do what I need. I’ve figured out what the payload should look like for both, so I’m putting it here in case it helps anyone in the future (the payload is just an example).

/asset/:asset_id/attribute/:attribute_id
payload:
"on"

/asset/attributes
payload:

[
  {
    "ref": {
      "id": "3tun9ET0X2pz",
      "name": "estat"
    },
    "value": "connectat"
  },
  {
    "ref": {
      "id": "3tun9ET0X2pz",
      "name": "value"
    },
    "value": 25
  }
]

That is, I can only send one value at a time per attribute, and I can’t even include a timestamp.
It would be nice to have an endpoint like /asset/predicted/{assetId}/{attributeName} but for “real” datapoints.

Yes, it’s currently not possible to batch-import (historic/future) values into asset attributes.

Is there a specific reason why you cannot use /asset/predicted/{assetId}/{attributeName} and want to create a separate attribute for these forecasts?

We’ve been successfully using /asset/predicted/{assetId}/{attributeName} for almost 4 months. The problem is that the forecasts disappear from the database (table asset_predicted_datapoint) with each new day. I think there is no way to avoid that.
We need a history of both the real data and the predictions made at the time, in order to calculate errors and KPIs, build new models, etc., or even to compare real and forecasted values visually.
So we’re currently still using the /predicted endpoint to send predictions, and the idea was to store those same predictions in a persistent attribute.
I guess I can only work directly with PostgreSQL, being very careful with the timestamp format.

Storing predicted data in asset attributes isn’t a good idea, as performance will be affected.

We can look at making the predicted-data purging configurable so you could keep old predictions.

I guess that would solve your issue?