Calendar and time zone

Learn how to use Cognite Data Fusion in a local time zone, with queries that span over calendar entities, like months.

Scope

Calendar and time zone support applies only to aggregate queries for time series data points, and affects only the intervals we aggregate over. The data points are ingested, stored, and returned in UTC time.

We support all time zones in the common IANA time zone database, including time zones with daylight saving time (DST), like Europe/Oslo or America/New_York. We also support time zones with half-hour offsets, like Asia/Kolkata (UTC+05:30), or 15-minute offsets, like Asia/Kathmandu (UTC+05:45). You can also enter a custom time zone, like UTC+02:30. If nothing is specified, we default to UTC.
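
To illustrate what these zones mean, here is a small Python sketch (Python 3.9+ with the standard-library `zoneinfo` module; this is purely illustrative and not part of the CDF API) showing the UTC offsets of a few of the zones mentioned above:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package on Windows

# IANA zones carry the full DST history; a fixed offset like UTC+02:30 does not.
kolkata = ZoneInfo("Asia/Kolkata")                # UTC+05:30, no DST
kathmandu = ZoneInfo("Asia/Kathmandu")            # UTC+05:45
fixed = timezone(timedelta(hours=2, minutes=30))  # a custom "UTC+02:30" zone

t = datetime(2022, 6, 1, tzinfo=timezone.utc)
print(t.astimezone(kolkata).utcoffset())    # 5:30:00
print(t.astimezone(kathmandu).utcoffset())  # 5:45:00
print(t.astimezone(fixed).utcoffset())      # 2:30:00
```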

Performance

Queries over time zones with non-whole-hour offsets may be slower than queries over time zones with whole-hour offsets. If queries start to fail, try a smaller time range or a lower limit.

Aggregate granularities

Calendars can be used with three granularities: hour, day, and month.

Month

Month is a new granularity, denoted by month or mo (not to be confused with m for minute). A month starts on the first day of the month, at 00:00 in the given time zone, and ends (exclusively) on the first day of the next month.

Month aggregates take DST transitions into account, which means that, for instance, the lengths of March and October may vary, depending on the time zone.

Just like other aggregates, you can prefix the granularity with a number, like 3month or 12mo. This way, you can retrieve aggregates for a quarter or a year. The offset is determined by the start time. For example, if you query for 3month starting at 2022-02-01 (in the given time zone), you will get the aggregates for February through April, May through July, August through October, and November through January.
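
The quarter boundaries described above can be reproduced locally. The following Python sketch is illustrative only (the `add_months` helper is our own, not part of any API); it computes the bucket boundaries a 3month query starting on 2022-02-01 would use:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def add_months(dt, n):
    """Advance an aware datetime by n calendar months (day and time unchanged)."""
    y, m = divmod(dt.year * 12 + dt.month - 1 + n, 12)
    return dt.replace(year=y, month=m + 1)

tz = ZoneInfo("Europe/Oslo")  # any supported zone works here
start = datetime(2022, 2, 1, tzinfo=tz)

# The four buckets implied by a "3month" query starting 2022-02-01:
for i in range(4):
    lo, hi = add_months(start, 3 * i), add_months(start, 3 * i + 3)
    print(lo.date(), "to", hi.date())  # the end of each bucket is exclusive
```

This prints February through April, May through July, August through October, and November through January, matching the offsets described above.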

Day

Day works the same as before, but you can now specify a time zone. The day starts at 00:00 in the given time zone and ends (exclusively) at 00:00 the next day.

Day aggregates take DST transitions into account; as a result, the length of a day can be more or less than 24 hours.
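
To see how DST changes the day length, here is a small Python sketch (illustrative only) measuring three days in Europe/Oslo in 2022, where the DST transitions fell on March 27 and October 30:

```python
from datetime import date, datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Oslo")

def day_length(d: date) -> timedelta:
    """Real elapsed time between local midnight on d and local midnight on d+1."""
    start = datetime(d.year, d.month, d.day, tzinfo=tz)
    nxt = d + timedelta(days=1)
    end = datetime(nxt.year, nxt.month, nxt.day, tzinfo=tz)
    # Convert to UTC before subtracting to get elapsed time, not wall-clock time.
    return end.astimezone(timezone.utc) - start.astimezone(timezone.utc)

print(day_length(date(2022, 3, 27)))   # 23:00:00 (spring forward)
print(day_length(date(2022, 10, 30)))  # 1 day, 1:00:00 (fall back)
print(day_length(date(2022, 6, 15)))   # 1 day, 0:00:00 (an ordinary day)
```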

Hour

Hour aggregates are, in general, the same as before. The only relevant change applies when the time zone has an offset that is not a whole hour, like Asia/Kolkata. In that case, we round the start time down to the nearest whole hour in the given time zone.

Hour aggregates do not take regular DST transitions into account. 24h is, in general, always 24 hours, even if the day is 23 or 25 hours long.
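
The rounding for a non-whole-hour zone can be sketched in Python. The `floor_to_local_hour` helper below is hypothetical, written here only to illustrate the behavior described above:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def floor_to_local_hour(ms: int, tz) -> int:
    """Round an epoch-milliseconds timestamp down to the previous whole hour in tz."""
    local = datetime.fromtimestamp(ms / 1000, tz)
    floored = local.replace(minute=0, second=0, microsecond=0)
    return int(floored.timestamp() * 1000)

kolkata = ZoneInfo("Asia/Kolkata")  # UTC+05:30

# 2020-01-01 00:10 UTC is 05:40 local time; the previous whole local hour
# is 05:00 local, which is 23:30 UTC on the previous day.
ms = int(datetime(2020, 1, 1, 0, 10, tzinfo=timezone.utc).timestamp() * 1000)
print(floor_to_local_hour(ms, kolkata))  # 1577835000000
```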

Synthetic time series

In synthetic time series, time zones and calendar queries are analogous to regular aggregate queries.

We support the exact same granularities and time zones.

The time zone is provided for the query as a whole, while the granularities are specified for each time series in the query. The alignment of each time series is rounded down to the nearest whole granularity unit in the given time zone. For instance, 7d will align to local midnight of the day of the alignment timestamp.

Alignment

The default alignment is epoch, or January 1st 1970 UTC. If you use a time zone with a negative offset, like America/Chicago (UTC-06:00 at the epoch), epoch corresponds to December 31st 1969 local time, which for month aggregates will round down to December 1st 1969.

If this is not the intended behavior, please specify the correct alignment in the expression.
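
The behavior of the default epoch alignment in a negative-offset zone can be checked with a short Python sketch (illustrative, not part of the API):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

chicago = ZoneInfo("America/Chicago")  # UTC-06:00 at the epoch (standard time)

epoch_local = datetime(1970, 1, 1, tzinfo=timezone.utc).astimezone(chicago)
print(epoch_local)  # 1969-12-31 18:00:00-06:00, i.e. still December 1969 locally

# A month aggregate aligned to epoch therefore rounds down to Dec 1st 1969:
month_start = epoch_local.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
print(month_start)  # 1969-12-01 00:00:00-06:00
```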

Special cases

Unsupported time ranges

We only support queries with offsets that are a multiple of 15 minutes, unlike UTC+05:21:10 (Asia/Kolkata until 1905). Such unaligned offsets were more common at the start of the 20th century and were abolished after 1980. Use a later start time, or use a different time zone, for instance the fixed offset UTC+05:30.

There are some special cases where 24h is not 24 hours, during transitions that are not a whole hour.

If there are two midnights on a given day (DST transition from 01:00 to 00:00), we will use the first midnight as the dividing point.

Example queries

Retrieve data points

    POST /api/v1/projects/{project}/timeseries/data/list
Content-Type: application/json

{
  "items": [
    {
      "limit": 100,
      "externalId": "your external id",
      "aggregates": ["count"],
      "granularity": "3mo",
      "timeZone": "Australia/Adelaide",
      "start": 1580500000000
    }
  ]
}

Response:
{
  "items": [
    {
      "id": 123,
      "externalId": "your external id",
      "isString": false,
      "isStep": false,
      "datapoints": [
        { "timestamp": 1580477400000, "count": 2 },
        { "timestamp": 1588257000000, "count": 1 },
        { "timestamp": 1604151000000, "count": 5 }
      ]
    }
  ]
}

The start time, 1580500000000, corresponds to Jan 31 2020 19:46:40 UTC. In Australia/Adelaide, however, the local time is Feb 01 2020 06:16:40, in other words, in February.

The timestamps of the returned data points, 1580477400000, 1588257000000, and 1604151000000, correspond to the local timestamps Feb 01 2020 00:00:00, May 01 2020 00:00:00, and Nov 01 2020 00:00:00, respectively. In this example, we assume the aggregate for August through October was empty and therefore omitted.
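
You can verify these conversions yourself. This is an illustrative Python check, not an API call:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

adelaide = ZoneInfo("Australia/Adelaide")
for ms in (1580477400000, 1588257000000, 1604151000000):
    local = datetime.fromtimestamp(ms / 1000, adelaide)
    print(ms, "->", local.isoformat())
# Each timestamp is local midnight on the 1st of Feb, May, and Nov 2020.
```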

Note that the UTC timestamps in the response vary between 13:30 and 14:30, depending on whether daylight saving time is in effect in Australia/Adelaide.

    POST /api/v1/projects/{project}/timeseries/data/list
Content-Type: application/json

{
  "items": [
    {
      "externalId": "your external id",
      "granularity": "1day"
    },
    {
      "externalId": "your external id",
      "granularity": "6h"
    }
  ],
  "start": 1582144200000,
  "aggregates": ["count"],
  "timeZone": "UTC+01:00",
  "limit": 1
}

Response:
{
  "items": [
    {
      "id": 123,
      "externalId": "your external id",
      "isString": false,
      "isStep": false,
      "datapoints": [
        { "timestamp": 1582066800000, "count": 2 }
      ]
    },
    {
      "id": 123,
      "externalId": "your external id",
      "isString": false,
      "isStep": false,
      "datapoints": [
        { "timestamp": 1582142400000, "count": 5 }
      ]
    }
  ]
}

In this example, we use a common start, aggregates, and timeZone for all queries.

The start time, 1582144200000, corresponds to Feb 19 2020 21:30:00 UTC+01:00. For the day aggregate, we round down to the start of the day; for the 6h aggregate, we round down to the nearest whole hour, 21:00 UTC+01:00.

Note that the count may be higher for the 6h aggregate, as it may include data points from the next day.
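
The rounded start times in the response can be reproduced with a short Python sketch (illustrative only):

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=1))  # the fixed UTC+01:00 zone from the query
start = datetime.fromtimestamp(1582144200000 / 1000, tz)
print(start.isoformat())  # 2020-02-19T21:30:00+01:00

day_start = start.replace(hour=0, minute=0, second=0, microsecond=0)
hour_start = start.replace(minute=0, second=0, microsecond=0)
print(int(day_start.timestamp() * 1000))   # 1582066800000, the 1day timestamp
print(int(hour_start.timestamp() * 1000))  # 1582142400000, the 6h timestamp
```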

Retrieve synthetic data points

    POST /api/v1/projects/{project}/timeseries/synthetic/query
Content-Type: application/json

{
  "items": [
    {
      "expression": "TS{id:123, aggregate:'stepinterpolation', granularity:'3mo', alignment:0} + TS{id:234}",
      "timeZone": "Europe/Oslo"
    }
  ]
}

Response:
{
  "items": [
    {
      "datapoints": [
        { "timestamp": 7772400000, "value": 15.0 },
        { "timestamp": 13478400000, "value": 16.0 },
        { "timestamp": 15634800000, "value": 21.0 },
        { "timestamp": 20003696000, "value": 22.0 }
      ],
      "isString": false
    }
  ]
}

In this synthetic example, some of the data point timestamps are aligned to the start of a month in Europe/Oslo, due to the 3mo granularity. The data points that correspond to timestamps in TS{id:234} are not aligned at all.
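
As an illustrative check (plain Python, not an API call), you can classify which of the returned timestamps are month-aligned in Europe/Oslo:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

oslo = ZoneInfo("Europe/Oslo")
for ms in (7772400000, 13478400000, 15634800000, 20003696000):
    local = datetime.fromtimestamp(ms / 1000, oslo)
    aligned = (local.day, local.hour, local.minute, local.second) == (1, 0, 0, 0)
    print(ms, local.isoformat(), "month-aligned" if aligned else "unaligned")
```

The first and third timestamps land on local midnight on the first of a month; the second and fourth do not.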