# Discrete Derivative Revisited: how to get dt

In the last post, I calculated the discrete derivative of a counter variable's measurements, but I took a shortcut: instead of calculating the correct value of `dt`, I simply divided the difference of the values (that is, `dx`) by *300*. This obviously needs to be fixed.

So, the last version of our statement looked like this:

```sql
select energeh.check_id,
       energeh.measured_at,
       energeh.etoday,
       lag(energeh.etoday, 1) over (partition by date_trunc( 'day', measured_at )
                                    order by measured_at) as etoday_lag,
       (energeh.etoday - lag(energeh.etoday, 1)
           over (partition by date_trunc( 'day', measured_at )
                 order by measured_at)
       ) / 300 as etoday_diff,
       energeh.etotal,
       lag(energeh.etotal, 1) over (order by measured_at) as etotal_lag,
       (energeh.etotal - lag(energeh.etotal, 1) over (order by measured_at)) / 300 as etotal_diff
  from (
        select cm.check_id,
               cm.measured_at,
               avg(cm.value) filter (where sv.name = 'etoday') as etoday,
               avg(cm.value) filter (where sv.name = 'etotal') as etotal
          from monitoring_checkmeasurement cm
         inner join monitoring_sensorvariable sv on variable_id = sv.id
         where cm.check_id = 641
         group by cm.check_id, cm.measured_at
         order by cm.measured_at
       ) as energeh;
```

Getting the difference for the `measured_at` column works just like for all the other columns: use the `lag` function and subtract its result from the current `measured_at` value:

```sql
select ...,
       energeh.measured_at - lag(energeh.measured_at, 1)
           over (order by measured_at) as measured_at_diff,
       ...
```

The full query:

```sql
select energeh.check_id,
       energeh.measured_at,
       energeh.measured_at - lag(energeh.measured_at, 1)
           over (order by measured_at) as measured_at_diff,
       energeh.etoday,
       lag(energeh.etoday, 1) over (partition by date_trunc( 'day', measured_at )
                                    order by measured_at) as etoday_lag,
       (energeh.etoday - lag(energeh.etoday, 1)
           over (partition by date_trunc( 'day', measured_at )
                 order by measured_at)
       ) / 300 as etoday_diff,
       energeh.etotal,
       lag(energeh.etotal, 1) over (order by measured_at) as etotal_lag,
       (energeh.etotal - lag(energeh.etotal, 1) over (order by measured_at)) / 300 as etotal_diff
  from (
        select cm.check_id,
               cm.measured_at,
               avg(cm.value) filter (where sv.name = 'etoday') as etoday,
               avg(cm.value) filter (where sv.name = 'etotal') as etotal
          from monitoring_checkmeasurement cm
         inner join monitoring_sensorvariable sv on variable_id = sv.id
         where cm.check_id = 641
         group by cm.check_id, cm.measured_at
         order by cm.measured_at
       ) as energeh;
```

Which returns a result such as this (some columns snipped):

```
 check_id |      measured_at       | measured_at_diff
----------+------------------------+------------------
      641 | 2015-05-17 02:00:00+02 |
      641 | 2015-05-18 02:00:00+02 | 1 day
      641 | 2015-05-19 02:00:00+02 | 1 day
      641 | 2015-08-14 02:00:00+02 | 1 day
      641 | 2015-08-15 02:00:00+02 | 1 day
      641 | 2015-08-15 18:55:00+02 | 16:55:00
      641 | 2015-08-15 19:00:00+02 | 00:05:00
      641 | 2015-08-15 19:05:00+02 | 00:05:00
      641 | 2015-08-15 19:10:00+02 | 00:05:00
      641 | 2015-08-15 19:15:00+02 | 00:05:00
      641 | 2015-08-15 19:20:00+02 | 00:05:00
```
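Note what type these differences have: subtracting one timestamp from another yields an `interval`, not a number. This can be verified standalone, without touching the schema above:

```sql
-- Subtracting two timestamps produces an interval,
-- which PostgreSQL renders as e.g. '00:05:00' or '1 day'.
select timestamp '2015-08-15 19:00:00'
     - timestamp '2015-08-15 18:55:00';  -- 00:05:00
```

This `interval` type is exactly what will trip us up in a moment.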

This is a pretty good start, because it gives us the correct values. Unfortunately, if we now replace the hardcoded `/ 300` in the above statement with that same lag expression, we get an error:

```
ERROR:  operator does not exist: double precision / interval
LINE 7: ...runc( 'day', measured_at ) order by measured_at)) / (energeh...
                                                               ^
HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.
```

So, we need to convert the *measured_at_diff* column into something that can be divided correctly, for instance a float holding the number of seconds in that interval. Luckily, the `EXTRACT` function can do that for us:

```sql
select ...,
       EXTRACT( EPOCH FROM energeh.measured_at
                - lag(energeh.measured_at, 1) over (order by measured_at)
       ) as measured_at_diff,
       ...
```

Which yields:

```
 check_id |      measured_at       | measured_at_diff
----------+------------------------+------------------
      641 | 2015-05-17 02:00:00+02 |
      641 | 2015-05-20 02:00:00+02 |            86400
      641 | 2015-08-14 02:00:00+02 |            86400
      641 | 2015-08-15 02:00:00+02 |            86400
      641 | 2015-08-15 18:55:00+02 |            60900
      641 | 2015-08-15 19:00:00+02 |              300
      641 | 2015-08-15 21:10:00+02 |              300
      641 | 2015-08-15 21:15:00+02 |              300
      641 | 2015-08-16 02:00:00+02 |            17100
      641 | 2015-08-16 06:25:00+02 |            15900
      641 | 2015-08-16 06:30:00+02 |              300
      641 | 2015-08-18 19:25:02+02 |              300
      641 | 2015-08-18 19:30:03+02 |              301
      641 | 2015-08-18 19:35:02+02 |              299
      641 | 2015-08-18 19:40:02+02 |              300
```
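As a quick sanity check, `EXTRACT( EPOCH FROM ... )` can be tried on interval literals, independent of the schema above; it returns the interval's total length in seconds:

```sql
-- EXTRACT(EPOCH FROM interval) converts an interval into its
-- total number of seconds, as a plain numeric value.
select EXTRACT( EPOCH FROM interval '5 minutes' );  -- 300
select EXTRACT( EPOCH FROM interval '1 day' );      -- 86400
```

That matches the `86400` and `300` rows above: one day and five minutes, expressed in seconds.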

The irregularity of these values also explains why this is interesting at all. We can't just hope that the interval will always be 300 seconds, because sometimes it just won't be, and there's even a good reason for it: at night, the solar inverter doesn't deliver any measurements, because it goes to sleep when the sun is down.

Anyway, now that we have the interval represented as a number of seconds, we can finally use it in the other parts of our statement:

```sql
select energeh.check_id,
       energeh.measured_at,
       EXTRACT( EPOCH FROM (energeh.measured_at
                - lag(energeh.measured_at, 1) over (order by measured_at))
       ) as measured_at_diff,
       energeh.etoday,
       lag(energeh.etoday, 1) over (partition by date_trunc( 'day', measured_at )
                                    order by measured_at) as etoday_lag,
       (energeh.etoday - lag(energeh.etoday, 1)
           over (partition by date_trunc( 'day', measured_at )
                 order by measured_at)
       ) / EXTRACT( EPOCH FROM (energeh.measured_at
                - lag(energeh.measured_at, 1) over (order by measured_at))
       ) as etoday_diff,
       energeh.etotal,
       lag(energeh.etotal, 1) over (order by measured_at) as etotal_lag,
       (energeh.etotal - lag(energeh.etotal, 1) over (order by measured_at))
       / EXTRACT( EPOCH FROM (energeh.measured_at
                - lag(energeh.measured_at, 1) over (order by measured_at))
       ) as etotal_diff
  from (
        select cm.check_id,
               cm.measured_at,
               avg(cm.value) filter (where sv.name = 'etoday') as etoday,
               avg(cm.value) filter (where sv.name = 'etotal') as etotal
          from monitoring_checkmeasurement cm
         inner join monitoring_sensorvariable sv on variable_id = sv.id
         where cm.check_id = 641
         group by cm.check_id, cm.measured_at
         order by cm.measured_at
       ) as energeh;
```

The sheer length and copy-pasta of it all is starting to get to me, though...
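One way to tame the repetition, sketched here but not tested against the schema above, is to add one more derived-table layer: compute each `lag` and the epoch difference once, give them aliases, and do the divisions in the outer select:

```sql
-- Sketch only: the inner layer computes each window expression once,
-- so the outer select can reuse the aliases instead of repeating the
-- full window definitions for every division.
select lagged.*,
       (lagged.etoday - lagged.etoday_lag) / lagged.measured_at_diff as etoday_diff,
       (lagged.etotal - lagged.etotal_lag) / lagged.measured_at_diff as etotal_diff
  from (
        select energeh.*,
               EXTRACT( EPOCH FROM (energeh.measured_at
                        - lag(energeh.measured_at, 1) over (order by measured_at))
               ) as measured_at_diff,
               lag(energeh.etoday, 1)
                   over (partition by date_trunc( 'day', measured_at )
                         order by measured_at) as etoday_lag,
               lag(energeh.etotal, 1) over (order by measured_at) as etotal_lag
          from ( ... ) as energeh  -- the same inner aggregation query as before
       ) as lagged;
```

Each window definition then appears exactly once, at the cost of yet another level of nesting.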