
I have a table with data on user applications. It contains several fields: employee_id text, date date, app_info jsonb.

+--------------+------------+----------+
| employee_id  | date       | app_info |
+--------------+------------+----------+
| 2223eb0f0d0x | 01/07/2025 | jsonb    |
| 2223eb0f0d0x | 01/08/2025 | jsonb    |
| 2223eb0f0d0x | 31/07/2025 | jsonb    |
| 2223eb0f0d0x | 31/08/2025 | jsonb    |
+--------------+------------+----------+
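
A minimal DDL sketch of the table (my_temp is the table name used in my fiddle and in the answers below; the column types are as listed above):

create table my_temp (
 employee_id text,
 date date,
 app_info jsonb
);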

The app_info field contains JSON like the following:

[
 {
 "apps": [
 {
 "grade": "UNPRODUCTIVE",
 "app_name": "Google chrome",
 "type_name": "Native",
 "domain_site": "http://vk.com",
 "count_seconds": 57.731
 }
 ],
 "granula_end": "17:55:00",
 "granula_start": "17:50:00"
 },
 {
 "apps": [
 {
 "grade": "UNPRODUCTIVE",
 "app_name": "Google chrome",
 "type_name": "Native",
 "domain_site": "http://vk.com",
 "count_seconds": 217.879
 },
 {
 "grade": "PRODUCTIVE",
 "app_name": "Windows Explorer",
 "type_name": "Native",
 "domain_site": "",
 "count_seconds": 3.174
 }
 ],
 "granula_end": "19:25:00",
 "granula_start": "19:20:00"
 },
 {
 "apps": [
 {
 "grade": "NEUTRAL",
 "app_name": "Time Doctor 2",
 "type_name": "Native",
 "domain_site": "",
 "count_seconds": 118.299
 }
 ],
 "granula_end": "19:30:00",
 "granula_start": "19:25:00"
 },
 {
 "apps": [
 {
 "grade": "NEUTRAL",
 "app_name": "Time Doctor 2",
 "type_name": "Native",
 "domain_site": "",
 "count_seconds": 29.992
 },
 {
 "grade": "PRODUCTIVE",
 "app_name": "Windows Explorer",
 "type_name": "Native",
 "domain_site": "",
 "count_seconds": 3.002
 }
 ],
 "granula_end": "19:55:00",
 "granula_start": "19:50:00"
 }
]

I have the following task:

For a given date range, calculate the number of minutes spent in each application, bucketed into 15-minute intervals. That is, I need to combine the existing 5-minute granules into 15-minute granules and, for each 15-minute granule, calculate the total time per application (for example, the 5-minute granules starting at 17:45, 17:50 and 17:55 should all fall into the 17:45-18:00 granule). The list of applications inside each granule should be grouped so that there are no duplicates.

I tried to do this, but I can’t figure out how to write the query correctly.

I also need a second query that finds the largest granule by total application runtime, i.e. the granule in which the greatest amount of time was spent across applications. I don't understand how to write this query at all...

Here is a link to my work - https://dbfiddle.uk/q88VfV3l

P.S. I looked through different questions on the forum, but none of them helped me solve the problem, so I posted my question.

P.S. I used your answers and am trying to create a new query based on the table provided in the question.

For each 5-minute granule (these granules are already in the table), I need to calculate the total activity time across all applications in that granule.

Within each granule, find the application with the largest number of seconds and display its name and productivity rating. Then sort all applications in the granule by time and by productivity rating.

Merge identical granules. Granules whose most-used application is the same are considered identical. After merging, the result is an extended granule in which the seconds are summed per application. For example, given these granules:

[
 {
 "apps": [
 {
 "grade": "UNPRODUCTIVE",
 "app_name": "Google chrome",
 "type_name": "Native",
 "domain_site": "http://vk.com",
 "count_seconds": 57.731
 }
 ],
 "granula_end": "16:55:00",
 "granula_start": "16:50:00"
 },
 {
 "apps": [
 {
 "grade": "UNPRODUCTIVE",
 "app_name": "Google chrome",
 "type_name": "Native",
 "domain_site": "http://vk.com",
 "count_seconds": 217.879
 },
 {
 "grade": "PRODUCTIVE",
 "app_name": "Windows Explorer",
 "type_name": "Native",
 "domain_site": "",
 "count_seconds": 3.174
 }
 ],
 "granula_end": "16:55:00",
 "granula_start": "17:00:00"
 }]

You can see that these two granules have the same most used application. And the granules are adjacent to each other in time! These are the granules that need to be merged into one:

{
 "apps": [
 {
 "grade": "UNPRODUCTIVE",
 "app_name": "Google chrome",
 "type_name": "Native",
 "domain_site": "http://vk.com",
 "count_seconds": 275.61
 },
 {
 "grade": "PRODUCTIVE",
 "app_name": "Windows Explorer",
 "type_name": "Native",
 "domain_site": "",
 "count_seconds": 3.174
 }
 ],
 "granula_end": "16:50:00",
 "granula_start": "17:00:00"
 }

Only adjacent granules need to be merged. The granula_start and granula_end fields are then adjusted so that the merged granule spans the full 10 minutes. This merging rule applies to all granules, both earlier and later in time. I understand that I will probably need recursion here, but I can't write such a query.

I also couldn't work out how to select, rather than just the maximum number of seconds in the apps array, the whole object holding that maximum together with the application name and productivity rating.

There is also a rule for sorting by productivity rating: unproductive, productive, no rating, neutral, inaction. All applications in each granule must be sorted in this order, but first by time and then by rating. As a result, I should get output like this:

{
 "userInfo": {
 "employeeId": "2223eb0f0d0c4941a16e83dc7274771b"
 },
 "granules": [
 {
 "dateTimeStart": "2025-07-08T12:30",
 "dateTimeEnd": "2025-07-08T12:35",
 "highestTypeActivity": "PRODUCTIVE",
 "activeTime": 210,
 "apps": [
 {
 "name": "telegram.exe",
 "activeTime": 140,
 "grade": "PRODUCTIVE"
 },
 {
 "name": "notepad.exe",
 "activeTime": 70,
 "grade": "PRODUCTIVE"
 }
 ]
 },
 {
 "dateTimeStart": "2025-07-08T12:35",
 "dateTimeEnd": "2025-07-08T12:40",
 "activeTime": 210,
 "apps": [
 {
 "name": "google chrome",
 "details": "http://vk.com",
 "activeTime": 140,
 "grade": "UNPRODUCTIVE"
 },
 {
 "name": "google chrome",
 "details": "http://mail.ru",
 "activeTime": 70,
 "grade": "PRODUCTIVE"
 }
 ]
 }
 ]
}

I have updated the table at https://dbfiddle.uk/K5MnXSjx

  • Show the desired query result. In the question, it is unclear whether it is for "all applications" or for "each application". Commented Sep 2, 2025 at 18:25
  • (I initially incorrectly tagged your question as a duplicate of postgresql complex grouping within json_build_object + sum while it is in fact a followup of it: your starting dataset seems to be the results of the other question) Commented Sep 2, 2025 at 20:12

2 Answers

  1. The result is based on a series of 15-minute granules covering the interval given by the parameters:
generate_series('2025-09-02 14:00:00'::timestamp,'2025-09-02 18:00:00'::timestamp,'15 minute'::interval) grn15
  2. Expand the source JSONB:
    2.1 Extract the "apps" array from "app_info", using a LATERAL join so that the app_info column of the main table is available.
    2.2 Extract granula_start and granula_end of the 5-minute intervals.
cross join lateral jsonb_array_elements(app_info) a
cross join lateral jsonb_array_elements((a->>'apps')::jsonb) b
  3. The exact granula_start is date + granula_start:
 (date||' '||(a->>'granula_start'))::timestamp granula_start
  4. Join these rows to the generate_series output on granula_start.
  5. Aggregate the data.

See example

select grn15
 ,count(app_name) app_count
 ,sum(count_seconds::float) seconds_count
from(
select *
from generate_series('2025-09-02 14:00:00'::timestamp,'2025-09-02 18:00:00'::timestamp,'15 minute'::interval) grn15
left join(
select employee_id meid, date -- ,app_info
 ,(date||' '||(a->>'granula_start'))::timestamp granula_start
 ,(date||' '||(a->>'granula_end'))::timestamp granula_end
 ,b.value->>'date' app_date
 ,b.value->>'app_name' app_name
 ,b.value->>'type_name' type_name
 ,b.value->>'domain_site' domain_site
 ,b.value->>'employee_id' app_employee_id
 ,b.value->>'count_seconds' count_seconds
 ,b.value app_data
from my_temp
cross join lateral jsonb_array_elements(app_info) a
cross join lateral jsonb_array_elements((a->>'apps')::jsonb) b
)q1 on q1.granula_start::timestamp>=grn15 and q1.granula_end::timestamp< (grn15+'15 minute'::interval)
)q2
group by grn15
order by grn15 -- ,granula_start
grn15                app_count  seconds_count
2025-09-02 14:00:00  0          null
2025-09-02 14:15:00  0          null
2025-09-02 14:30:00  0          null
2025-09-02 14:45:00  4          1371.9560000000001
2025-09-02 15:00:00  0          null
2025-09-02 15:15:00  0          null
2025-09-02 15:30:00  0          null
2025-09-02 15:45:00  0          null
2025-09-02 16:00:00  0          null
2025-09-02 16:15:00  0          null
2025-09-02 16:30:00  0          null
2025-09-02 16:45:00  0          null
2025-09-02 17:00:00  0          null
2025-09-02 17:15:00  0          null
2025-09-02 17:30:00  2          111.471
2025-09-02 17:45:00  2          13.51
2025-09-02 18:00:00  0          null

Expand the JSONB column to a table:

select grn15, meid, date, granula_start, granula_end, count_seconds, app_date, app_name
from generate_series('2025-09-02 14:00:00'::timestamp,'2025-09-02 18:00:00'::timestamp,'15 minute'::interval) grn15
left join(
select employee_id meid, date -- ,app_info
 ,(date||' '||(a->>'granula_start'))::timestamp granula_start
 ,(date||' '||(a->>'granula_end'))::timestamp granula_end
 ,b.value->>'date' app_date
 ,b.value->>'app_name' app_name
 ,b.value->>'type_name' type_name
 ,b.value->>'domain_site' domain_site
 ,b.value->>'employee_id' app_employee_id
 ,b.value->>'count_seconds' count_seconds
 ,b.value app_data
from my_temp
cross join lateral jsonb_array_elements(app_info) a
cross join lateral jsonb_array_elements((a->>'apps')::jsonb) b
)q1 on q1.granula_start::timestamp>=grn15 and q1.granula_end::timestamp< (grn15+'15 minute'::interval)
order by grn15,granula_start
grn15                meid  date        granula_start        granula_end          count_seconds  app_date    app_name
2025-09-02 14:00:00  null  null        null                 null                 null           null        null
2025-09-02 14:15:00  null  null        null                 null                 null           null        null
2025-09-02 14:30:00  null  null        null                 null                 null           null        null
2025-09-02 14:45:00  1     2025-09-02  2025-09-02 14:45:00  2025-09-02 14:50:00  155.334        2025-07-31  Postman
2025-09-02 14:45:00  1     2025-09-02  2025-09-02 14:50:00  2025-09-02 14:55:00  282.565        2025-07-31  CLion
2025-09-02 14:45:00  1     2025-09-02  2025-09-02 14:50:00  2025-09-02 14:55:00  459.091        2025-07-31  WinSCP: SFTP, FTP, WebDAV, S3 and SCP client
2025-09-02 14:45:00  1     2025-09-02  2025-09-02 14:50:00  2025-09-02 14:55:00  474.966        2025-07-31  Explorer
2025-09-02 15:00:00  null  null        null                 null                 null           null        null
2025-09-02 15:15:00  null  null        null                 null                 null           null        null
2025-09-02 15:30:00  null  null        null                 null                 null           null        null
2025-09-02 15:45:00  null  null        null                 null                 null           null        null
2025-09-02 16:00:00  null  null        null                 null                 null           null        null
2025-09-02 16:15:00  null  null        null                 null                 null           null        null
2025-09-02 16:30:00  null  null        null                 null                 null           null        null
2025-09-02 16:45:00  null  null        null                 null                 null           null        null
2025-09-02 17:00:00  null  null        null                 null                 null           null        null
2025-09-02 17:15:00  null  null        null                 null                 null           null        null
2025-09-02 17:30:00  1     2025-09-02  2025-09-02 17:35:00  2025-09-02 17:40:00  66.195         2025-07-31  CLion
2025-09-02 17:30:00  1     2025-09-02  2025-09-02 17:35:00  2025-09-02 17:40:00  45.276         2025-07-31  Postman
2025-09-02 17:45:00  1     2025-09-02  2025-09-02 17:50:00  2025-09-02 17:55:00  3.86           2025-07-31  CLion
2025-09-02 17:45:00  1     2025-09-02  2025-09-02 17:50:00  2025-09-02 17:55:00  9.65           2025-07-31  Postman
2025-09-02 18:00:00  null  null        null                 null                 null           null        null
select employee_id meid, date -- ,app_info
 ,(date||' '||(a->>'granula_start'))::timestamp granula_start
 ,(date||' '||(a->>'granula_end'))::timestamp granula_end
-- ,b.value->>'date' app_date
 ,b.value->>'app_name' app_name
-- ,b.value->>'type_name' type_name
-- ,b.value->>'domain_site' domain_site
-- ,b.value->>'employee_id' app_employee_id
 ,b.value->>'count_seconds' count_seconds
 ,jsonb_pretty(b.value) app_data
from my_temp
cross join lateral jsonb_array_elements(app_info) a
cross join lateral jsonb_array_elements((a->>'apps')::jsonb) b
order by meid,date, granula_start
meid date granula_start granula_end app_name count_seconds app_data
1 2025-09-02 2025-09-02 14:45:00 2025-09-02 14:50:00 Postman 155.334 {
"date": "2025-07-31",
"app_name": "Postman",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 155.334
}
1 2025-09-02 2025-09-02 14:50:00 2025-09-02 14:55:00 CLion 282.565 {
"date": "2025-07-31",
"app_name": "CLion",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 282.565
}
1 2025-09-02 2025-09-02 14:50:00 2025-09-02 14:55:00 WinSCP: SFTP, FTP, WebDAV, S3 and SCP client 459.091 {
"date": "2025-07-31",
"app_name": "WinSCP: SFTP, FTP, WebDAV, S3 and SCP client",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 459.091
}
1 2025-09-02 2025-09-02 14:50:00 2025-09-02 14:55:00 Explorer 474.966 {
"date": "2025-07-31",
"app_name": "Explorer",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 474.966
}
1 2025-09-02 2025-09-02 16:10:00 2025-09-02 16:15:00 CLion 243.496 {
"date": "2025-07-31",
"app_name": "CLion",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 243.496
}
1 2025-09-02 2025-09-02 16:10:00 2025-09-02 16:15:00 Postman 319.902 {
"date": "2025-07-31",
"app_name": "Postman",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 319.902
}
1 2025-09-02 2025-09-02 16:55:00 2025-09-02 17:00:00 CLion 192.897 {
"date": "2025-07-31",
"app_name": "CLion",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 192.897
}
1 2025-09-02 2025-09-02 16:55:00 2025-09-02 17:00:00 Postman 111.307 {
"date": "2025-07-31",
"app_name": "Postman",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 111.307
}
1 2025-09-02 2025-09-02 16:55:00 2025-09-02 17:00:00 Explorer 144.277 {
"date": "2025-07-31",
"app_name": "Explorer",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 144.277
}
1 2025-09-02 2025-09-02 17:35:00 2025-09-02 17:40:00 CLion 66.195 {
"date": "2025-07-31",
"app_name": "CLion",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 66.195
}
1 2025-09-02 2025-09-02 17:35:00 2025-09-02 17:40:00 Postman 45.276 {
"date": "2025-07-31",
"app_name": "Postman",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 45.276
}
1 2025-09-02 2025-09-02 17:50:00 2025-09-02 17:55:00 CLion 3.86 {
"date": "2025-07-31",
"app_name": "CLion",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 3.86
}
1 2025-09-02 2025-09-02 17:50:00 2025-09-02 17:55:00 Postman 9.65 {
"date": "2025-07-31",
"app_name": "Postman",
"type_name": "Native",
"domain_site": "",
"employee_id": "2223eb0f0d0c4941a16e83dc7274771b",
"count_seconds": 9.65
}

fiddle

answered Sep 2, 2025 at 19:13

1 Comment

Wondering why you had null resulting app_count for 16:00 - 16:15, I found that you have to change granula_end to granula_start in q1.granula_start::timestamp>=grn15 and q1.granula_end::timestamp< (grn15+'15 minute'::interval) (or replace < by <=); currently, the granule 16:10 - 16:15 gets discarded because granula_end< (grn15+'15 minute') translates to 16:15 < 16:15, which is false, while 16:10 < 16:15 or 16:15 <= 16:15 would work. Fixing 2 occurrences in your fiddle gives this one.
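
In other words, the corrected join condition (in both places where it appears) would read roughly:

on q1.granula_start::timestamp >= grn15
 and q1.granula_start::timestamp < (grn15 + '15 minute'::interval)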

I understand that you want to:

  1. unpack the 5-minute jsonb created by this previous question,
  2. regroup it into 15-minute slices (accumulating count_seconds for applications that appear more than once across the three 5-minute granules of a 15-minute "big granule"), and
  3. then determine the most active quarter of an hour (for a given user and day?).

As all the facilities for 2. exist in SQL (group by, sum, date_bin), we will handle 1. by first unwrapping our jsonb columns into full SQL records (as part of a CTE), with all columns present for grouping and summing.
Finally, 3. is simply done with the row_number() window function, which gives each granule's position relative to the other granules of the same day and user, ordered by decreasing count_seconds (thus all granules with row_number() 1 are the most active for their user and day, having the biggest count_seconds).

with
 -- Start by renormalizing each jsonb back to a "granule" CTE:
 granule as
 (
 select
 employee_id, date,
 date_bin('15 minutes', date + (granule->>'granula_start')::time, date) biggranula, -- Here prepare our grouping column: granula_start truncated to the preceding 15 mn alignment.
 granule->'granula_start', granule->'granula_end',
 app->'app_name' app_name, app->'type_name' type_name, app->'domain_site' domain_site,
 (app->'count_seconds')::decimal count_seconds
 from my_temp t
 cross join lateral jsonb_array_elements(app_info) granule -- pop each t.app_info (a jsonb array) into its own row
 cross join lateral jsonb_array_elements(granule->'apps') app -- pop each granule->'apps' (itself a jsonb array) into its own row
 where jsonb_typeof(granule->'apps') = 'array' -- /!\ This filter avoids the just above jsonb_array_elements(granule->'apps') to crash on nulls with "cannot extract elements from a scalar". However it looks fragile, because the optimizer could choose to jsonb_array_elements() _before_ this json_typeof() filter.
 ),
 -- Now, inside each granule, group by app ("Group the list of applications itself so that there are no duplicates"):
 biggranule as
 (
 select
 employee_id, date, biggranula, app_name, type_name, domain_site,
 sum(count_seconds) count_seconds
 from granule
 group by 1,2,3,4,5,6
 ),
--select * from biggranule;
 -- Now we can wrap back our grouped biggranules to JSONB:
 biggranules as
 (
 select
 employee_id, date, biggranula,
 sum(count_seconds) count_seconds,
 json_agg((select r from (select app_name, type_name, domain_site, count_seconds) r) order by count_seconds desc)
 from biggranule
 group by 1, 2, 3
 )
-- Display;
-- and add a most_used_rank to answer to "a second query that finds the largest granule by application runtime":
select
 *,
 row_number() over (partition by employee_id, date order by count_seconds desc) most_used_rank
from biggranules
order by employee_id, date, biggranula;
employee_id date biggranula count_seconds json_agg most_used_rank
1 2025-08-01 2025-08-01 20:15:00 75.416000 [{"app_name":"Windows Explorer","type_name":"Native","domain_site":"","count_seconds":57.337000},
{"app_name":"Microsoft OneDrive","type_name":"Native","domain_site":"","count_seconds":18.079000}]
1
2223eb0f0d0c4941a16e83dc7274771b 2025-07-31 2025-07-31 14:45:00 1371.956000 [{"app_name":"Проводник","type_name":"Native","domain_site":"","count_seconds":474.966000},
{"app_name":"WinSCP: SFTP, FTP, WebDAV, S3 and SCP client","type_name":"Native","domain_site":"","count_seconds":459.091000},
{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":282.565000},
{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":155.334000}]
1
2223eb0f0d0c4941a16e83dc7274771b 2025-07-31 2025-07-31 16:00:00 563.398000 [{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":319.902000},
{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":243.496000}]
2
2223eb0f0d0c4941a16e83dc7274771b 2025-07-31 2025-07-31 16:45:00 448.481000 [{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":192.897000},
{"app_name":"Проводник","type_name":"Native","domain_site":"","count_seconds":144.277000},
{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":111.307000}]
3
2223eb0f0d0c4941a16e83dc7274771b 2025-07-31 2025-07-31 17:30:00 111.471000 [{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":66.195000},
{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":45.276000}]
4
2223eb0f0d0c4941a16e83dc7274771b 2025-07-31 2025-07-31 17:45:00 13.510000 [{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":9.650000},
{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":3.860000}]
5
2223eb0f0d0c4941a16e83dc7274771b 2025-08-01 2025-08-01 14:00:00 228.741000 [{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":216.263000},
{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":12.478000}]
1
2223eb0f0d0c4941a16e83dc7274771b 2025-08-06 2025-08-06 12:15:00 234.203000 [{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":135.358000},
{"app_name":"DataGrip","type_name":"Native","domain_site":"","count_seconds":98.845000}]
1
2223eb0f0d0c4941a16e83dc7274771b 2025-08-07 2025-08-07 10:30:00 430.818000 [{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":223.109000},
{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":205.182000},
{"app_name":"File Picker UI Host","type_name":"Native","domain_site":"","count_seconds":2.527000}]
1
2223eb0f0d0c4941a16e83dc7274771b 2025-08-07 2025-08-07 10:45:00 259.270000 [{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":120.363000},
{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":63.953000},
{"app_name":"DataGrip","type_name":"Native","domain_site":"","count_seconds":53.974000},
{"app_name":"Brave Browser","type_name":"Native","domain_site":"","count_seconds":18.028000},
{"app_name":"Проводник","type_name":"Native","domain_site":"","count_seconds":2.952000}]
2
2223eb0f0d0c4941a16e83dc7274771b 2025-08-07 2025-08-07 11:00:00 75.119000 [{"app_name":"Postman","type_name":"Native","domain_site":"","count_seconds":41.887000},
{"app_name":"CLion","type_name":"Native","domain_site":"","count_seconds":33.232000}]
5
2223eb0f0d0c4941a16e83dc7274771b 2025-08-07 2025-08-07 11:15:00 204.564000 [{"app_name":"Time Doctor 2","type_name":"Native","domain_site":"","count_seconds":123.377000},
{"app_name":"Windows Explorer","type_name":"Native","domain_site":"","count_seconds":81.187000}]
3
2223eb0f0d0c4941a16e83dc7274771b 2025-08-07 2025-08-07 11:30:00 90.203000 [{"app_name":"Windows Explorer","type_name":"Native","domain_site":"","count_seconds":90.203000}] 4

(as seen running in this db<>fiddle)
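
A side note on the jsonb_typeof(granule->'apps') = 'array' filter that the query above flags as fragile: one way to make the guard independent of join/filter ordering (just a sketch, untested against this dataset) is to feed jsonb_array_elements() an empty array whenever 'apps' is not an array, so scalar or null values produce zero rows instead of an error, and the where clause can then be dropped:

cross join lateral jsonb_array_elements(
 case when jsonb_typeof(granule->'apps') = 'array'
 then granule->'apps'
 else '[]'::jsonb
 end) app -- non-array 'apps' values yield no rows here instead of raising "cannot extract elements from a scalar"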

answered Sep 2, 2025 at 21:58

