Time Selection
Note: Understanding time windows in NQL - the critical concept that past 2d ≠ past 48h due to different data resolution and timezone handling.
Overview
Time selection is mandatory for all event table queries in NQL. But beyond being required syntax, time selection fundamentally affects:
- Data resolution - Daily aggregates vs 5-15 minute samples
- Timezone reference - Cloud instance timezone vs user's browser timezone
- Data availability - Retention limits vary by resolution
- Query performance - Shorter windows = faster queries
Critical Misconception
past 2d is NOT the same as past 48h - even though 2 days = 48 hours mathematically!
They return different data based on:
- Different resolutions (daily vs hourly samples)
- Different timezone references (cloud vs user)
- Different precision (calendar days vs exact hours)
Basic Syntax
/* Daily resolution (low-res) - Cloud timezone */
<table> during past <N>d
/* Hourly/minute resolution (high-res) - User timezone */
<table> during past <N>h
<table> during past <N>min
Examples:
/* 7 days of daily-aggregated data */
execution.events during past 7d
/* 48 hours of high-resolution samples (5-15 min intervals) */
execution.events during past 48h
/* 30 minutes of very recent data */
execution.events during past 30min
Two Time Selection Types
Type 1: Daily Resolution (past Xd)
Syntax: during past 7d, during past 30d
Characteristics:
- Resolution: Daily aggregated data (one data point per day)
- Timezone: Cloud instance timezone (configured by admin)
- Precision: Full calendar days (00:00:00 to 23:59:59)
- Retention: Up to 30 days for most tables
- Best for: Trends, KPIs, weekly/monthly reports, executive dashboards
Example:
/* Crash trend over 30 days (daily aggregation) */
execution.crashes during past 30d
| summarize crash_count = count() by 1d
| sort start_time asc
/* Returns 30 data points (one per day) */
Type 2: Hourly/Minute Resolution (past Xh, past Xmin)
Syntax: during past 48h, during past 24h, during past 30min
Characteristics:
- Resolution: High-resolution samples (5-15 minute intervals)
- Timezone: User's browser timezone
- Precision: Exact hour/minute intervals from current time
- Retention: Up to 8 days for execution.events/connection.events, 30 days for others
- Best for: Incident investigation, troubleshooting, real-time monitoring
Example:
/* Detailed CPU usage over last 24 hours (hourly trend) */
execution.events during past 24h
| summarize avg_cpu = cpu_time.avg() by 1h
| sort start_time asc
/* Returns 24 data points (one per hour) with 5-15 min sample precision */
Why past 2d ≠ past 48h
Even though 2 days mathematically equals 48 hours, NQL treats them very differently.
Real-world scenario:
You're in CET (Central European Time) querying a Nexthink cloud instance in ET (Eastern Time) on Nov 11 at 11:26 AM CET.
Query 1: past 2d (Daily resolution, cloud timezone)
Time range returned:
- Cloud timezone (ET): Nov 10 00:00:00 – Nov 12 00:00:00 ET
- Your timezone (CET): Nov 10 06:00:00 – Nov 12 06:00:00 CET
Data: Daily aggregated samples (2 data points)
Query 2: past 48h (Hourly resolution, user timezone)
Time range returned:
- Cloud timezone (ET): Nov 9 06:00:00 – Nov 11 06:00:00 ET
- Your timezone (CET): Nov 9 12:00:00 – Nov 11 12:00:00 CET
Data: High-resolution samples taken every 5-15 minutes (192-576 samples)
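Written out against execution.events for concreteness (the scenario above does not name a table), the two queries are simply:
/* Query 1 - daily resolution, cloud timezone */
execution.events during past 2d
/* Query 2 - high-resolution samples, user timezone */
execution.events during past 48h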
Comparison Table
| Aspect | past 2d | past 48h |
|---|---|---|
| Data resolution | Daily aggregates | 5-15 min samples |
| Timezone | Cloud instance (ET) | User browser (CET) |
| Start time (ET) | Nov 10 00:00:00 | Nov 9 06:00:00 |
| Start time (CET) | Nov 10 06:00:00 | Nov 9 12:00:00 |
| Data points | ~2 | ~192-576 |
| Net effect | Day boundaries offset 6 hours from your CET day | Starts 18 hours earlier than past 2d |
Same Query Time, Different Data Ranges!
Queries run at the exact same moment return different time ranges and different data resolution.
Never mix past Xd and past Xh within the same investigation - you'll be comparing different time periods!
Data Retention Limits
Not all tables retain data equally. Retention varies by table and resolution.
execution.events & connection.events (Limited Retention)
These are the largest tables due to high sample volume:
| Resolution | Time Format | Max Retention | Example |
|---|---|---|---|
| High-res | past Xh, past Xmin | 8 days | past 48h ✅, past 15d ❌ |
| Low-res | past Xd | 30 days | past 7d ✅, past 60d ❌ |
Examples:
/* ✅ Works - within 8-day high-res limit */
execution.events during past 7d
/* ✅ Works - within 8-day high-res limit */
execution.events during past 48h
/* ❌ May fail - exceeds 8-day high-res retention */
execution.events during past 15d
/* ✅ Works - 30-day low-res retention */
execution.events during past 30d
Other Event Tables (Standard Retention)
Most other event tables have 30-day retention for both resolutions:
- device_performance.events
- device_performance.boots
- web.events
- web.page_views
- execution.crashes
/* ✅ Both work - 30-day retention for both resolutions */
device_performance.events during past 30d
device_performance.events during past 48h
Object Tables (No Time Required)
Object tables represent current state and don't require time selection:
- devices
- users
- applications
- binaries
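A minimal sketch of an object-table query (the name field is assumed here for illustration):
/* Object table - current state, no during clause needed */
devices
| list name
| limit 10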
When to Use Each Format
Use past Xd (Days) For:
Long-term trends and KPIs
/* 30-day crash trend for executive dashboard */
execution.crashes during past 30d
| summarize crash_count = count() by 1d, application.name
| sort start_time asc
Weekly/monthly reports
/* Weekly device performance averages */
devices
| include device_performance.events during past 7d
| compute avg_cpu = cpu_usage.avg()
| list name, avg_cpu
Baseline comparisons
/* This week's average memory, used as a baseline for week-over-week comparison */
execution.events during past 7d
| summarize avg_memory = real_memory.avg()
When you need data beyond 8 days
/* Must use daily resolution for 30-day queries */
execution.events during past 30d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by 1d
Use past Xh (Hours) For:
Incident investigation
/* Detailed analysis of recent crash spike */
execution.crashes during past 24h
| where application.name == "Outlook"
| summarize crash_count = count() by 1h
| sort start_time asc
Performance troubleshooting
/* Identify exact time of CPU spike */
execution.events during past 48h
| where device.name == "LAPTOP-001"
| summarize peak_cpu = cpu_time.avg() by 1h
| sort peak_cpu desc
Real-time monitoring
/* Last hour activity */
execution.events during past 1h
| where binary.name == "sensedlpprocessor.exe"
| summarize connections = number_of_established_connections.sum()
When precision matters
/* Exact timing of network issue (5-15 min precision) */
connection.events during past 6h
| where event.destination.domain == "service.company.com"
| summarize failure_rate = event.failed_connection_ratio.avg() by 15min
| sort start_time asc
Real-World Examples
Example: Comparing Daily vs Hourly Analysis
Scenario: Investigating Outlook crashes - should you use days or hours?
Daily view (trend analysis):
/* See crash pattern over time (KPI tracking) */
execution.crashes during past 30d
| where binary.name == "outlook.exe"
| summarize crash_count = count() by 1d
| sort start_time asc
Hourly view (incident investigation):
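A query along these lines produces the hourly view - a sketch mirroring the daily query above, narrowed to the last 48 hours and bucketed by hour:
/* Pinpoint when crashes spike, hour by hour */
execution.crashes during past 48h
| where binary.name == "outlook.exe"
| summarize crash_count = count() by 1h
| sort start_time asc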
Output: 48 data points showing hourly crash counts
Use for: Finding root cause (crashes spike after 9 AM? Related to login?)
Example: Timezone Impact on Results
Scenario: You're in London (GMT), cloud instance in US East (ET) - 5 hour difference.
Query run at 2:00 PM GMT on Nov 15:
Using past 1d (cloud timezone ET): the window is aligned to full calendar days at ET midnight, so the day boundary falls at 5:00 AM GMT rather than at your local midnight.
Using past 24h (user timezone GMT): the window runs from Nov 14 2:00 PM to Nov 15 2:00 PM GMT - exactly 24 hours back from the moment you run the query.
Lesson: Use past 24h for "last day from now", use past 1d for "yesterday (calendar day)"
Example: Retention Limit Error
Scenario: Query fails with "exceeded retention limit"
Failed query:
/* ❌ Exceeds 8-day high-res limit */
execution.events during past 15d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
Fix - Option 1: Reduce time window
/* ✅ Within 8-day limit */
execution.events during past 7d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
Fix - Option 2: Use daily resolution
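A sketch of that option, mirroring the working 30-day example earlier on this page (daily aggregates fall under the 30-day low-res retention):
/* ✅ Daily aggregates - within 30-day low-res retention */
execution.events during past 30d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by 1d, device.name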
Example: Development Testing Strategy
Scenario: Building a complex query - start small, expand gradually
Stage 1: Test with minimal data (fast)
execution.events during past 1h /* very short window */
| where binary.name == "outlook.exe"
| limit 10
/* Validates filter works, runs in <1 second */
Stage 2: Add aggregations, keep short window
execution.events during past 6h /* expand slightly */
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
| limit 10
/* Validates aggregation logic */
Stage 3: Expand to production time window
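For example, once the filter and aggregation are validated, widen the window to whatever the report needs (past 7d here is just an illustration):
execution.events during past 7d /* production time window */
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
/* Full analysis with the validated logic */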
Time Selection Reference Table
| Time Format | Resolution | Timezone | Retention (exec/conn) | Retention (other) | Best For |
|---|---|---|---|---|---|
| past 1d | Daily aggregate | Cloud | 30 days | 30 days | Yesterday's data |
| past 7d | Daily aggregate | Cloud | 30 days | 30 days | Weekly trends |
| past 30d | Daily aggregate | Cloud | 30 days | 30 days | Monthly KPIs |
| past 1h | 5-15 min samples | User | 8 days | 30 days | Last hour activity |
| past 6h | 5-15 min samples | User | 8 days | 30 days | Morning/afternoon period |
| past 24h | 5-15 min samples | User | 8 days | 30 days | Recent troubleshooting |
| past 48h | 5-15 min samples | User | 8 days | 30 days | Detailed investigation |
| past 30min | 5-15 min samples | User | 8 days | 30 days | Real-time monitoring |
Common Patterns
Pattern: Daily Trend Analysis
/* Track metric over time (chronological chart) */
<event_table> during past 30d
| where <filter>
| summarize metric = field.avg() by 1d
| sort start_time asc
Pattern: Hourly Spike Investigation
/* Pinpoint exact timing of issue */
<event_table> during past 48h
| where <filter>
| summarize metric = field.max() by 1h
| sort metric desc
| limit 5
Pattern: Recent Real-Time Check
/* Check current state (last hour) */
<event_table> during past 1h
| where <filter>
| summarize current_value = field.last()
Pattern: Baseline Comparison
/* This week's average, used as a baseline against prior weeks */
<event_table> during past 7d
| where <filter>
| summarize avg_metric = field.avg()
Tips & Tricks
Start Small, Expand Gradually
During query development, always start with short time windows:
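For example, widen the window in steps, as in the Development Testing Strategy above:
execution.events during past 1h /* step 1: validate filters */
execution.events during past 6h /* step 2: validate aggregations */
execution.events during past 7d /* step 3: production window */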
Use Consistent Time Format Within Investigation
Don't mix daily and hourly formats when comparing data:
/* ❌ Inconsistent - comparing different time periods! */
/* Query 1: */
execution.events during past 2d
/* Query 2: */
execution.events during past 48h
/* These return DIFFERENT data! */
/* ✅ Consistent - use same format */
execution.events during past 7d /* both queries use daily resolution */
execution.events during past 7d
Know Your Retention Limits
execution.events and connection.events:
- Daily (past Xd): 30 days ✅
- Hourly (past Xh): 8 days ⚠️
Other tables:
- Both formats: 30 days ✅
Time Selection Affects Performance
Shorter windows = faster queries:
/* Slow - 30 days of data */
execution.events during past 30d
| summarize event_count = count() by binary.name
/* ~2-5 seconds */
/* Fast - 7 days of data */
execution.events during past 7d
| summarize event_count = count() by binary.name
/* ~1-2 seconds */
/* Very fast - 1 day of data */
execution.events during past 1d
| summarize event_count = count() by binary.name
/* <1 second */
Common Mistake: Assuming Days = Hours
/* ❌ WRONG assumption */
/* "I want the last 2 days, so I'll use past 48h" */
execution.events during past 48h
/* This uses user timezone, high-res data */
/* NOT the same as 2 calendar days! */
/* ✅ CORRECT - for 2 calendar days */
execution.events during past 2d
/* This uses cloud timezone, daily aggregates */
Common Mistake: Exceeding Retention Without Knowing
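A minimal illustration of the trap, reusing the failing query from the retention example above (whether it actually fails depends on the table, as described under Data Retention Limits):
/* ❌ May exceed retention for execution.events */
execution.events during past 15d
/* Check the retention table above before widening a window */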
Timezone Gotcha: "Yesterday" Depends on Timezone
If you're in CET and the cloud instance is in ET (6 hours behind), past 1d is evaluated against the ET calendar day: "yesterday" starts and ends at 06:00 CET, not at your local midnight. Use past 24h when you want exactly the last 24 hours counted back from your local "now".
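The two alternatives side by side (both taken from examples earlier on this page):
/* "Yesterday" as a calendar day in the cloud (ET) timezone */
execution.events during past 1d
/* Exactly the last 24 hours back from your local "now" */
execution.events during past 24h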
Performance Considerations
Time window is one of the biggest performance factors:
- Shorter windows = faster queries
  - past 1d is ~7x faster than past 7d
  - past 7d is ~4x faster than past 30d
- Daily resolution is faster than hourly for the same time span
  - past 7d (7 samples) is faster than past 168h (672-2016 samples)
- Combine with early filtering for best performance (see the sketch after this list)
- Development strategy: always test with short windows first
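For example, combining a short window with an early filter (names reused from earlier examples on this page):
/* Short window + early filter = fastest */
execution.events during past 1d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name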
Related Topics
- Tables & Data Model - Which tables require time selection
- NQL Basics - Understanding query structure with time windows
- where - Filtering Data - Combining time windows with filters for performance
- Query Performance Guide - Optimizing time windows for speed
Additional Resources
- NQL Syntax Cheat Sheet - Quick time selection reference
- Common Query Templates - Templates showing time selection patterns
- Common Error Messages - "Exceeded retention limit" troubleshooting