
Time Selection

Note

Understanding time windows in NQL - the critical concept that past 2d ≠ past 48h due to different data resolution and timezone handling.

Overview

Time selection is mandatory for all event table queries in NQL. But beyond being required syntax, time selection fundamentally affects:

  1. Data resolution - Daily aggregates vs 5-15 minute samples
  2. Timezone reference - Cloud instance timezone vs user's browser timezone
  3. Data availability - Retention limits vary by resolution
  4. Query performance - Shorter windows = faster queries
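
Because time selection is mandatory, an event-table query with no during clause is simply invalid. A minimal illustration (the ❌/✅ annotations follow the convention used throughout this page):

/* ❌ Invalid - event tables require a time selection */
execution.events
| summarize count = count()

/* ✅ Valid - time selection supplied */
execution.events during past 7d
| summarize count = count()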

Critical Misconception

past 2d is NOT the same as past 48h - even though 2 days = 48 hours mathematically!

They return different data because of:

  • Different resolutions (daily aggregates vs high-resolution samples)
  • Different timezone references (cloud instance vs user browser)
  • Different precision (calendar days vs exact hours)
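
A quick way to see this for yourself (illustrative - the actual numbers depend on your environment): run both forms at the same moment and compare the row counts.

/* Daily aggregates - few rows */
execution.events during past 2d
| summarize rows = count()

/* High-resolution samples - many rows, over a shifted window */
execution.events during past 48h
| summarize rows = count()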

Basic Syntax

/* Daily resolution (low-res) - Cloud timezone */
<table> during past <N>d

/* Hourly/minute resolution (high-res) - User timezone */
<table> during past <N>h
<table> during past <N>min

Examples:

/* 7 days of daily-aggregated data */
execution.events during past 7d

/* 48 hours of high-resolution samples (5-15 min intervals) */
execution.events during past 48h

/* 30 minutes of very recent data */
execution.events during past 30min

Two Time Selection Types

Type 1: Daily Resolution (past Xd)

Syntax: during past 7d, during past 30d

Characteristics:

  • Resolution: Daily aggregated data (one data point per day)
  • Timezone: Cloud instance timezone (configured by admin)
  • Precision: Full calendar days (00:00:00 to 23:59:59)
  • Retention: Up to 30 days for most tables
  • Best for: Trends, KPIs, weekly/monthly reports, executive dashboards

Example:

/* Crash trend over 30 days (daily aggregation) */
execution.crashes during past 30d
| summarize crash_count = count() by 1d
| sort start_time asc
/* Returns 30 data points (one per day) */

Type 2: Hourly/Minute Resolution (past Xh, past Xmin)

Syntax: during past 48h, during past 24h, during past 30min

Characteristics:

  • Resolution: High-resolution samples (5-15 minute intervals)
  • Timezone: User's browser timezone
  • Precision: Exact hour/minute intervals from current time
  • Retention: Up to 8 days for execution.events/connection.events, 30 days for others
  • Best for: Incident investigation, troubleshooting, real-time monitoring

Example:

/* Detailed CPU usage over last 24 hours (hourly trend) */
execution.events during past 24h
| summarize avg_cpu = cpu_time.avg() by 1h
| sort start_time asc
/* Returns 24 data points (one per hour) with 5-15 min sample precision */

Why past 2d ≠ past 48h

Even though 2 days mathematically equals 48 hours, NQL treats them very differently.

Real-world scenario:

You're in CET (Central European Time) querying a Nexthink cloud instance set to ET (Eastern Time), and you run both queries on Nov 11 at 12:00 PM (noon) CET.

Query 1: past 2d (Daily resolution, cloud timezone)

execution.events during past 2d
| summarize count = count()

Time range returned:

  • Cloud timezone (ET): Nov 9 00:00:00 – Nov 11 00:00:00 ET (the two most recent complete calendar days)
  • Your timezone (CET): Nov 9 06:00:00 – Nov 11 06:00:00 CET

Data: Daily aggregated samples (2 data points)

Query 2: past 48h (Hourly resolution, user timezone)

execution.events during past 48h
| summarize count = count()

Time range returned:

  • Cloud timezone (ET): Nov 9 06:00:00 – Nov 11 06:00:00 ET
  • Your timezone (CET): Nov 9 12:00:00 – Nov 11 12:00:00 CET

Data: High-resolution samples taken every 5-15 minutes (192-576 samples)

Comparison Table

| Aspect | past 2d | past 48h |
| --- | --- | --- |
| Data resolution | Daily aggregates | 5-15 min samples |
| Timezone | Cloud instance (ET) | User browser (CET) |
| Start time (ET) | Nov 9 00:00:00 | Nov 9 06:00:00 |
| Start time (CET) | Nov 9 06:00:00 | Nov 9 12:00:00 |
| End time (ET) | Nov 11 00:00:00 | Nov 11 06:00:00 |
| Data points | ~2 | ~192-576 |

In this scenario the two windows start (and end) six hours apart - same query moment, different time spans.

Same Query Time, Different Data Ranges!

Queries run at the exact same moment return different time ranges and different data resolutions.

Never mix past Xd and past Xh within the same investigation - you'll be comparing different time periods!

Data Retention Limits

Not all tables retain data equally. Retention varies by table and resolution.

execution.events & connection.events (Limited Retention)

These are the largest tables due to high sample volume:

| Resolution | Time format | Max retention | Example |
| --- | --- | --- | --- |
| High-res | past Xh, past Xmin | 8 days | past 48h ✅ · past 200h ❌ |
| Low-res | past Xd | 30 days | past 7d ✅ · past 60d ❌ |

Examples:

/* ✅ Works - daily resolution, within the 30-day limit */
execution.events during past 7d

/* ✅ Works - within 8-day high-res limit */
execution.events during past 48h

/* ❌ Fails - 200 hours exceeds the 8-day high-res retention */
execution.events during past 200h

/* ✅ Works - 30-day low-res retention */
execution.events during past 30d

Other Event Tables (Standard Retention)

Most other event tables have 30-day retention for both resolutions:

  • device_performance.events
  • device_performance.boots
  • web.events
  • web.page_views
  • execution.crashes
/* ✅ Both work - 30-day retention for both resolutions */
device_performance.events during past 30d
device_performance.events during past 48h

Object Tables (No Time Required)

Object tables represent current state and don't require time selection:

  • devices
  • users
  • applications
  • binaries
/* No time selection needed */
devices
| list device.name, operating_system.name

When to Use Each Format

Use past Xd (Days) For:

Long-term trends and KPIs

/* 30-day crash trend for executive dashboard */
execution.crashes during past 30d
| summarize crash_count = count() by 1d, application.name
| sort start_time asc

Weekly/monthly reports

/* Weekly device performance averages */
devices
| include device_performance.events during past 7d
| compute avg_cpu = cpu_usage.avg()
| list device.name, avg_cpu

Baseline comparisons

/* This week's baseline - re-run over an earlier window to compare */
execution.events during past 7d
| summarize avg_memory = real_memory.avg()

When you need data beyond 8 days

/* Must use daily resolution for 30-day queries */
execution.events during past 30d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by 1d

Use past Xh (Hours) For:

Incident investigation

/* Detailed analysis of recent crash spike */
execution.crashes during past 24h
| where application.name == "Outlook"
| summarize crash_count = count() by 1h
| sort start_time asc

Performance troubleshooting

/* Identify exact time of CPU spike */
execution.events during past 48h
| where device.name == "LAPTOP-001"
| summarize peak_cpu = cpu_time.avg() by 1h
| sort peak_cpu desc

Real-time monitoring

/* Last hour activity */
execution.events during past 1h
| where binary.name == "sensedlpprocessor.exe"
| summarize connections = number_of_established_connections.sum()

When precision matters

/* Exact timing of network issue (5-15 min precision) */
connection.events during past 6h
| where event.destination.domain == "service.company.com"
| summarize failure_rate = event.failed_connection_ratio.avg() by 15min
| sort start_time asc

Real-World Examples

Example: Comparing Daily vs Hourly Analysis

Scenario: Investigating Outlook crashes - should you use days or hours?

Daily view (trend analysis):

/* See crash pattern over time (KPI tracking) */
execution.crashes during past 30d
| where binary.name == "outlook.exe"
| summarize crash_count = count() by 1d
| sort start_time asc
Output: 30 data points showing daily crash counts.
Use for: identifying patterns (crashes worse on Mondays? Increasing over time?)

Hourly view (incident investigation):

/* Pinpoint exact timing of crash spike */
execution.crashes during past 48h
| where binary.name == "outlook.exe"
| summarize crash_count = count() by 1h
| sort start_time asc
Output: 48 data points showing hourly crash counts.
Use for: finding the root cause (crashes spike after 9 AM? Related to login?)

Example: Timezone Impact on Results

Scenario: You're in London (GMT), cloud instance in US East (ET) - 5 hour difference.

Query run at 2:00 PM GMT on Nov 15:

Using past 1d (cloud timezone ET):

execution.events during past 1d
| summarize count = count()
  • Returns: Nov 14 00:00 – Nov 15 00:00 ET
  • Your time: Nov 14 05:00 – Nov 15 05:00 GMT
  • Missing: the last 9 hours of your workday (05:00 – 14:00 GMT on Nov 15)

Using past 24h (user timezone GMT):

execution.events during past 24h
| summarize count = count()
  • Returns: Nov 14 14:00 – Nov 15 14:00 GMT
  • Includes: your full current workday up to 2:00 PM

Lesson: use past 24h for "the last 24 hours from now" and past 1d for "yesterday (the last complete calendar day)".
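
To make the distinction concrete, here is the same count expressed both ways (a sketch using the count() pattern from the queries above):

/* Rolling window: the 24 hours ending right now (user timezone) */
execution.events during past 24h
| summarize events_last_24h = count()

/* Calendar window: yesterday as one complete day (cloud timezone) */
execution.events during past 1d
| summarize events_yesterday = count()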

Example: Retention Limit Error

Scenario: Query fails with "exceeded retention limit"

Failed query:

/* ❌ Exceeds the 8-day high-res limit (240 hours = 10 days) */
execution.events during past 240h
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
Error: "Data retention limit exceeded for high-resolution query"

Fix - Option 1: Reduce time window

/* ✅ High-res, within the 8-day limit */
execution.events during past 168h
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name

Fix - Option 2: Use daily resolution

/* ✅ Daily resolution supports up to 30 days */
execution.events during past 10d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
/* Note: less precision, but covers the full 10-day period */

Example: Development Testing Strategy

Scenario: Building a complex query - start small, expand gradually

Stage 1: Test with minimal data (fast)

execution.events during past 1h  /* very short window */
| where binary.name == "outlook.exe"
| limit 10
/* Validates filter works, runs in <1 second */

Stage 2: Add aggregations, keep short window

execution.events during past 6h  /* expand slightly */
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
| limit 10
/* Validates aggregation logic */

Stage 3: Expand to production time window

execution.events during past 7d  /* full production window */
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg() by device.name
| sort avg_cpu desc
| limit 20
/* Ready for dashboard */

Time Selection Reference Table

| Time format | Resolution | Timezone | Retention (exec/conn) | Retention (other) | Best for |
| --- | --- | --- | --- | --- | --- |
| past 1d | Daily aggregate | Cloud | 30 days | 30 days | Yesterday's data |
| past 7d | Daily aggregate | Cloud | 30 days | 30 days | Weekly trends |
| past 30d | Daily aggregate | Cloud | 30 days | 30 days | Monthly KPIs |
| past 1h | 5-15 min samples | User | 8 days | 30 days | Last hour activity |
| past 6h | 5-15 min samples | User | 8 days | 30 days | Morning/afternoon period |
| past 24h | 5-15 min samples | User | 8 days | 30 days | Recent troubleshooting |
| past 48h | 5-15 min samples | User | 8 days | 30 days | Detailed investigation |
| past 30min | 5-15 min samples | User | 8 days | 30 days | Real-time monitoring |

Common Patterns

Pattern: Daily Trend Analysis

/* Track metric over time (chronological chart) */
<event_table> during past 30d
| where <filter>
| summarize metric = field.avg() by 1d
| sort start_time asc

Pattern: Hourly Spike Investigation

/* Pinpoint exact timing of issue */
<event_table> during past 48h
| where <filter>
| summarize metric = field.max() by 1h
| sort metric desc
| limit 5

Pattern: Recent Real-Time Check

/* Check current state (last hour) */
<event_table> during past 1h
| where <filter>
| summarize current_value = field.last()

Pattern: Baseline Comparison

/* Baseline for the current week (compare against an earlier run) */
<event_table> during past 7d
| where <filter>
| summarize avg_metric = field.avg()
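
A concrete instantiation of the baseline pattern, reusing the table and fields from earlier examples on this page - run it now for the current baseline, then re-run it later (or over a different window) to compare:

/* This week's average memory for Outlook (baseline) */
execution.events during past 7d
| where binary.name == "outlook.exe"
| summarize avg_memory = real_memory.avg()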

Tips & Tricks

Start Small, Expand Gradually

During query development, always start with short time windows:

/* Development - fast testing */
execution.events during past 1h
| where binary.name == "outlook.exe"
| limit 10

/* Production - full dataset */
execution.events during past 7d
| where binary.name == "outlook.exe"
| summarize avg_cpu = cpu_time.avg()

Use Consistent Time Format Within Investigation

Don't mix daily and hourly formats when comparing data:

/* ❌ Inconsistent - comparing different time periods! */
/* Query 1: */
execution.events during past 2d

/* Query 2: */
execution.events during past 48h
/* These return DIFFERENT data! */

/* ✅ Consistent - use the same format in both queries */
/* Query 1: */
execution.events during past 7d
/* Query 2: */
execution.events during past 7d

Know Your Retention Limits

execution.events and connection.events:

  • Daily (past Xd): 30 days ✅
  • Hourly (past Xh): 8 days ⚠️

Other tables:

  • Both formats: 30 days ✅
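
A sketch of where those limits bite, assuming the retention figures above (exact enforcement may vary by instance):

/* ✅ High-res, 7 days (168h) - inside the 8-day limit */
execution.events during past 168h

/* ❌ High-res, 10 days (240h) - exceeds the 8-day limit */
execution.events during past 240h

/* ✅ Same 10-day span at daily resolution - within the 30-day limit */
execution.events during past 10d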

Time Selection Affects Performance

Shorter windows = faster queries:

/* Slow - 30 days of data */
execution.events during past 30d
| summarize count = count() by binary.name
/* ~2-5 seconds */

/* Fast - 7 days of data */
execution.events during past 7d
| summarize count = count() by binary.name
/* ~1-2 seconds */

/* Very fast - 1 day of data */
execution.events during past 1d
| summarize count = count() by binary.name
/* <1 second */

Common Mistake: Assuming Days = Hours

/* ❌ WRONG assumption */
/* "I want the last 2 days, so I'll use past 48h" */
execution.events during past 48h
/* This uses user timezone, high-res data */
/* NOT the same as 2 calendar days! */

/* ✅ CORRECT - for 2 calendar days */
execution.events during past 2d
/* This uses cloud timezone, daily aggregates */

Common Mistake: Exceeding Retention Without Knowing

/* ❌ Will fail - 240 hours of high-res data exceeds the 8-day limit */
execution.events during past 240h
| summarize avg = cpu_time.avg()
/* Error: "Exceeded retention limit" */

/* ✅ Correct - within limits */
execution.events during past 7d
| summarize avg = cpu_time.avg()

Timezone Gotcha: "Yesterday" Depends on Timezone

If you're in CET and cloud is ET (6h behind):

/* Query at 2:00 AM CET on Nov 16 (8:00 PM ET on Nov 15) */
execution.events during past 1d
/* Returns: Nov 14 00:00 – Nov 15 00:00 ET (yesterday in the cloud timezone) */
/* In CET: Nov 14 06:00 – Nov 15 06:00 CET */
/* ⚠️ This is NOT "yesterday" (Nov 15) in your timezone! */
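
If what you actually want is your own recent window rather than the cloud's calendar day, the hourly format sidesteps the shift entirely:

/* The last 24 hours counted back from your browser's clock */
execution.events during past 24h
| summarize count = count()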

Performance Considerations

Time window is one of the biggest performance factors:

  1. Shorter windows = faster queries
  2. past 1d is ~7x faster than past 7d
  3. past 7d is ~4x faster than past 30d
  4. Daily resolution is faster than hourly for the same time span
  5. past 7d (7 daily points) is faster than past 168h (672-2016 samples)
  6. Combine with early filtering for best performance:

    /* Fast - filter early + short window */
    execution.events during past 1d
    | where binary.name == "outlook.exe"
    | summarize avg_cpu = cpu_time.avg()
    /* Runs in <1 second */

  7. Development strategy: always test with short windows first:

    /* Step 1: Test with 1 hour */
    execution.events during past 1h | ...

    /* Step 2: Expand to 1 day */
    execution.events during past 1d | ...

    /* Step 3: Production with 7 days */
    execution.events during past 7d | ...
    

Additional Resources