Restore accidentally removed files, a few additional import/calculation fixes

2026-02-09 10:19:35 -05:00
parent 6aefc1b40d
commit 38b12c188f
209 changed files with 69925 additions and 412 deletions

docs/METRICS_AUDIT.md Normal file

@@ -0,0 +1,346 @@
# Metrics Calculation Pipeline Audit
**Date:** 2026-02-07
**Scope:** All 6 SQL calculation scripts, custom DB functions, import pipeline, and live data verification
## Overview
The metrics pipeline in `inventory-server/scripts/calculate-metrics-new.js` runs 6 SQL scripts sequentially:
1. `update_daily_snapshots.sql` — Aggregates daily per-product sales/receiving data
2. `update_product_metrics.sql` — Calculates the main product_metrics table (KPIs, forecasting, status)
3. `update_periodic_metrics.sql` — ABC classification, average lead time
4. `calculate_brand_metrics.sql` — Brand-level aggregated metrics
5. `calculate_vendor_metrics.sql` — Vendor-level aggregated metrics
6. `calculate_category_metrics.sql` — Category-level metrics with hierarchy rollups
### Database Scale
| Table | Row Count |
|---|---|
| products | 681,912 |
| orders | 2,883,982 |
| purchase_orders | 256,809 |
| receivings | 313,036 |
| daily_product_snapshots | 678,312 (601 distinct dates, since 2024-06-01) |
| product_metrics | 681,912 |
| brand_metrics | 1,789 |
| vendor_metrics | 281 |
| category_metrics | 610 |
---
## Issues Found
### ISSUE 1: [HIGH] Order status filter is non-functional — numeric codes vs text comparison
**Files:** `update_daily_snapshots.sql` lines 86-101, `update_product_metrics.sql` lines 89, 178-183
**Confirmed by data:** All order statuses are numeric strings ('100', '50', '55', etc.)
**Status mappings from:** `docs/prod_registry.class.php`
**Description:** The SQL filters `COALESCE(o.status, 'pending') NOT IN ('canceled', 'returned')` and `o.status NOT IN ('canceled', 'returned')` are used throughout the pipeline to exclude canceled/returned orders. However, the import pipeline stores order statuses as their **raw numeric codes** from the production MySQL database (e.g., '100', '50', '55', '90', '92'). There are **zero text status values** in the orders table.
This means these filters **never exclude any rows** — every comparison is `'100' NOT IN ('canceled', 'returned')` which is always true.
**Actual status distribution (with confirmed meanings):**
| Status | Meaning | Count | Negative Qty | Assessment |
|---|---|---|---|---|
| 100 | shipped | 2,862,792 | 3,352 | Completed — correct to include |
| 50 | awaiting_products | 11,109 | 0 | In-progress — not yet shipped |
| 55 | shipping_later | 5,689 | 0 | In-progress — not yet shipped |
| 56 | shipping_together | 2,863 | 0 | In-progress — not yet shipped |
| 90 | awaiting_shipment | 38 | 0 | Near-complete — not yet shipped |
| 92 | awaiting_pickup | 71 | 0 | Near-complete — awaiting customer |
| 95 | shipped_confirmed | 5 | 0 | Completed — correct to include |
| 15 | cancelled | 1 | 0 | Should be excluded |
**Full status reference (from prod_registry.class.php):**
- 0=created, 10=unfinished, **15=cancelled**, 16=combined, 20=placed, 22=placed_incomplete
- 30=cancelled_old (historical), 40=awaiting_payment, 50=awaiting_products
- 55=shipping_later, 56=shipping_together, 60=ready, 61=flagged
- 62=fix_before_pick, 65=manual_picking, 70=in_pt, 80=picked
- 90=awaiting_shipment, 91=remote_wait, **92=awaiting_pickup**, 93=fix_before_ship
- **95=shipped_confirmed**, **100=shipped**
**Severity revised to HIGH (from CRITICAL):** Now that we know the actual meanings, no cancelled/refunded orders are being miscounted (only 1 cancelled order exists, status=15). The real concern is twofold:
1. **The text-based filter is dead code** — it can never match any row. Either map statuses to text during import (like POs do) or change SQL to use numeric comparisons.
2. **~19,775 unfulfilled orders** (statuses 50/55/56/90/92) are counted as completed sales. These are orders in various stages of fulfillment that haven't shipped yet. While most will eventually ship, counting them now inflates current-period metrics. At 0.69% of total orders, the financial impact is modest but the filter should work correctly on principle.
**Note:** PO statuses ARE properly mapped to text ('canceled', 'done', etc.) in the import pipeline. Only order statuses are numeric.
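A minimal sketch of the numeric-comparison option, assuming statuses stay stored as text-encoded numbers; the exact inclusion policy is a business decision based on the status reference above:
```sql
-- Sketch only: filter on the numeric codes actually stored
WHERE o.status::int >= 20          -- drops created/unfinished/cancelled/combined (0, 10, 15, 16)
  AND o.status::int <> 30          -- drops historical cancelled_old
-- Stricter alternative: count shipped orders only
-- WHERE o.status IN ('95', '100')
```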
---
### ISSUE 2: [CRITICAL] Daily Snapshots use current stock instead of historical EOD stock
**File:** `update_daily_snapshots.sql`, lines 126-135, 173
**Confirmed by data:** Top product (pid 666925) shows `eod_stock_quantity = 0` for ALL dates even though it sold 28 units on Jan 28 (clearly had stock then)
**Description:** The `CurrentStock` CTE reads `stock_quantity` directly from the `products` table at query execution time. When the script processes historical dates (today minus 1-4 days), it writes **today's stock** as if it were the end-of-day stock for those past dates.
**Cascading impact on product_metrics:**
- `avg_stock_units_30d` / `avg_stock_cost_30d` — Wrong averages
- `stockout_days_30d` — Undercounts (only based on current stock state, not historical)
- `stockout_rate_30d`, `service_level_30d`, `fill_rate_30d` — All derived from wrong stockout data
- `gmroi_30d` — Wrong denominator (avg stock cost)
- `stockturn_30d` — Wrong denominator (avg stock units)
- `sell_through_30d` — Affected by stock level inaccuracy
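One possible fix direction, sketched below under assumed column names (`quantity` on orders and receivings, `received_date` on receivings), is to roll today's stock back through movements that occurred after the snapshot date; manual adjustments and shrinkage would still be missed:
```sql
-- Sketch only: reconstruct end-of-day stock for a past _snapshot_date by replaying
-- later movements against today's stock_quantity (column names are assumptions)
SELECT p.pid,
       p.stock_quantity
         + COALESCE((SELECT SUM(o.quantity) FROM public.orders o
                     WHERE o.pid = p.pid AND o.date::date > _snapshot_date), 0)
         - COALESCE((SELECT SUM(r.quantity) FROM public.receivings r
                     WHERE r.pid = p.pid AND r.received_date::date > _snapshot_date), 0)
         AS eod_stock_quantity
FROM public.products p;
```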
---
### ISSUE 3: [CRITICAL] Snapshot coverage is 0.17% — most products have no snapshot data
**Confirmed by data:** 678,312 snapshot rows across 601 dates = ~1,128 products/day out of 681,912 total
**Description:** The daily snapshots script only creates rows for products with sales or receiving activity on that date (`ProductsWithActivity` CTE, line 136). This means:
- 91.1% of products (621,221) have NULL `sales_30d` — they had no orders in the last 30 days so no snapshot rows exist
- `AVG(eod_stock_quantity)` averages only across days with activity, not 30 days
- `stockout_days_30d` only counts stockout days where there was ALSO some activity
- A product out of stock with zero sales gets zero stockout_days even though it was stocked out
This is by design (to avoid creating 681K rows/day) but means stock-related metrics are systematically biased.
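If wider coverage is wanted, one sketch, assuming the existing `ProductsWithActivity` CTE can be extended, is to also snapshot every product currently in stock so zero-activity days still produce rows:
```sql
-- Sketch only: products with activity plus everything currently in stock
ProductsToSnapshot AS (
    SELECT pid FROM ProductsWithActivity
    UNION
    SELECT pid FROM public.products WHERE stock_quantity > 0
)
```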
---
### ISSUE 4: [HIGH] `costeach` fallback to 50% of price in import pipeline
**File:** `inventory-server/scripts/import/orders.js` (line ~573)
**Description:** When the MySQL `order_costs` table has no record for an order item, `costeach` defaults to `price * 0.5`. There is **no flag** in the PostgreSQL data to distinguish actual costs from estimated ones.
**Data impact:** 385,545 products (56.5%) have `current_cost_price = 0` AND `current_landing_cost_price = 0`. For these products, the COGS calculation in daily_snapshots falls through the chain:
1. `o.costeach` — May be the 50% estimate from import
2. `get_weighted_avg_cost()` — Returns NULL if no receivings exist
3. `p.landing_cost_price` — Always NULL (hardcoded in import)
4. `p.cost_price` — 0 for 56.5% of products
Only 27 products have zero COGS despite positive sales, so the `costeach` field is doing its job for products that sell. However, because of the 50% fallback, margins for those products are estimates, not actuals.
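A minimal sketch of the flag proposed in recommendation 8 below; the import would set it to TRUE whenever the 50% fallback is used:
```sql
-- Sketch only: record which order lines carry estimated costs
ALTER TABLE public.orders
    ADD COLUMN IF NOT EXISTS costeach_estimated BOOLEAN NOT NULL DEFAULT FALSE;
```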
---
### ISSUE 5: [HIGH] `landing_cost_price` is always NULL
**File:** `inventory-server/scripts/import/products.js` (line ~175)
**Description:** The import explicitly sets `landing_cost_price = NULL` for all products. The daily_snapshots COGS calculation uses it as a fallback: `COALESCE(o.costeach, get_weighted_avg_cost(...), p.landing_cost_price, p.cost_price)`. Since it's always NULL, this fallback step is useless and the chain jumps straight to `cost_price`.
The `product_metrics` field `current_landing_cost_price` is populated as `COALESCE(p.landing_cost_price, p.cost_price, 0.00)`, so it equals `cost_price` for all products. Any UI showing "landing cost" is actually just showing `cost_price`.
---
### ISSUE 6: [HIGH] Vendor lead time is drastically wrong — missing supplier_id join
**File:** `calculate_vendor_metrics.sql`, lines 62-82
**Confirmed by data:** Vendor-level lead times are 2-10x higher than product-level lead times
**Description:** The vendor metrics lead time joins POs to receivings only by `pid`:
```sql
LEFT JOIN public.receivings r ON r.pid = po.pid
```
But the periodic metrics lead time correctly matches supplier:
```sql
JOIN public.receivings r ON r.pid = po.pid AND r.supplier_id = po.supplier_id
```
Without supplier matching, a PO for product X from Vendor A can match a receiving of product X from Vendor B, creating inflated/wrong lead times.
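The fix is to apply the same supplier match in `calculate_vendor_metrics.sql`:
```sql
LEFT JOIN public.receivings r
       ON r.pid = po.pid
      AND r.supplier_id = po.supplier_id
```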
**Measured discrepancies:**
| Vendor | Vendor Metrics Lead Time | Avg Product Lead Time |
|---|---|---|
| doodlebug design inc. | 66 days | 14 days |
| Notions | 55 days | 4 days |
| Simple Stories | 59 days | 27 days |
| Ranger Industries | 31 days | 5 days |
---
### ISSUE 7: [MEDIUM] Net revenue does not subtract returns
**File:** `update_daily_snapshots.sql`, line 184
**Description:** `net_revenue = gross_revenue - discounts`. Standard accounting: `net_revenue = gross_revenue - discounts - returns`. The `returns_revenue` is calculated separately but not deducted.
**Data impact:** There are 3,352 orders with negative quantities (returns), totaling -5,499 units. These returns are tracked in `returns_revenue` but not reflected in `net_revenue`, which means all downstream revenue-based metrics are slightly overstated.
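A minimal sketch of the corrected expression in the snapshot aggregation (see recommendation 6 below):
```sql
-- Sketch only: deduct returns as well as discounts
(gross_revenue - discounts - returns_revenue) AS net_revenue
```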
---
### ISSUE 8: [MEDIUM] Lifetime revenue subquery references wrong table columns
**File:** `update_product_metrics.sql`, lines 323-329
**Description:** The lifetime revenue estimation fallback queries:
```sql
SELECT revenue_7d / NULLIF(sales_7d, 0)
FROM daily_product_snapshots
WHERE pid = ci.pid AND sales_7d > 0
```
But `daily_product_snapshots` does NOT have `revenue_7d` or `sales_7d` columns — those exist in `product_metrics`. This subquery either errors silently or returns NULL. The effect is that the estimation always falls back to `current_price * total_sold`.
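A sketch of the corrected fallback using the column names suggested in recommendation 3; verify them against the actual `daily_product_snapshots` schema:
```sql
SELECT net_revenue / NULLIF(units_sold, 0)
FROM daily_product_snapshots
WHERE pid = ci.pid AND units_sold > 0
```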
---
### ISSUE 9: [MEDIUM] Brand/Vendor metrics COGS filter inflates margins
**Files:** `calculate_brand_metrics.sql` lines 31, `calculate_vendor_metrics.sql` line 32
**Description:** `SUM(CASE WHEN pm.cogs_30d > 0 THEN pm.cogs_30d ELSE 0 END)` excludes products with zero COGS. But if a product has sales revenue and zero COGS (missing cost data), the brand/vendor totals will include the revenue but not the COGS, artificially inflating the margin.
**Data context:** Brand metrics revenue matches product_metrics aggregation exactly for sales counts, but shows small discrepancies in revenue (e.g., Stamperia: $7,613.98 brand vs $7,611.11 actual). These tiny diffs come from the `> 0` filtering excluding products with negative revenue.
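Per recommendation 7 below, the unconditional aggregate keeps revenue and COGS paired:
```sql
SUM(pm.cogs_30d)   -- instead of SUM(CASE WHEN pm.cogs_30d > 0 THEN pm.cogs_30d ELSE 0 END)
```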
---
### ISSUE 10: [MEDIUM] Extreme margin values from $0.01 price orders
**Confirmed by data:** 73 products with margin > 100%, 119 with margin < -100%
**Examples:**
| Product | Revenue | COGS | Margin |
|---|---|---|---|
| Flower Gift Box Die (pid 624756) | $0.02 | $29.98 | -149,800% |
| Special Flowers Stamp Set (pid 614513) | $0.01 | $11.97 | -119,632% |
These are products with extremely low prices (likely samples, promos, or data errors) where the order price was $0.01. The margin calculation is mathematically correct but these outliers skew any aggregate margin statistics.
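One sketch following recommendation 9 below; the `price` column name on order items is an assumption:
```sql
-- Sketch only: keep token-priced order lines out of margin aggregates
WHERE o.price > 0.01
```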
---
### ISSUE 11: [MEDIUM] Sell-through rate has edge cases yielding negative/extreme values
**File:** `update_product_metrics.sql`, lines 358-361
**Confirmed by data:** 30 products with negative sell-through, 10 with sell-through > 200%
**Description:** Beginning inventory is approximated as `current_stock + sales - received + returns`. When inventory adjustments, shrinkage, or manual corrections occur, this approximation breaks. Edge cases:
- Products with many manual stock adjustments → negative denominator → negative sell-through
- Products with beginning stock near zero but decent sales → sell-through > 100%
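A minimal clamp per recommendation 10 below (alternatively, flag outliers instead of clamping):
```sql
GREATEST(0, LEAST(sell_through_30d, 200))
```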
---
### ISSUE 12: [MEDIUM] `total_sold` uses different status filter than orders import
**Import pipeline confirmed:**
- Orders import: `order_status >= 15` (includes processing/pending orders)
- `total_sold` in products: `order_status >= 20` (more restrictive)
This means `lifetime_sales` (from `total_sold`) is systematically lower than what you'd calculate by summing the orders table. The discrepancy is confirmed:
| Product | total_sold | orders sum | Gap |
|---|---|---|---|
| pid 31286 | 13,786 | 4,241 | 9,545 |
| pid 44309 | 11,978 | 3,119 | 8,859 |
The large gaps are because the orders table only has data from the import start date (~2024), while `total_sold` includes all-time sales from MySQL. This is expected behavior, not a bug, but it means the `lifetime_revenue_quality` flag is important — most products show 'estimated' quality.
---
### ISSUE 13: [MEDIUM] Category rollup may double-count products in multiple hierarchy levels
**File:** `calculate_category_metrics.sql`, lines 42-66
**Description:** The `RolledUpMetrics` CTE uses:
```sql
dcm.cat_id = ch.cat_id OR dcm.cat_id = ANY(SELECT cat_id FROM category_hierarchy WHERE ch.cat_id = ANY(ancestor_ids))
```
If products are assigned to categories at multiple levels in the same branch (e.g., both "Paper Crafts" and "Scrapbook Paper" which is a child of "Paper Crafts"), those products' metrics would be counted twice in the parent's rollup.
---
### ISSUE 14: [LOW] `exclude_forecast` removes products from metrics entirely
**File:** `update_product_metrics.sql`, line 509
**Description:** `WHERE s.exclude_forecast IS FALSE OR s.exclude_forecast IS NULL` is in the main INSERT's WHERE clause, so products with `exclude_forecast = TRUE` won't appear in `product_metrics` at all, rather than just having their forecast fields nulled. Currently all 681,912 products are present in product_metrics, so this does not yet affect any products.
---
### ISSUE 15: [LOW] Daily snapshots only look back 5 days
**File:** `update_daily_snapshots.sql`, line 14 — `_process_days INT := 5`
If import data arrives late (>5 days), those days will never get snapshots populated. There is a separate `backfill/rebuild_daily_snapshots.sql` for historical rebuilds.
---
### ISSUE 16: [INFO] Timezone risk in order date import
**File:** `inventory-server/scripts/import/orders.js`
MySQL `DATETIME` values are timezone-naive. The import uses `new Date(order.date)` which interprets them using the import server's local timezone. The SSH config specifies `timezone: '-05:00'` for MySQL (always EST). If the import server is in a different timezone, orders near midnight could land on the wrong date in the daily snapshots calculation.
---
## Custom Functions Review
### `calculate_sales_velocity(sales_30d, stockout_days_30d)`
- Divides `sales_30d` by effective selling days: `GREATEST(30 - stockout_days, CASE WHEN sales > 0 THEN 14 ELSE 30 END)` (sketched below)
- The 14-day floor prevents extreme velocity for products mostly out of stock
- **Sound approach** — the only concern is that stockout_days is unreliable (Issues 2, 3)
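A minimal sketch of the divisor logic described above; the production function may differ in naming and edge-case handling:
```sql
CREATE OR REPLACE FUNCTION calculate_sales_velocity(sales_30d NUMERIC, stockout_days_30d INTEGER)
RETURNS NUMERIC AS $$
    SELECT sales_30d / GREATEST(
        30 - stockout_days_30d,
        CASE WHEN sales_30d > 0 THEN 14 ELSE 30 END
    );
$$ LANGUAGE sql IMMUTABLE;
```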
### `get_weighted_avg_cost(pid, date)`
- Weighted average of last 10 receivings by cost*qty/qty
- Returns NULL if no receivings — sound fallback behavior
- **Correct implementation**
### `safe_divide(numerator, denominator)`
- Returns NULL on divide-by-zero — **correct**
### `std_numeric(value, precision)`
- Rounds to precision digits — **correct**
### `classify_demand_pattern(avg_demand, cv)`
- Uses coefficient of variation thresholds: ≤0.2 = stable, ≤0.5 = variable, low-volume+high-CV = sporadic, else lumpy
- **Reasonable classification**, though only based on 30-day window
### `detect_seasonal_pattern(pid)`
- CROSS JOIN LATERAL (runs per product) — **expensive**: queries `daily_product_snapshots` twice per product
- Compares current month average to yearly average — very simplistic
- **Functional but could be a performance bottleneck** with 681K products
### `category_hierarchy` (materialized view)
- Recursive CTE building tree from categories — **correct implementation**
- Refreshed concurrently before category metrics calculation — **good practice**
---
## Data Health Summary
| Metric | Count | % of Total |
|---|---|---|
| Products with zero cost_price | 385,545 | 56.5% |
| Products with NULL sales_30d | 621,221 | 91.1% |
| Products with no lifetime_sales | 321,321 | 47.1% |
| Products with zero COGS but positive sales | 27 | <0.01% |
| Products with margin > 100% | 73 | <0.01% |
| Products with margin < -100% | 119 | <0.01% |
| Products with negative sell-through | 30 | <0.01% |
| Products with NULL status | 0 | 0% |
| Duplicate daily snapshots (same pid+date) | 0 | 0% |
| Net revenue formula mismatches | 0 | 0% |
### ABC Classification Distribution (replenishable products only)
| Class | Products | Revenue % |
|---|---|---|
| A | 7,727 | 80.72% |
| B | 12,048 | 15.10% |
| C | 113,647 | 4.18% |
ABC distribution looks healthy — A ≈ 80%, A+B ≈ 96%.
### Brand Metrics Consistency
Product counts and sales_30d match exactly between `brand_metrics` and direct aggregation from `product_metrics`. Revenue shows sub-dollar discrepancies due to the `> 0` filter excluding products with negative revenue. **Consistent within expected tolerance.**
---
## Priority Recommendations
### Must Fix (Correctness Issues)
1. **Issue 1: Fix order status handling** — The text-based filter (`NOT IN ('canceled', 'returned')`) is dead code against numeric statuses. Two options: (a) map numeric statuses to text during import (like POs already do), or (b) change SQL to filter on numeric codes (e.g., `o.status::int >= 20` to exclude cancelled/unfinished, or `o.status IN ('100', '95')` for shipped-only). The ~19.7K unfulfilled orders (0.69%) are a minor financial impact but the filter should be functional.
2. **Issue 6: Add supplier_id join to vendor lead time** — One-line fix in `calculate_vendor_metrics.sql`
3. **Issue 8: Fix lifetime revenue subquery** — Use correct column names from `daily_product_snapshots` (e.g., `net_revenue / NULLIF(units_sold, 0)`)
### Should Fix (Data Quality)
4. **Issue 2/3: Snapshot coverage** — Consider creating snapshot rows for all in-stock products, not just those with activity. Or at minimum, calculate stockout metrics by comparing snapshot existence to product existence.
5. **Issue 5: Populate landing_cost_price** — If available in the source system, import it. Otherwise remove references to avoid confusion.
6. **Issue 7: Subtract returns from net_revenue** — `net_revenue = gross_revenue - discounts - returns_revenue`
7. **Issue 9: Remove > 0 filter on COGS** — Use `SUM(pm.cogs_30d)` instead of conditional sums
### Nice to Fix (Edge Cases)
8. **Issue 4: Flag estimated costs** — Add a `costeach_estimated BOOLEAN` to orders during import
9. **Issue 10: Cap or flag extreme margins** — Exclude $0.01-price orders from margin calculations
10. **Issue 11: Clamp sell-through** — `GREATEST(0, LEAST(sell_through_30d, 200))` or flag outliers
11. **Issue 13: Verify category assignment policy** — Check if products are assigned to leaf categories only
12. **Issue 13: Category rollup query** — Verify no double-counting with actual data


@@ -0,0 +1,103 @@
require('dotenv').config({ path: '../.env' });
const bcrypt = require('bcrypt');
const { Pool } = require('pg');
const inquirer = require('inquirer');
// Log connection details for debugging (remove in production)
console.log('Attempting to connect with:', {
host: process.env.DB_HOST,
user: process.env.DB_USER,
database: process.env.DB_NAME,
port: process.env.DB_PORT
});
const pool = new Pool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
port: process.env.DB_PORT,
});
async function promptUser() {
const questions = [
{
type: 'input',
name: 'username',
message: 'Enter username:',
validate: (input) => {
if (input.length < 3) {
return 'Username must be at least 3 characters long';
}
return true;
}
},
{
type: 'password',
name: 'password',
message: 'Enter password:',
mask: '*',
validate: (input) => {
if (input.length < 8) {
return 'Password must be at least 8 characters long';
}
return true;
}
},
{
type: 'password',
name: 'confirmPassword',
message: 'Confirm password:',
mask: '*',
validate: (input, answers) => {
if (input !== answers.password) {
return 'Passwords do not match';
}
return true;
}
}
];
return inquirer.prompt(questions);
}
async function addUser() {
try {
// Get user input
const answers = await promptUser();
const { username, password } = answers;
// Hash password
const saltRounds = 10;
const hashedPassword = await bcrypt.hash(password, saltRounds);
// Check if user already exists
const checkResult = await pool.query(
'SELECT id FROM users WHERE username = $1',
[username]
);
if (checkResult.rows.length > 0) {
console.error('Error: Username already exists');
process.exit(1);
}
// Insert new user
const result = await pool.query(
'INSERT INTO users (username, password) VALUES ($1, $2) RETURNING id',
[username, hashedPassword]
);
console.log(`User ${username} created successfully with id ${result.rows[0].id}`);
} catch (error) {
console.error('Error creating user:', error);
console.error('Error details:', error.message);
if (error.code) {
console.error('Error code:', error.code);
}
} finally {
await pool.end();
}
}
addUser();

inventory-server/auth/package-lock.json generated Normal file

File diff suppressed because it is too large


@@ -0,0 +1,19 @@
{
"name": "inventory-auth-server",
"version": "1.0.0",
"description": "Authentication server for inventory management system",
"main": "server.js",
"scripts": {
"start": "node server.js"
},
"dependencies": {
"bcrypt": "^5.1.1",
"cors": "^2.8.5",
"dotenv": "^16.4.7",
"express": "^4.18.2",
"inquirer": "^8.2.6",
"jsonwebtoken": "^9.0.2",
"morgan": "^1.10.0",
"pg": "^8.11.3"
}
}


@@ -0,0 +1,128 @@
// Get pool from global or create a new one if not available
let pool;
if (typeof global.pool !== 'undefined') {
pool = global.pool;
} else {
// If global pool is not available, create a new connection
const { Pool } = require('pg');
pool = new Pool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
port: process.env.DB_PORT,
});
console.log('Created new database pool in permissions.js');
}
/**
* Check if a user has a specific permission
* @param {number} userId - The user ID to check
* @param {string} permissionCode - The permission code to check
* @returns {Promise<boolean>} - Whether the user has the permission
*/
async function checkPermission(userId, permissionCode) {
try {
// First check if the user is an admin
const adminResult = await pool.query(
'SELECT is_admin FROM users WHERE id = $1',
[userId]
);
// If user is admin, automatically grant permission
if (adminResult.rows.length > 0 && adminResult.rows[0].is_admin) {
return true;
}
// Otherwise check for specific permission
const result = await pool.query(
`SELECT COUNT(*) AS has_permission
FROM user_permissions up
JOIN permissions p ON up.permission_id = p.id
WHERE up.user_id = $1 AND p.code = $2`,
[userId, permissionCode]
);
return result.rows[0].has_permission > 0;
} catch (error) {
console.error('Error checking permission:', error);
return false;
}
}
/**
* Middleware to require a specific permission
* @param {string} permissionCode - The permission code required
* @returns {Function} - Express middleware function
*/
function requirePermission(permissionCode) {
return async (req, res, next) => {
try {
// Check if user is authenticated
if (!req.user || !req.user.id) {
return res.status(401).json({ error: 'Authentication required' });
}
const hasPermission = await checkPermission(req.user.id, permissionCode);
if (!hasPermission) {
return res.status(403).json({
error: 'Insufficient permissions',
requiredPermission: permissionCode
});
}
next();
} catch (error) {
console.error('Permission middleware error:', error);
res.status(500).json({ error: 'Server error checking permissions' });
}
};
}
/**
* Get all permissions for a user
* @param {number} userId - The user ID
* @returns {Promise<string[]>} - Array of permission codes
*/
async function getUserPermissions(userId) {
try {
// Check if user is admin
const adminResult = await pool.query(
'SELECT is_admin FROM users WHERE id = $1',
[userId]
);
if (adminResult.rows.length === 0) {
return [];
}
const isAdmin = adminResult.rows[0].is_admin;
if (isAdmin) {
// Admin gets all permissions
const allPermissions = await pool.query('SELECT code FROM permissions');
return allPermissions.rows.map(p => p.code);
} else {
// Get assigned permissions
const permissions = await pool.query(
`SELECT p.code
FROM permissions p
JOIN user_permissions up ON p.id = up.permission_id
WHERE up.user_id = $1`,
[userId]
);
return permissions.rows.map(p => p.code);
}
} catch (error) {
console.error('Error getting user permissions:', error);
return [];
}
}
module.exports = {
checkPermission,
requirePermission,
getUserPermissions
};


@@ -0,0 +1,533 @@
const express = require('express');
const router = express.Router();
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');
const { requirePermission, getUserPermissions } = require('./permissions');
// Get pool from global or create a new one if not available
let pool;
if (typeof global.pool !== 'undefined') {
pool = global.pool;
} else {
// If global pool is not available, create a new connection
const { Pool } = require('pg');
pool = new Pool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
port: process.env.DB_PORT,
});
console.log('Created new database pool in routes.js');
}
// Authentication middleware
const authenticate = async (req, res, next) => {
try {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({ error: 'Authentication required' });
}
const token = authHeader.split(' ')[1];
const decoded = jwt.verify(token, process.env.JWT_SECRET);
// Get user from database
const result = await pool.query(
'SELECT id, username, email, is_admin, rocket_chat_user_id FROM users WHERE id = $1',
[decoded.userId]
);
console.log('Database query result for user', decoded.userId, ':', result.rows[0]);
if (result.rows.length === 0) {
return res.status(401).json({ error: 'User not found' });
}
// Attach user to request
req.user = result.rows[0];
next();
} catch (error) {
console.error('Authentication error:', error);
res.status(401).json({ error: 'Invalid token' });
}
};
// Login route
router.post('/login', async (req, res) => {
try {
const { username, password } = req.body;
// Get user from database
const result = await pool.query(
'SELECT id, username, password, is_admin, is_active, rocket_chat_user_id FROM users WHERE username = $1',
[username]
);
if (result.rows.length === 0) {
return res.status(401).json({ error: 'Invalid username or password' });
}
const user = result.rows[0];
// Check if user is active
if (!user.is_active) {
return res.status(403).json({ error: 'Account is inactive' });
}
// Verify password
const validPassword = await bcrypt.compare(password, user.password);
if (!validPassword) {
return res.status(401).json({ error: 'Invalid username or password' });
}
// Update last login
await pool.query(
'UPDATE users SET last_login = CURRENT_TIMESTAMP WHERE id = $1',
[user.id]
);
// Generate JWT
const token = jwt.sign(
{ userId: user.id, username: user.username },
process.env.JWT_SECRET,
{ expiresIn: '8h' }
);
// Get user permissions
const permissions = await getUserPermissions(user.id);
res.json({
token,
user: {
id: user.id,
username: user.username,
is_admin: user.is_admin,
rocket_chat_user_id: user.rocket_chat_user_id,
permissions
}
});
} catch (error) {
console.error('Login error:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Get current user
router.get('/me', authenticate, async (req, res) => {
try {
// Get user permissions
const permissions = await getUserPermissions(req.user.id);
res.json({
id: req.user.id,
username: req.user.username,
email: req.user.email,
is_admin: req.user.is_admin,
rocket_chat_user_id: req.user.rocket_chat_user_id,
permissions,
// Debug info
_debug_raw_user: req.user,
_server_identifier: "INVENTORY_AUTH_SERVER_MODIFIED"
});
} catch (error) {
console.error('Error getting current user:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Get all users
router.get('/users', authenticate, requirePermission('view:users'), async (req, res) => {
try {
const result = await pool.query(`
SELECT id, username, email, is_admin, is_active, rocket_chat_user_id, created_at, last_login
FROM users
ORDER BY username
`);
res.json(result.rows);
} catch (error) {
console.error('Error getting users:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Get user with permissions
router.get('/users/:id', authenticate, requirePermission('view:users'), async (req, res) => {
try {
const userId = req.params.id;
// Get user details
const userResult = await pool.query(`
SELECT id, username, email, is_admin, is_active, rocket_chat_user_id, created_at, last_login
FROM users
WHERE id = $1
`, [userId]);
if (userResult.rows.length === 0) {
return res.status(404).json({ error: 'User not found' });
}
// Get user permissions
const permissionsResult = await pool.query(`
SELECT p.id, p.name, p.code, p.category, p.description
FROM permissions p
JOIN user_permissions up ON p.id = up.permission_id
WHERE up.user_id = $1
ORDER BY p.category, p.name
`, [userId]);
// Combine user and permissions
const user = {
...userResult.rows[0],
permissions: permissionsResult.rows
};
res.json(user);
} catch (error) {
console.error('Error getting user:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Create new user
router.post('/users', authenticate, requirePermission('create:users'), async (req, res) => {
const client = await pool.connect();
try {
const { username, email, password, is_admin, is_active, rocket_chat_user_id, permissions } = req.body;
console.log("Create user request:", {
username,
email,
is_admin,
is_active,
rocket_chat_user_id,
permissions: permissions || []
});
// Validate required fields
if (!username || !password) {
return res.status(400).json({ error: 'Username and password are required' });
}
// Check if username is taken
const existingUser = await client.query(
'SELECT id FROM users WHERE username = $1',
[username]
);
if (existingUser.rows.length > 0) {
return res.status(400).json({ error: 'Username already exists' });
}
// Start transaction
await client.query('BEGIN');
// Hash password
const saltRounds = 10;
const hashedPassword = await bcrypt.hash(password, saltRounds);
// Insert new user
// Convert rocket_chat_user_id to integer if provided
const rcUserId = rocket_chat_user_id ? parseInt(rocket_chat_user_id, 10) : null;
const userResult = await client.query(`
INSERT INTO users (username, email, password, is_admin, is_active, rocket_chat_user_id, created_at)
VALUES ($1, $2, $3, $4, $5, $6, CURRENT_TIMESTAMP)
RETURNING id
`, [username, email || null, hashedPassword, !!is_admin, is_active !== false, rcUserId]);
const userId = userResult.rows[0].id;
// Assign permissions if provided and not admin
if (!is_admin && Array.isArray(permissions) && permissions.length > 0) {
console.log("Adding permissions for new user:", userId);
console.log("Permissions received:", permissions);
// Check permission format
const permissionIds = permissions.map(p => {
if (typeof p === 'object' && p.id) {
console.log("Permission is an object with ID:", p.id);
return parseInt(p.id, 10);
} else if (typeof p === 'number') {
console.log("Permission is a number:", p);
return p;
} else if (typeof p === 'string' && !isNaN(parseInt(p, 10))) {
console.log("Permission is a string that can be parsed as a number:", p);
return parseInt(p, 10);
} else {
console.log("Unknown permission format:", typeof p, p);
// If it's a permission code, we need to look up the ID
return null;
}
}).filter(id => id !== null);
console.log("Filtered permission IDs:", permissionIds);
if (permissionIds.length > 0) {
const permissionValues = permissionIds
.map(permId => `(${userId}, ${permId})`)
.join(',');
console.log("Inserting permission values:", permissionValues);
try {
await client.query(`
INSERT INTO user_permissions (user_id, permission_id)
VALUES ${permissionValues}
ON CONFLICT DO NOTHING
`);
console.log("Successfully inserted permissions for new user:", userId);
} catch (err) {
console.error("Error inserting permissions for new user:", err);
throw err;
}
} else {
console.log("No valid permission IDs found to insert for new user");
}
} else {
console.log("Not adding permissions: is_admin =", is_admin, "permissions array:", Array.isArray(permissions), "length:", permissions ? permissions.length : 0);
}
await client.query('COMMIT');
res.status(201).json({
id: userId,
message: 'User created successfully'
});
} catch (error) {
await client.query('ROLLBACK');
console.error('Error creating user:', error);
res.status(500).json({ error: 'Server error' });
} finally {
client.release();
}
});
// Update user
router.put('/users/:id', authenticate, requirePermission('edit:users'), async (req, res) => {
const client = await pool.connect();
try {
const userId = req.params.id;
const { username, email, password, is_admin, is_active, rocket_chat_user_id, permissions } = req.body;
console.log("Update user request:", {
userId,
username,
email,
is_admin,
is_active,
rocket_chat_user_id,
permissions: permissions || []
});
// Check if user exists
const userExists = await client.query(
'SELECT id FROM users WHERE id = $1',
[userId]
);
if (userExists.rows.length === 0) {
return res.status(404).json({ error: 'User not found' });
}
// Start transaction
await client.query('BEGIN');
// Build update fields
const updateFields = [];
const updateValues = [userId]; // First parameter is the user ID
let paramIndex = 2;
if (username !== undefined) {
updateFields.push(`username = $${paramIndex++}`);
updateValues.push(username);
}
if (email !== undefined) {
updateFields.push(`email = $${paramIndex++}`);
updateValues.push(email || null);
}
if (is_admin !== undefined) {
updateFields.push(`is_admin = $${paramIndex++}`);
updateValues.push(!!is_admin);
}
if (is_active !== undefined) {
updateFields.push(`is_active = $${paramIndex++}`);
updateValues.push(!!is_active);
}
if (rocket_chat_user_id !== undefined) {
updateFields.push(`rocket_chat_user_id = $${paramIndex++}`);
// Convert to integer if not null/undefined, otherwise null
const rcUserId = rocket_chat_user_id ? parseInt(rocket_chat_user_id, 10) : null;
updateValues.push(rcUserId);
}
// Update password if provided
if (password) {
const saltRounds = 10;
const hashedPassword = await bcrypt.hash(password, saltRounds);
updateFields.push(`password = $${paramIndex++}`);
updateValues.push(hashedPassword);
}
// Update user if there are fields to update
if (updateFields.length > 0) {
updateFields.push(`updated_at = CURRENT_TIMESTAMP`);
await client.query(`
UPDATE users
SET ${updateFields.join(', ')}
WHERE id = $1
`, updateValues);
}
// Update permissions if provided
if (Array.isArray(permissions)) {
console.log("Updating permissions for user:", userId);
console.log("Permissions received:", permissions);
// First remove existing permissions
await client.query(
'DELETE FROM user_permissions WHERE user_id = $1',
[userId]
);
console.log("Deleted existing permissions for user:", userId);
// Add new permissions if any and not admin
const newIsAdmin = is_admin !== undefined ? is_admin : (await client.query('SELECT is_admin FROM users WHERE id = $1', [userId])).rows[0].is_admin;
console.log("User is admin:", newIsAdmin);
if (!newIsAdmin && permissions.length > 0) {
console.log("Adding permissions:", permissions);
// Check permission format
const permissionIds = permissions.map(p => {
if (typeof p === 'object' && p.id) {
console.log("Permission is an object with ID:", p.id);
return parseInt(p.id, 10);
} else if (typeof p === 'number') {
console.log("Permission is a number:", p);
return p;
} else if (typeof p === 'string' && !isNaN(parseInt(p, 10))) {
console.log("Permission is a string that can be parsed as a number:", p);
return parseInt(p, 10);
} else {
console.log("Unknown permission format:", typeof p, p);
// If it's a permission code, we need to look up the ID
return null;
}
}).filter(id => id !== null);
console.log("Filtered permission IDs:", permissionIds);
if (permissionIds.length > 0) {
const permissionValues = permissionIds
.map(permId => `(${userId}, ${permId})`)
.join(',');
console.log("Inserting permission values:", permissionValues);
try {
await client.query(`
INSERT INTO user_permissions (user_id, permission_id)
VALUES ${permissionValues}
ON CONFLICT DO NOTHING
`);
console.log("Successfully inserted permissions for user:", userId);
} catch (err) {
console.error("Error inserting permissions:", err);
throw err;
}
} else {
console.log("No valid permission IDs found to insert");
}
}
}
await client.query('COMMIT');
res.json({ message: 'User updated successfully' });
} catch (error) {
await client.query('ROLLBACK');
console.error('Error updating user:', error);
res.status(500).json({ error: 'Server error' });
} finally {
client.release();
}
});
// Delete user
router.delete('/users/:id', authenticate, requirePermission('delete:users'), async (req, res) => {
try {
const userId = req.params.id;
// Check that user is not deleting themselves
if (req.user.id === parseInt(userId, 10)) {
return res.status(400).json({ error: 'Cannot delete your own account' });
}
// Delete user (this will cascade to user_permissions due to FK constraints)
const result = await pool.query(
'DELETE FROM users WHERE id = $1 RETURNING id',
[userId]
);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'User not found' });
}
res.json({ message: 'User deleted successfully' });
} catch (error) {
console.error('Error deleting user:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Get all permissions grouped by category
router.get('/permissions/categories', authenticate, requirePermission('view:users'), async (req, res) => {
try {
const result = await pool.query(`
SELECT category, json_agg(
json_build_object(
'id', id,
'name', name,
'code', code,
'description', description
) ORDER BY name
) as permissions
FROM permissions
GROUP BY category
ORDER BY category
`);
res.json(result.rows);
} catch (error) {
console.error('Error getting permissions:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Get all permissions
router.get('/permissions', authenticate, requirePermission('view:users'), async (req, res) => {
try {
const result = await pool.query(`
SELECT *
FROM permissions
ORDER BY category, name
`);
res.json(result.rows);
} catch (error) {
console.error('Error getting permissions:', error);
res.status(500).json({ error: 'Server error' });
}
});
module.exports = router;


@@ -0,0 +1,89 @@
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(255) NOT NULL UNIQUE,
password VARCHAR(255) NOT NULL,
email VARCHAR UNIQUE,
is_admin BOOLEAN DEFAULT FALSE,
is_active BOOLEAN DEFAULT TRUE,
last_login TIMESTAMP WITH TIME ZONE,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Function to update the updated_at timestamp
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ language 'plpgsql';
-- Sequence and defined type for users table if not exists
CREATE SEQUENCE IF NOT EXISTS users_id_seq;
-- Create permissions table
CREATE TABLE IF NOT EXISTS "public"."permissions" (
"id" SERIAL PRIMARY KEY,
"name" varchar NOT NULL UNIQUE,
"code" varchar NOT NULL UNIQUE,
"description" text,
"category" varchar NOT NULL,
"created_at" timestamp with time zone DEFAULT CURRENT_TIMESTAMP,
"updated_at" timestamp with time zone DEFAULT CURRENT_TIMESTAMP
);
-- Create user_permissions junction table
CREATE TABLE IF NOT EXISTS "public"."user_permissions" (
"user_id" int4 NOT NULL REFERENCES "public"."users"("id") ON DELETE CASCADE,
"permission_id" int4 NOT NULL REFERENCES "public"."permissions"("id") ON DELETE CASCADE,
"created_at" timestamp with time zone DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY ("user_id", "permission_id")
);
-- Add triggers for updated_at on users and permissions
DROP TRIGGER IF EXISTS update_users_updated_at ON users;
CREATE TRIGGER update_users_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION update_updated_at_column();
DROP TRIGGER IF EXISTS update_permissions_updated_at ON permissions;
CREATE TRIGGER update_permissions_updated_at
BEFORE UPDATE ON permissions
FOR EACH ROW
EXECUTE FUNCTION update_updated_at_column();
-- Insert default permissions by page - only the ones used in application
INSERT INTO permissions (name, code, description, category) VALUES
('Dashboard Access', 'access:dashboard', 'Can access the Dashboard page', 'Pages'),
('Products Access', 'access:products', 'Can access the Products page', 'Pages'),
('Categories Access', 'access:categories', 'Can access the Categories page', 'Pages'),
('Vendors Access', 'access:vendors', 'Can access the Vendors page', 'Pages'),
('Analytics Access', 'access:analytics', 'Can access the Analytics page', 'Pages'),
('Forecasting Access', 'access:forecasting', 'Can access the Forecasting page', 'Pages'),
('Purchase Orders Access', 'access:purchase_orders', 'Can access the Purchase Orders page', 'Pages'),
('Import Access', 'access:import', 'Can access the Import page', 'Pages'),
('Settings Access', 'access:settings', 'Can access the Settings page', 'Pages'),
('AI Validation Debug Access', 'access:ai_validation_debug', 'Can access the AI Validation Debug page', 'Pages')
ON CONFLICT (code) DO NOTHING;
-- Settings section permissions
INSERT INTO permissions (name, code, description, category) VALUES
('Data Management', 'settings:data_management', 'Access to the Data Management settings section', 'Settings'),
('Stock Management', 'settings:stock_management', 'Access to the Stock Management settings section', 'Settings'),
('Performance Metrics', 'settings:performance_metrics', 'Access to the Performance Metrics settings section', 'Settings'),
('Calculation Settings', 'settings:calculation_settings', 'Access to the Calculation Settings section', 'Settings'),
('Template Management', 'settings:templates', 'Access to the Template Management settings section', 'Settings'),
('User Management', 'settings:user_management', 'Access to the User Management settings section', 'Settings')
ON CONFLICT (code) DO NOTHING;
-- Set any existing users as admin
UPDATE users SET is_admin = TRUE WHERE is_admin IS NULL;
-- Grant all permissions to admin users
INSERT INTO user_permissions (user_id, permission_id)
SELECT u.id, p.id
FROM users u, permissions p
WHERE u.is_admin = TRUE
ON CONFLICT DO NOTHING;


@@ -35,7 +35,7 @@ global.pool = pool;
app.use(express.json());
app.use(morgan('combined'));
app.use(cors({
origin: ['http://localhost:5175', 'http://localhost:5174', 'https://inventory.kent.pw', 'https://tools.acherryontop.com', 'https://tools.acherryontop.com'],
origin: ['http://localhost:5175', 'http://localhost:5174', 'https://inventory.kent.pw', 'https://acot.site', 'https://tools.acherryontop.com'],
credentials: true
}));


@@ -0,0 +1,45 @@
-- PostgreSQL Database Creation Script for New Server
-- Run as: sudo -u postgres psql -f create-new-database.sql
-- Terminate all connections to the database (if it exists)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'rocketchat_converted' AND pid <> pg_backend_pid();
-- Drop the database if it exists
DROP DATABASE IF EXISTS rocketchat_converted;
-- Create fresh database
CREATE DATABASE rocketchat_converted;
-- Create user (if not exists) - UPDATE PASSWORD BEFORE RUNNING!
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_user WHERE usename = 'rocketchat_user') THEN
CREATE USER rocketchat_user WITH PASSWORD 'HKjLgt23gWuPXzEAn3rW';
END IF;
END $$;
-- Grant database privileges
GRANT CONNECT ON DATABASE rocketchat_converted TO rocketchat_user;
GRANT CREATE ON DATABASE rocketchat_converted TO rocketchat_user;
-- Connect to the new database
\c rocketchat_converted;
-- Grant schema privileges
GRANT CREATE ON SCHEMA public TO rocketchat_user;
GRANT USAGE ON SCHEMA public TO rocketchat_user;
-- Grant privileges on all future tables and sequences
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO rocketchat_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO rocketchat_user;
-- Display success message
\echo 'Database created successfully!'
\echo 'IMPORTANT: Update the password for rocketchat_user before proceeding'
\echo 'Next steps:'
\echo '1. Update the password in this file'
\echo '2. Run export-chat-data.sh on your current server'
\echo '3. Transfer the exported files to this server'
\echo '4. Run import-chat-data.sh on this server'


@@ -0,0 +1,881 @@
#!/usr/bin/env python3
"""
MongoDB to PostgreSQL Converter for Rocket.Chat
Converts MongoDB BSON export files to PostgreSQL database
Usage:
python3 mongo_to_postgres_converter.py \
--mongo-path db/database/62df06d44234d20001289144 \
--pg-database rocketchat_converted \
--pg-user rocketchat_user \
--pg-password your_password \
--debug
"""
import json
import os
import re
import subprocess
import sys
import struct
from datetime import datetime
from pathlib import Path
from typing import Dict, Any, List, Optional
import argparse
import traceback
# Auto-install dependencies if needed
try:
import bson
import psycopg2
except ImportError:
print("Installing required packages...")
subprocess.check_call([sys.executable, "-m", "pip", "install", "pymongo", "psycopg2-binary"])
import bson
import psycopg2
class MongoToPostgresConverter:
def __init__(self, mongo_db_path: str, postgres_config: Dict[str, str], debug_mode: bool = False, debug_collections: List[str] = None):
self.mongo_db_path = Path(mongo_db_path)
self.postgres_config = postgres_config
self.debug_mode = debug_mode
self.debug_collections = debug_collections or []
self.collections = {}
self.schema_info = {}
self.error_log = {}
def log_debug(self, message: str, collection: str = None):
"""Log debug messages if debug mode is enabled and collection is in debug list"""
if self.debug_mode and (not self.debug_collections or collection in self.debug_collections):
print(f"DEBUG: {message}")
def log_error(self, collection: str, error_type: str, details: str):
"""Log detailed error information"""
if collection not in self.error_log:
self.error_log[collection] = []
self.error_log[collection].append({
'type': error_type,
'details': details,
'timestamp': datetime.now().isoformat()
})
def sample_documents(self, collection_name: str, max_samples: int = 3) -> List[Dict]:
"""Sample documents from a collection for debugging"""
if not self.debug_mode or (self.debug_collections and collection_name not in self.debug_collections):
return []
print(f"\n🔍 Sampling documents from {collection_name}:")
bson_file = self.collections[collection_name]['bson_file']
if bson_file.stat().st_size == 0:
print(" Collection is empty")
return []
samples = []
try:
with open(bson_file, 'rb') as f:
sample_count = 0
while sample_count < max_samples:
try:
doc_size = int.from_bytes(f.read(4), byteorder='little')
if doc_size <= 0:
break
f.seek(-4, 1)
doc_bytes = f.read(doc_size)
if len(doc_bytes) != doc_size:
break
doc = bson.decode(doc_bytes)
samples.append(doc)
sample_count += 1
print(f" Sample {sample_count} - Keys: {list(doc.keys())}")
# Show a few key fields with their types and truncated values
for key, value in list(doc.items())[:3]:
value_preview = str(value)[:50] + "..." if len(str(value)) > 50 else str(value)
print(f" {key}: {type(value).__name__} = {value_preview}")
if len(doc) > 3:
print(f" ... and {len(doc) - 3} more fields")
print()
except (bson.InvalidBSON, struct.error, OSError) as e:
self.log_error(collection_name, 'document_parsing', str(e))
break
except Exception as e:
self.log_error(collection_name, 'file_reading', str(e))
print(f" Error reading collection: {e}")
return samples
def discover_collections(self):
"""Discover all BSON files and their metadata"""
print("Discovering MongoDB collections...")
for bson_file in self.mongo_db_path.glob("*.bson"):
collection_name = bson_file.stem
metadata_file = bson_file.with_suffix(".metadata.json")
# Read metadata if available
metadata = {}
if metadata_file.exists():
try:
with open(metadata_file, 'r', encoding='utf-8') as f:
metadata = json.load(f)
except (UnicodeDecodeError, json.JSONDecodeError) as e:
print(f"Warning: Could not read metadata for {collection_name}: {e}")
metadata = {}
# Get file size and document count estimate
file_size = bson_file.stat().st_size
doc_count = self._estimate_document_count(bson_file)
self.collections[collection_name] = {
'bson_file': bson_file,
'metadata': metadata,
'file_size': file_size,
'estimated_docs': doc_count
}
print(f"Found {len(self.collections)} collections")
for name, info in self.collections.items():
print(f" - {name}: {info['file_size']/1024/1024:.1f}MB (~{info['estimated_docs']} docs)")
def _estimate_document_count(self, bson_file: Path) -> int:
"""Estimate document count by reading first few documents"""
if bson_file.stat().st_size == 0:
return 0
try:
with open(bson_file, 'rb') as f:
docs_sampled = 0
bytes_sampled = 0
max_sample_size = min(1024 * 1024, bson_file.stat().st_size) # 1MB or file size
while bytes_sampled < max_sample_size:
try:
doc_size = int.from_bytes(f.read(4), byteorder='little')
if doc_size <= 0 or doc_size > 16 * 1024 * 1024: # MongoDB doc size limit
break
f.seek(-4, 1) # Go back
doc_bytes = f.read(doc_size)
if len(doc_bytes) != doc_size:
break
bson.decode(doc_bytes) # Validate it's a valid BSON document
docs_sampled += 1
bytes_sampled += doc_size
except (bson.InvalidBSON, struct.error, OSError):
break
if docs_sampled > 0 and bytes_sampled > 0:
avg_doc_size = bytes_sampled / docs_sampled
return int(bson_file.stat().st_size / avg_doc_size)
except Exception:
pass
return 0
def analyze_schema(self, collection_name: str, sample_size: int = 100) -> Dict[str, Any]:
"""Analyze collection schema by sampling documents"""
print(f"Analyzing schema for {collection_name}...")
bson_file = self.collections[collection_name]['bson_file']
if bson_file.stat().st_size == 0:
return {}
schema = {}
docs_analyzed = 0
try:
with open(bson_file, 'rb') as f:
while docs_analyzed < sample_size:
try:
doc_size = int.from_bytes(f.read(4), byteorder='little')
if doc_size <= 0:
break
f.seek(-4, 1)
doc_bytes = f.read(doc_size)
if len(doc_bytes) != doc_size:
break
doc = bson.decode(doc_bytes)
self._analyze_document_schema(doc, schema)
docs_analyzed += 1
except (bson.InvalidBSON, struct.error, OSError):
break
except Exception as e:
print(f"Error analyzing {collection_name}: {e}")
self.schema_info[collection_name] = schema
return schema
def _analyze_document_schema(self, doc: Dict[str, Any], schema: Dict[str, Any], prefix: str = ""):
"""Recursively analyze document structure"""
for key, value in doc.items():
full_key = f"{prefix}.{key}" if prefix else key
if full_key not in schema:
schema[full_key] = {
'types': set(),
'null_count': 0,
'total_count': 0,
'is_array': False,
'nested_schema': {}
}
schema[full_key]['total_count'] += 1
if value is None:
schema[full_key]['null_count'] += 1
schema[full_key]['types'].add('null')
elif isinstance(value, dict):
schema[full_key]['types'].add('object')
if 'nested_schema' not in schema[full_key]:
schema[full_key]['nested_schema'] = {}
self._analyze_document_schema(value, schema[full_key]['nested_schema'])
elif isinstance(value, list):
schema[full_key]['types'].add('array')
schema[full_key]['is_array'] = True
if value and isinstance(value[0], dict):
if 'array_item_schema' not in schema[full_key]:
schema[full_key]['array_item_schema'] = {}
for item in value[:5]: # Sample first 5 items
if isinstance(item, dict):
self._analyze_document_schema(item, schema[full_key]['array_item_schema'])
else:
schema[full_key]['types'].add(type(value).__name__)
def generate_postgres_schema(self) -> Dict[str, str]:
"""Generate PostgreSQL CREATE TABLE statements"""
print("Generating PostgreSQL schema...")
table_definitions = {}
for collection_name, schema in self.schema_info.items():
if not schema: # Empty collection
continue
table_name = self._sanitize_table_name(collection_name)
columns = []
# Always add an id column (PostgreSQL doesn't use _id like MongoDB)
columns.append("id SERIAL PRIMARY KEY")
for field_name, field_info in schema.items():
if field_name == '_id':
columns.append("mongo_id TEXT") # Always allow NULL for mongo_id
continue
col_name = self._sanitize_column_name(field_name)
# Handle conflicts with PostgreSQL auto-generated columns
if col_name in ['id', 'mongo_id', 'created_at', 'updated_at']:
col_name = f"field_{col_name}"
col_type = self._determine_postgres_type(field_info)
# Make all fields nullable by default to avoid constraint violations
columns.append(f"{col_name} {col_type}")
# Add metadata columns
columns.extend([
"created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP",
"updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"
])
column_definitions = ',\n '.join(columns)
table_sql = f"""
CREATE TABLE IF NOT EXISTS {table_name} (
{column_definitions}
);
-- Create indexes based on MongoDB indexes
"""
# Get list of actual columns that will exist in the table
existing_columns = set(['id', 'mongo_id', 'created_at', 'updated_at'])
for field_name in schema.keys():
if field_name != '_id':
col_name = self._sanitize_column_name(field_name)
# Handle conflicts with PostgreSQL auto-generated columns
if col_name in ['id', 'mongo_id', 'created_at', 'updated_at']:
col_name = f"field_{col_name}"
existing_columns.add(col_name)
# Add indexes from MongoDB metadata
metadata = self.collections[collection_name].get('metadata', {})
indexes = metadata.get('indexes', [])
for index in indexes:
if index['name'] != '_id_': # Skip the default _id index
# Sanitize index name - remove special characters
sanitized_index_name = re.sub(r'[^a-zA-Z0-9_]', '_', index['name'])
index_name = f"idx_{table_name}_{sanitized_index_name}"
index_keys = list(index['key'].keys())
if index_keys:
sanitized_keys = []
for key in index_keys:
if key != '_id':
sanitized_key = self._sanitize_column_name(key)
# Handle conflicts with PostgreSQL auto-generated columns
if sanitized_key in ['id', 'mongo_id', 'created_at', 'updated_at']:
sanitized_key = f"field_{sanitized_key}"
# Only add if the column actually exists in our table
if sanitized_key in existing_columns:
sanitized_keys.append(sanitized_key)
if sanitized_keys:
table_sql += f"CREATE INDEX IF NOT EXISTS {index_name} ON {table_name} ({', '.join(sanitized_keys)});\n"
table_definitions[collection_name] = table_sql
return table_definitions
def _sanitize_table_name(self, name: str) -> str:
"""Convert MongoDB collection name to PostgreSQL table name"""
# Remove rocketchat_ prefix if present
if name.startswith('rocketchat_'):
name = name[11:]
# Replace special characters with underscores
name = re.sub(r'[^a-zA-Z0-9_]', '_', name)
# Ensure it starts with a letter
if name and name[0].isdigit():
name = 'table_' + name
return name.lower()
def _sanitize_column_name(self, name: str) -> str:
"""Convert MongoDB field name to PostgreSQL column name"""
# Handle nested field names (convert dots to underscores)
name = name.replace('.', '_')
# Replace special characters with underscores
name = re.sub(r'[^a-zA-Z0-9_]', '_', name)
# Ensure it starts with a letter or underscore
if name and name[0].isdigit():
name = 'col_' + name
# Handle PostgreSQL reserved words
reserved = {
'user', 'order', 'group', 'table', 'index', 'key', 'value', 'date', 'time', 'timestamp',
'default', 'select', 'from', 'where', 'insert', 'update', 'delete', 'create', 'drop',
'alter', 'grant', 'revoke', 'commit', 'rollback', 'begin', 'end', 'case', 'when',
'then', 'else', 'if', 'null', 'not', 'and', 'or', 'in', 'exists', 'between',
'like', 'limit', 'offset', 'union', 'join', 'inner', 'outer', 'left', 'right',
'full', 'cross', 'natural', 'on', 'using', 'distinct', 'all', 'any', 'some',
'desc', 'asc', 'primary', 'foreign', 'references', 'constraint', 'unique',
'check', 'cascade', 'restrict', 'action', 'match', 'partial', 'full'
}
if name.lower() in reserved:
name = name + '_col'
return name.lower()
def _determine_postgres_type(self, field_info: Dict[str, Any]) -> str:
"""Determine PostgreSQL column type from MongoDB field analysis with improved logic"""
types = field_info['types']
# Convert set to list for easier checking
type_list = list(types)
# If there's only one type (excluding null), use specific typing
non_null_types = [t for t in type_list if t != 'null']
if len(non_null_types) == 1:
single_type = non_null_types[0]
if single_type == 'bool':
return 'BOOLEAN'
elif single_type == 'int':
return 'INTEGER'
elif single_type == 'float':
return 'NUMERIC'
elif single_type == 'str':
return 'TEXT'
elif single_type == 'datetime':
return 'TIMESTAMP'
elif single_type == 'ObjectId':
return 'TEXT'
# Handle mixed types more conservatively
if 'array' in types or field_info.get('is_array', False):
return 'JSONB' # Arrays always go to JSONB
elif 'object' in types:
return 'JSONB' # Objects always go to JSONB
elif len(non_null_types) > 1:
# Multiple non-null types - check for common combinations
if set(non_null_types) <= {'int', 'float'}:
return 'NUMERIC' # Can handle both int and float
elif set(non_null_types) <= {'bool', 'str'}:
return 'TEXT' # Convert everything to text
elif set(non_null_types) <= {'str', 'ObjectId'}:
return 'TEXT' # Both are string-like
else:
return 'JSONB' # Complex mixed types go to JSONB
elif 'ObjectId' in types:
return 'TEXT'
elif 'datetime' in types:
return 'TIMESTAMP'
elif 'bool' in types:
return 'BOOLEAN'
elif 'int' in types:
return 'INTEGER'
elif 'float' in types:
return 'NUMERIC'
elif 'str' in types:
return 'TEXT'
else:
return 'TEXT' # Default fallback
def create_postgres_database(self, table_definitions: Dict[str, str]):
"""Create PostgreSQL database and tables"""
print("Creating PostgreSQL database schema...")
try:
# Connect to PostgreSQL
conn = psycopg2.connect(**self.postgres_config)
conn.autocommit = True
cursor = conn.cursor()
# Create tables
for collection_name, table_sql in table_definitions.items():
print(f"Creating table for {collection_name}...")
cursor.execute(table_sql)
cursor.close()
conn.close()
print("Database schema created successfully!")
except Exception as e:
print(f"Error creating database schema: {e}")
raise
def convert_and_insert_data(self, batch_size: int = 1000):
"""Convert BSON data and insert into PostgreSQL"""
print("Converting and inserting data...")
try:
conn = psycopg2.connect(**self.postgres_config)
conn.autocommit = False
for collection_name in self.collections:
print(f"Processing {collection_name}...")
self._convert_collection(conn, collection_name, batch_size)
conn.close()
print("Data conversion completed successfully!")
except Exception as e:
print(f"Error converting data: {e}")
raise
def _convert_collection(self, conn, collection_name: str, batch_size: int):
"""Convert a single collection"""
bson_file = self.collections[collection_name]['bson_file']
if bson_file.stat().st_size == 0:
print(f" Skipping empty collection {collection_name}")
return
table_name = self._sanitize_table_name(collection_name)
cursor = conn.cursor()
batch = []
total_inserted = 0
errors = 0
try:
with open(bson_file, 'rb') as f:
while True:
try:
doc_size = int.from_bytes(f.read(4), byteorder='little')
if doc_size <= 0:
break
f.seek(-4, 1)
doc_bytes = f.read(doc_size)
if len(doc_bytes) != doc_size:
break
doc = bson.decode(doc_bytes)
batch.append(doc)
if len(batch) >= batch_size:
inserted, batch_errors = self._insert_batch(cursor, table_name, batch, collection_name)
total_inserted += inserted
errors += batch_errors
batch = []
conn.commit()
if total_inserted % 5000 == 0: # Less frequent progress updates
print(f" Inserted {total_inserted} documents...")
except (bson.InvalidBSON, struct.error, OSError):
break
# Insert remaining documents
if batch:
inserted, batch_errors = self._insert_batch(cursor, table_name, batch, collection_name)
total_inserted += inserted
errors += batch_errors
conn.commit()
if errors > 0:
print(f" Completed {collection_name}: {total_inserted} documents inserted ({errors} errors)")
else:
print(f" Completed {collection_name}: {total_inserted} documents inserted")
except Exception as e:
print(f" Error processing {collection_name}: {e}")
conn.rollback()
finally:
cursor.close()
def _insert_batch(self, cursor, table_name: str, documents: List[Dict], collection_name: str):
"""Insert a batch of documents with proper transaction handling"""
if not documents:
return 0, 0
# Get schema info for this collection
schema = self.schema_info.get(collection_name, {})
# Build column list
columns = ['mongo_id']
for field_name in schema.keys():
if field_name != '_id':
col_name = self._sanitize_column_name(field_name)
# Handle conflicts with PostgreSQL auto-generated columns
if col_name in ['id', 'mongo_id', 'created_at', 'updated_at']:
col_name = f"field_{col_name}"
columns.append(col_name)
# Build INSERT statement
placeholders = ', '.join(['%s'] * len(columns))
sql = f"INSERT INTO {table_name} ({', '.join(columns)}) VALUES ({placeholders})"
self.log_debug(f"SQL: {sql}", collection_name)
# Convert documents to tuples
rows = []
errors = 0
for doc_idx, doc in enumerate(documents):
try:
row = []
# Add mongo_id
row.append(str(doc.get('_id', '')))
# Add other fields
for field_name in schema.keys():
if field_name != '_id':
try:
value = self._get_nested_value(doc, field_name)
converted_value = self._convert_value_for_postgres(value, field_name, schema)
row.append(converted_value)
except Exception as e:
self.log_error(collection_name, 'field_conversion',
f"Field '{field_name}' in doc {doc_idx}: {str(e)}")
# Only show debug for collections we're focusing on
if collection_name in self.debug_collections:
print(f" ⚠️ Error converting field '{field_name}': {e}")
row.append(None) # Use NULL for problematic fields
rows.append(tuple(row))
except Exception as e:
self.log_error(collection_name, 'document_conversion', f"Document {doc_idx}: {str(e)}")
errors += 1
continue
# Execute batch insert
if rows:
try:
cursor.executemany(sql, rows)
return len(rows), errors
except Exception as batch_error:
self.log_error(collection_name, 'batch_insert', str(batch_error))
# Only show detailed debugging for targeted collections
if collection_name in self.debug_collections:
print(f" 🔴 Batch insert failed for {collection_name}: {batch_error}")
print(" Trying individual inserts with rollback handling...")
# Rollback the failed transaction
cursor.connection.rollback()
# Try inserting one by one in individual transactions
success_count = 0
for row_idx, row in enumerate(rows):
try:
cursor.execute(sql, row)
cursor.connection.commit() # Commit each successful insert
success_count += 1
except Exception as row_error:
cursor.connection.rollback() # Rollback failed insert
self.log_error(collection_name, 'row_insert', f"Row {row_idx}: {str(row_error)}")
# Show detailed error only for the first few failures and only for targeted collections
if collection_name in self.debug_collections and errors < 3:
print(f" Row {row_idx} failed: {row_error}")
print(f" Row data: {len(row)} values, expected {len(columns)} columns")
errors += 1
continue
return success_count, errors
return 0, errors
def _get_nested_value(self, doc: Dict, field_path: str):
"""Get value from nested document using dot notation"""
keys = field_path.split('.')
value = doc
for key in keys:
if isinstance(value, dict) and key in value:
value = value[key]
else:
return None
return value
def _convert_value_for_postgres(self, value, field_name: str = None, schema: Dict = None):
"""Convert MongoDB value to PostgreSQL compatible value with schema-aware conversion"""
if value is None:
return None
# Get the expected PostgreSQL type for this field if available
expected_type = None
if schema and field_name and field_name in schema:
field_info = schema[field_name]
expected_type = self._determine_postgres_type(field_info)
# Handle conversion based on expected type
if expected_type == 'BOOLEAN':
if isinstance(value, bool):
return value
elif isinstance(value, str):
return value.lower() in ('true', '1', 'yes', 'on')
elif isinstance(value, (int, float)):
return bool(value)
else:
return None
elif expected_type == 'INTEGER':
if isinstance(value, int):
return value
elif isinstance(value, float):
return int(value)
elif isinstance(value, str) and value.isdigit():
return int(value)
elif isinstance(value, bool):
return int(value)
else:
return None
elif expected_type == 'NUMERIC':
if isinstance(value, (int, float)):
return value
elif isinstance(value, str):
try:
return float(value)
except ValueError:
return None
elif isinstance(value, bool):
return float(value)
else:
return None
elif expected_type == 'TEXT':
if isinstance(value, str):
return value
elif value is not None:
str_value = str(value)
# Handle very long strings
if len(str_value) > 65535:
return str_value[:65535]
return str_value
else:
return None
elif expected_type == 'TIMESTAMP':
if hasattr(value, 'isoformat'):
return value.isoformat()
elif isinstance(value, str):
return value
else:
return str(value) if value is not None else None
elif expected_type == 'JSONB':
if isinstance(value, (dict, list)):
return json.dumps(value, default=self._json_serializer)
elif isinstance(value, str):
# Check if it's already valid JSON
try:
json.loads(value)
return value
except (json.JSONDecodeError, TypeError):
# Not valid JSON, wrap it
return json.dumps(value)
else:
return json.dumps(value, default=self._json_serializer)
# Fallback to original logic if no expected type or type not recognized
if isinstance(value, bool):
return value
elif isinstance(value, (int, float)):
return value
elif isinstance(value, str):
return value
elif isinstance(value, (dict, list)):
return json.dumps(value, default=self._json_serializer)
elif hasattr(value, 'isoformat'): # datetime
return value.isoformat()
elif hasattr(value, '__str__'):
str_value = str(value)
if len(str_value) > 65535:
return str_value[:65535]
return str_value
else:
return str(value)
def _json_serializer(self, obj):
"""Custom JSON serializer for complex objects with better error handling"""
try:
if hasattr(obj, 'isoformat'): # datetime
return obj.isoformat()
elif hasattr(obj, '__str__'):
return str(obj)
else:
return None
except Exception as e:
self.log_debug(f"JSON serialization error: {e}")
return str(obj)
def run_conversion(self, sample_size: int = 100, batch_size: int = 1000):
"""Run the full conversion process with focused debugging"""
print("Starting MongoDB to PostgreSQL conversion...")
print("This will convert your Rocket.Chat database from MongoDB to PostgreSQL")
if self.debug_mode:
if self.debug_collections:
print(f"🐛 DEBUG MODE: Focusing on collections: {', '.join(self.debug_collections)}")
else:
print("🐛 DEBUG MODE: All collections")
print("=" * 70)
# Step 1: Discover collections
self.discover_collections()
# Step 2: Analyze schemas
print("\nAnalyzing collection schemas...")
for collection_name in self.collections:
self.analyze_schema(collection_name, sample_size)
# Sample problematic collections if debugging
if self.debug_mode and self.debug_collections:
for coll in self.debug_collections:
if coll in self.collections:
self.sample_documents(coll, 2)
# Step 3: Generate PostgreSQL schema
table_definitions = self.generate_postgres_schema()
# Step 4: Create database schema
self.create_postgres_database(table_definitions)
# Step 5: Convert and insert data
self.convert_and_insert_data(batch_size)
# Step 6: Show error summary
self._print_error_summary()
print("=" * 70)
print("✅ Conversion completed!")
print(f" Database: {self.postgres_config['database']}")
print(f" Tables created: {len(table_definitions)}")
def _print_error_summary(self):
"""Print a focused summary of errors"""
if not self.error_log:
print("\n✅ No errors encountered during conversion!")
return
print("\n⚠️ ERROR SUMMARY:")
print("=" * 50)
# Sort by error count descending
sorted_collections = sorted(self.error_log.items(),
key=lambda x: len(x[1]), reverse=True)
for collection, errors in sorted_collections:
error_types = {}
for error in errors:
error_type = error['type']
if error_type not in error_types:
error_types[error_type] = []
error_types[error_type].append(error['details'])
print(f"\n🔴 {collection} ({len(errors)} total errors):")
for error_type, details_list in error_types.items():
print(f" {error_type}: {len(details_list)} errors")
# Show sample errors for critical collections
if collection in ['rocketchat_settings', 'rocketchat_room'] and len(details_list) > 0:
print(f" Sample: {details_list[0][:100]}...")
def main():
parser = argparse.ArgumentParser(
description='Convert MongoDB BSON export to PostgreSQL',
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Basic usage
python3 mongo_to_postgres_converter.py \\
--mongo-path db/database/62df06d44234d20001289144 \\
--pg-database rocketchat_converted \\
--pg-user rocketchat_user \\
--pg-password mypassword
# Debug specific failing collections
python3 mongo_to_postgres_converter.py \\
--mongo-path db/database/62df06d44234d20001289144 \\
--pg-database rocketchat_converted \\
--pg-user rocketchat_user \\
--pg-password mypassword \\
--debug-collections rocketchat_settings rocketchat_room
Before running this script:
1. Update the password in reset_database.sql
2. Run: sudo -u postgres psql -f reset_database.sql
"""
)
parser.add_argument('--mongo-path', required=True, help='Path to MongoDB export directory')
parser.add_argument('--pg-host', default='localhost', help='PostgreSQL host (default: localhost)')
parser.add_argument('--pg-port', default='5432', help='PostgreSQL port (default: 5432)')
parser.add_argument('--pg-database', required=True, help='PostgreSQL database name')
parser.add_argument('--pg-user', required=True, help='PostgreSQL username')
parser.add_argument('--pg-password', required=True, help='PostgreSQL password')
parser.add_argument('--sample-size', type=int, default=100, help='Number of documents to sample for schema analysis (default: 100)')
parser.add_argument('--batch-size', type=int, default=1000, help='Batch size for data insertion (default: 1000)')
parser.add_argument('--debug', action='store_true', help='Enable debug mode with detailed error logging')
parser.add_argument('--debug-collections', nargs='*', help='Specific collections to debug (e.g., rocketchat_settings rocketchat_room)')
args = parser.parse_args()
postgres_config = {
'host': args.pg_host,
'port': args.pg_port,
'database': args.pg_database,
'user': args.pg_user,
'password': args.pg_password
}
# Enable debug mode if debug collections are specified
debug_mode = args.debug or (args.debug_collections is not None)
converter = MongoToPostgresConverter(args.mongo_path, postgres_config, debug_mode, args.debug_collections)
converter.run_conversion(args.sample_size, args.batch_size)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,41 @@
-- PostgreSQL Database Reset Script for Rocket.Chat Import
-- Run as: sudo -u postgres psql -f reset_database.sql
-- Terminate all connections to the database (force disconnect users)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'rocketchat_converted' AND pid <> pg_backend_pid();
-- Drop the database if it exists
DROP DATABASE IF EXISTS rocketchat_converted;
-- Create fresh database
CREATE DATABASE rocketchat_converted;
-- Create user (if not exists)
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_user WHERE usename = 'rocketchat_user') THEN
CREATE USER rocketchat_user WITH PASSWORD 'HKjLgt23gWuPXzEAn3rW';
END IF;
END $$;
-- Grant database privileges
GRANT CONNECT ON DATABASE rocketchat_converted TO rocketchat_user;
GRANT CREATE ON DATABASE rocketchat_converted TO rocketchat_user;
-- Connect to the new database
\c rocketchat_converted;
-- Grant schema privileges
GRANT CREATE ON SCHEMA public TO rocketchat_user;
GRANT USAGE ON SCHEMA public TO rocketchat_user;
-- Grant privileges on all future tables and sequences
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO rocketchat_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO rocketchat_user;
-- Display success message
\echo 'Database reset completed successfully!'
\echo 'You can now run the converter with:'
\echo 'python3 mongo_to_postgres_converter.py --mongo-path db/database/62df06d44234d20001289144 --pg-database rocketchat_converted --pg-user rocketchat_user --pg-password your_password'

View File

@@ -0,0 +1,54 @@
#!/usr/bin/env python3
"""
Quick test script to verify the converter fixes work for problematic collections
"""
from mongo_to_postgres_converter import MongoToPostgresConverter
def test_problematic_collections():
print("🧪 Testing converter fixes for problematic collections...")
postgres_config = {
'host': 'localhost',
'port': '5432',
'database': 'rocketchat_test',
'user': 'rocketchat_user',
'password': 'password123'
}
converter = MongoToPostgresConverter(
'db/database/62df06d44234d20001289144',
postgres_config,
debug_mode=True,
debug_collections=['rocketchat_settings', 'rocketchat_room']
)
# Test just discovery and schema analysis
print("\n1. Testing collection discovery...")
converter.discover_collections()
print("\n2. Testing schema analysis...")
if 'rocketchat_settings' in converter.collections:
settings_schema = converter.analyze_schema('rocketchat_settings', 10)
print(f"Settings schema fields: {len(settings_schema)}")
# Check specific problematic fields
if 'packageValue' in settings_schema:
packagevalue_info = settings_schema['packageValue']
pg_type = converter._determine_postgres_type(packagevalue_info)
print(f"packageValue types: {packagevalue_info['types']} -> PostgreSQL: {pg_type}")
if 'rocketchat_room' in converter.collections:
room_schema = converter.analyze_schema('rocketchat_room', 10)
print(f"Room schema fields: {len(room_schema)}")
# Check specific problematic fields
if 'sysMes' in room_schema:
sysmes_info = room_schema['sysMes']
pg_type = converter._determine_postgres_type(sysmes_info)
print(f"sysMes types: {sysmes_info['types']} -> PostgreSQL: {pg_type}")
print("\n✅ Test completed - check the type mappings above!")
if __name__ == '__main__':
test_problematic_collections()

View File

@@ -0,0 +1,147 @@
#!/bin/bash
# Chat Database Export Script
# This script exports the chat database schema and data for migration
set -e # Exit on any error
echo "🚀 Starting chat database export..."
# Configuration - Update these values for your setup
DB_HOST="${CHAT_DB_HOST:-localhost}"
DB_PORT="${CHAT_DB_PORT:-5432}"
DB_NAME="${CHAT_DB_NAME:-rocketchat_converted}"
DB_USER="${CHAT_DB_USER:-rocketchat_user}"
# Check if database connection info is available
if [ -z "$CHAT_DB_PASSWORD" ]; then
echo "⚠️ CHAT_DB_PASSWORD environment variable not set"
echo "Please set it with: export CHAT_DB_PASSWORD='your_password'"
exit 1
fi
echo "📊 Database: $DB_NAME on $DB_HOST:$DB_PORT"
# Create export directory
EXPORT_DIR="chat-migration-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$EXPORT_DIR"
echo "📁 Export directory: $EXPORT_DIR"
# Export database schema
echo "📋 Exporting database schema..."
PGPASSWORD="$CHAT_DB_PASSWORD" pg_dump \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
--schema-only \
--no-owner \
--no-privileges \
-f "$EXPORT_DIR/chat-schema.sql"
if [ $? -eq 0 ]; then
echo "✅ Schema exported successfully"
else
echo "❌ Schema export failed"
exit 1
fi
# Export database data
echo "💾 Exporting database data..."
PGPASSWORD="$CHAT_DB_PASSWORD" pg_dump \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
--data-only \
--no-owner \
--no-privileges \
--disable-triggers \
--column-inserts \
-f "$EXPORT_DIR/chat-data.sql"
if [ $? -eq 0 ]; then
echo "✅ Data exported successfully"
else
echo "❌ Data export failed"
exit 1
fi
# Export file uploads and avatars
echo "📎 Exporting chat files (uploads and avatars)..."
if [ -d "db-convert/db/files" ]; then
cd db-convert/db
tar -czf "../../$EXPORT_DIR/chat-files.tar.gz" files/
cd ../..
echo "✅ Files exported successfully"
else
echo "⚠️ No files directory found at db-convert/db/files"
echo " This is normal if you have no file uploads"
touch "$EXPORT_DIR/chat-files.tar.gz"
fi
# Get table statistics for verification
echo "📈 Generating export statistics..."
PGPASSWORD="$CHAT_DB_PASSWORD" psql \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
-c "
SELECT
schemaname,
tablename,
n_tup_ins as inserted_rows,
n_tup_upd as updated_rows,
n_tup_del as deleted_rows,
n_live_tup as live_rows,
n_dead_tup as dead_rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
" > "$EXPORT_DIR/table-stats.txt"
# Create export summary
cat > "$EXPORT_DIR/export-summary.txt" << EOF
Chat Database Export Summary
===========================
Export Date: $(date)
Database: $DB_NAME
Host: $DB_HOST:$DB_PORT
User: $DB_USER
Files Generated:
- chat-schema.sql: Database schema (tables, indexes, constraints)
- chat-data.sql: All table data
- chat-files.tar.gz: Uploaded files and avatars
- table-stats.txt: Database statistics
- export-summary.txt: This summary
Next Steps:
1. Transfer these files to your new server
2. Run create-new-database.sql on the new server first
3. Run import-chat-data.sh on the new server
4. Update your application configuration
5. Run verify-migration.js to validate the migration
Important Notes:
- Keep these files secure as they contain your chat data
- Ensure the new server has enough disk space
- Plan for application downtime during the migration
EOF
echo ""
echo "🎉 Export completed successfully!"
echo "📁 Files are in: $EXPORT_DIR/"
echo ""
echo "📋 Export Summary:"
ls -lh "$EXPORT_DIR/"
echo ""
echo "🚚 Next steps:"
echo "1. Transfer the $EXPORT_DIR/ directory to your new server"
echo "2. Run create-new-database.sql on the new server (update password first!)"
echo "3. Run import-chat-data.sh on the new server"
echo ""
echo "💡 To transfer files to new server:"
echo " scp -r $EXPORT_DIR/ user@new-server:/tmp/"

View File

@@ -0,0 +1,167 @@
#!/bin/bash
# Chat Database Import Script
# This script imports the chat database schema and data on the new server
set -e # Exit on any error
echo "🚀 Starting chat database import..."
# Configuration - Update these values for your new server
DB_HOST="${CHAT_DB_HOST:-localhost}"
DB_PORT="${CHAT_DB_PORT:-5432}"
DB_NAME="${CHAT_DB_NAME:-rocketchat_converted}"
DB_USER="${CHAT_DB_USER:-rocketchat_user}"
# Check if database connection info is available
if [ -z "$CHAT_DB_PASSWORD" ]; then
echo "⚠️ CHAT_DB_PASSWORD environment variable not set"
echo "Please set it with: export CHAT_DB_PASSWORD='your_password'"
exit 1
fi
# Find the migration directory
MIGRATION_DIR=""
if [ -d "/tmp" ]; then
MIGRATION_DIR=$(find /tmp -maxdepth 1 -name "chat-migration-*" -type d | head -1)
fi
if [ -z "$MIGRATION_DIR" ]; then
echo "❌ No migration directory found in /tmp/"
echo "Please specify the migration directory:"
read -p "Enter full path to migration directory: " MIGRATION_DIR
fi
if [ ! -d "$MIGRATION_DIR" ]; then
echo "❌ Migration directory not found: $MIGRATION_DIR"
exit 1
fi
echo "📁 Using migration directory: $MIGRATION_DIR"
echo "📊 Target database: $DB_NAME on $DB_HOST:$DB_PORT"
# Verify required files exist
REQUIRED_FILES=("chat-schema.sql" "chat-data.sql" "chat-files.tar.gz")
for file in "${REQUIRED_FILES[@]}"; do
if [ ! -f "$MIGRATION_DIR/$file" ]; then
echo "❌ Required file not found: $MIGRATION_DIR/$file"
exit 1
fi
done
echo "✅ All required files found"
# Test database connection
echo "🔗 Testing database connection..."
PGPASSWORD="$CHAT_DB_PASSWORD" psql \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
-c "SELECT version();" > /dev/null
if [ $? -eq 0 ]; then
echo "✅ Database connection successful"
else
echo "❌ Database connection failed"
echo "Please ensure:"
echo " 1. PostgreSQL is running"
echo " 2. Database '$DB_NAME' exists"
echo " 3. User '$DB_USER' has access"
echo " 4. Password is correct"
exit 1
fi
# Import database schema
echo "📋 Importing database schema..."
PGPASSWORD="$CHAT_DB_PASSWORD" psql \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
-f "$MIGRATION_DIR/chat-schema.sql"
if [ $? -eq 0 ]; then
echo "✅ Schema imported successfully"
else
echo "❌ Schema import failed"
exit 1
fi
# Import database data
echo "💾 Importing database data..."
echo " This may take a while depending on data size..."
PGPASSWORD="$CHAT_DB_PASSWORD" psql \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
-f "$MIGRATION_DIR/chat-data.sql"
if [ $? -eq 0 ]; then
echo "✅ Data imported successfully"
else
echo "❌ Data import failed"
echo "Check the error messages above for details"
exit 1
fi
# Create files directory and import files
echo "📎 Setting up files directory..."
mkdir -p "db-convert/db"
if [ -s "$MIGRATION_DIR/chat-files.tar.gz" ]; then
echo "📂 Extracting chat files..."
cd db-convert/db
tar -xzf "$MIGRATION_DIR/chat-files.tar.gz"
cd ../..
# Set proper permissions
if [ -d "db-convert/db/files" ]; then
chmod -R 755 db-convert/db/files
echo "✅ Files imported and permissions set"
else
echo "⚠️ Files directory not created properly"
fi
else
echo " No files to import (empty archive)"
mkdir -p "db-convert/db/files/uploads"
mkdir -p "db-convert/db/files/avatars"
fi
# Get final table statistics
echo "📈 Generating import statistics..."
PGPASSWORD="$CHAT_DB_PASSWORD" psql \
-h "$DB_HOST" \
-p "$DB_PORT" \
-U "$DB_USER" \
-d "$DB_NAME" \
-c "
SELECT
tablename,
n_live_tup as row_count
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY n_live_tup DESC;
"
# Create import summary
echo ""
echo "🎉 Import completed successfully!"
echo ""
echo "📋 Import Summary:"
echo " Database: $DB_NAME"
echo " Host: $DB_HOST:$DB_PORT"
echo " Files location: $(pwd)/db-convert/db/files/"
echo ""
echo "🔍 Next steps:"
echo "1. Update your application configuration to use this database"
echo "2. Run verify-migration.js to validate the migration"
echo "3. Test your application thoroughly"
echo "4. Update DNS/load balancer to point to new server"
echo ""
echo "⚠️ Important:"
echo "- Keep the original data as backup until migration is fully validated"
echo "- Monitor the application closely after switching"
echo "- Have a rollback plan ready"

View File

@@ -0,0 +1,86 @@
# Chat Database Migration Guide
This guide will help you migrate your chat database from the current server to a new PostgreSQL server.
## Overview
Your chat system uses:
- Database: `rocketchat_converted` (PostgreSQL)
- Main tables: users, message, room, uploads, avatars, subscription
- File storage: db-convert/db/files/ directory with uploads and avatars
- Environment configuration for database connection
## Migration Steps
### 1. Pre-Migration Setup
On your **new server**, ensure PostgreSQL is installed and running:
```bash
# Install PostgreSQL (if not already done)
sudo apt update
sudo apt install postgresql postgresql-contrib
# Start PostgreSQL service
sudo systemctl start postgresql
sudo systemctl enable postgresql
```
### 2. Create Database Schema on New Server
Run the provided migration script:
```bash
# On new server
sudo -u postgres psql -f create-new-database.sql
```
### 3. Export Data from Current Server
Run the export script:
```bash
# On current server
./export-chat-data.sh
```
This will create:
- `chat-schema.sql` - Database schema
- `chat-data.sql` - All table data
- `chat-files.tar.gz` - All uploaded files and avatars
### 4. Transfer Data to New Server
```bash
# Copy files to new server
scp chat-schema.sql chat-data.sql chat-files.tar.gz user@new-server:/tmp/
```
### 5. Import Data on New Server
```bash
# On new server
./import-chat-data.sh
```
### 6. Update Configuration
Update your environment variables to point to the new database server.
### 7. Verify Migration
Run the verification script to ensure everything transferred correctly:
```bash
node verify-migration.js
```
## Files Provided
1. `create-new-database.sql` - Creates database and user on new server
2. `export-chat-data.sh` - Exports data from current server
3. `import-chat-data.sh` - Imports data to new server
4. `verify-migration.js` - Verifies data integrity
5. `update-config-template.env` - Template for new configuration
## Important Notes
- **Backup first**: Always backup your current database before migration
- **Downtime**: Plan for application downtime during migration
- **File permissions**: Ensure file permissions are preserved during transfer
- **Network access**: Ensure the new server can accept connections from your application server

1446
inventory-server/chat/package-lock.json generated Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,20 @@
{
"name": "chat-server",
"version": "1.0.0",
"description": "Chat archive server for Rocket.Chat data",
"main": "server.js",
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js"
},
"dependencies": {
"express": "^4.18.2",
"cors": "^2.8.5",
"pg": "^8.11.0",
"dotenv": "^16.0.3",
"morgan": "^1.10.0"
},
"devDependencies": {
"nodemon": "^2.0.22"
}
}

View File

@@ -0,0 +1,649 @@
const express = require('express');
const path = require('path');
const router = express.Router();
// Serve uploaded files with proper mapping from database paths to actual file locations
router.get('/files/uploads/*', async (req, res) => {
try {
// Extract the path from the URL (everything after /files/uploads/)
const requestPath = req.params[0];
// The URL path will be like: ufs/AmazonS3:Uploads/274Mf9CyHNG72oF86/filename.jpg
// We need to extract the mongo_id (274Mf9CyHNG72oF86) from this path
const pathParts = requestPath.split('/');
let mongoId = null;
// Find the mongo_id in the path structure
for (let i = 0; i < pathParts.length; i++) {
if (pathParts[i].includes('AmazonS3:Uploads') && i + 1 < pathParts.length) {
mongoId = pathParts[i + 1];
break;
}
// Sometimes the mongo_id might be the last part of ufs/AmazonS3:Uploads/mongoId
if (pathParts[i] === 'AmazonS3:Uploads' && i + 1 < pathParts.length) {
mongoId = pathParts[i + 1];
break;
}
}
if (!mongoId) {
// Try to get mongo_id from database by matching the full path
const result = await global.pool.query(`
SELECT mongo_id, name, type
FROM uploads
WHERE path = $1 OR url = $1
LIMIT 1
`, [`/ufs/AmazonS3:Uploads/${requestPath}`]);
if (result.rows.length > 0) {
mongoId = result.rows[0].mongo_id;
}
}
if (!mongoId) {
return res.status(404).json({ error: 'File not found' });
}
// The actual file is stored with just the mongo_id as filename
const filePath = path.join(__dirname, 'db-convert/db/files/uploads', mongoId);
// Get file info from database for proper content-type
const fileInfo = await global.pool.query(`
SELECT name, type
FROM uploads
WHERE mongo_id = $1
LIMIT 1
`, [mongoId]);
if (fileInfo.rows.length === 0) {
return res.status(404).json({ error: 'File metadata not found' });
}
const { name, type } = fileInfo.rows[0];
// Set proper content type
if (type) {
res.set('Content-Type', type);
}
// Set content disposition with original filename
if (name) {
res.set('Content-Disposition', `inline; filename="${name}"`);
}
// Send the file
res.sendFile(filePath, (err) => {
if (err) {
console.error('Error serving file:', err);
if (!res.headersSent) {
res.status(404).json({ error: 'File not found on disk' });
}
}
});
} catch (error) {
console.error('Error serving upload:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Also serve files directly by mongo_id for simpler access
router.get('/files/by-id/:mongoId', async (req, res) => {
try {
const { mongoId } = req.params;
// Get file info from database
const fileInfo = await global.pool.query(`
SELECT name, type
FROM uploads
WHERE mongo_id = $1
LIMIT 1
`, [mongoId]);
if (fileInfo.rows.length === 0) {
return res.status(404).json({ error: 'File not found' });
}
const { name, type } = fileInfo.rows[0];
const filePath = path.join(__dirname, 'db-convert/db/files/uploads', mongoId);
// Set proper content type and filename
if (type) {
res.set('Content-Type', type);
}
if (name) {
res.set('Content-Disposition', `inline; filename="${name}"`);
}
// Send the file
res.sendFile(filePath, (err) => {
if (err) {
console.error('Error serving file:', err);
if (!res.headersSent) {
res.status(404).json({ error: 'File not found on disk' });
}
}
});
} catch (error) {
console.error('Error serving upload by ID:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Serve user avatars by mongo_id
router.get('/avatar/:mongoId', async (req, res) => {
try {
const { mongoId } = req.params;
console.log(`[Avatar Debug] Looking up avatar for user mongo_id: ${mongoId}`);
// First try to find avatar by user's avataretag
const userResult = await global.pool.query(`
SELECT avataretag, username FROM users WHERE mongo_id = $1
`, [mongoId]);
let avatarPath = null;
if (userResult.rows.length > 0) {
const username = userResult.rows[0].username;
const avataretag = userResult.rows[0].avataretag;
// Try method 1: Look up by avataretag -> etag (for users with avataretag set)
if (avataretag) {
console.log(`[Avatar Debug] Found user ${username} with avataretag: ${avataretag}`);
const avatarResult = await global.pool.query(`
SELECT url, path FROM avatars WHERE etag = $1
`, [avataretag]);
if (avatarResult.rows.length > 0) {
const dbPath = avatarResult.rows[0].path || avatarResult.rows[0].url;
console.log(`[Avatar Debug] Found avatar record with path: ${dbPath}`);
if (dbPath) {
const pathParts = dbPath.split('/');
for (let i = 0; i < pathParts.length; i++) {
if (pathParts[i].includes('AmazonS3:Avatars') && i + 1 < pathParts.length) {
const avatarMongoId = pathParts[i + 1];
avatarPath = path.join(__dirname, 'db-convert/db/files/avatars', avatarMongoId);
console.log(`[Avatar Debug] Extracted avatar mongo_id: ${avatarMongoId}, full path: ${avatarPath}`);
break;
}
}
}
} else {
console.log(`[Avatar Debug] No avatar record found for etag: ${avataretag}`);
}
}
// Try method 2: Look up by userid directly (for users without avataretag)
if (!avatarPath) {
console.log(`[Avatar Debug] Trying direct userid lookup for user ${username} (${mongoId})`);
const avatarResult = await global.pool.query(`
SELECT url, path FROM avatars WHERE userid = $1
`, [mongoId]);
if (avatarResult.rows.length > 0) {
const dbPath = avatarResult.rows[0].path || avatarResult.rows[0].url;
console.log(`[Avatar Debug] Found avatar record by userid with path: ${dbPath}`);
if (dbPath) {
const pathParts = dbPath.split('/');
for (let i = 0; i < pathParts.length; i++) {
if (pathParts[i].includes('AmazonS3:Avatars') && i + 1 < pathParts.length) {
const avatarMongoId = pathParts[i + 1];
avatarPath = path.join(__dirname, 'db-convert/db/files/avatars', avatarMongoId);
console.log(`[Avatar Debug] Extracted avatar mongo_id: ${avatarMongoId}, full path: ${avatarPath}`);
break;
}
}
}
} else {
console.log(`[Avatar Debug] No avatar record found for userid: ${mongoId}`);
}
}
} else {
console.log(`[Avatar Debug] No user found for mongo_id: ${mongoId}`);
}
// Fallback: try direct lookup by user mongo_id
if (!avatarPath) {
avatarPath = path.join(__dirname, 'db-convert/db/files/avatars', mongoId);
console.log(`[Avatar Debug] Using fallback path: ${avatarPath}`);
}
// Set proper content type for images
res.set('Content-Type', 'image/jpeg'); // Most avatars are likely JPEG
// Send the file
res.sendFile(avatarPath, (err) => {
if (err) {
// If avatar doesn't exist, send a default 404 or generate initials
console.log(`[Avatar Debug] Avatar file not found at path: ${avatarPath}, error:`, err.message);
if (!res.headersSent) {
res.status(404).json({ error: 'Avatar not found' });
}
} else {
console.log(`[Avatar Debug] Successfully served avatar from: ${avatarPath}`);
}
});
} catch (error) {
console.error('Error serving avatar:', error);
res.status(500).json({ error: 'Server error' });
}
});
// Serve avatars statically as fallback
router.use('/files/avatars', express.static(path.join(__dirname, 'db-convert/db/files/avatars')));
// Get all users for the "view as" dropdown (active and inactive)
router.get('/users', async (req, res) => {
try {
const result = await global.pool.query(`
SELECT id, username, name, type, active, status, lastlogin,
statustext, utcoffset, statusconnection, mongo_id, avataretag
FROM users
WHERE type = 'user'
ORDER BY
active DESC, -- Active users first
CASE
WHEN status = 'online' THEN 1
WHEN status = 'away' THEN 2
WHEN status = 'busy' THEN 3
ELSE 4
END,
name ASC
`);
res.json({
status: 'success',
users: result.rows
});
} catch (error) {
console.error('Error fetching users:', error);
res.status(500).json({
status: 'error',
error: 'Failed to fetch users',
details: error.message
});
}
});
// Get rooms for a specific user with enhanced room names for direct messages
router.get('/users/:userId/rooms', async (req, res) => {
const { userId } = req.params;
try {
// Get the current user's mongo_id for filtering
const userResult = await global.pool.query(`
SELECT mongo_id, username FROM users WHERE id = $1
`, [userId]);
if (userResult.rows.length === 0) {
return res.status(404).json({
status: 'error',
error: 'User not found'
});
}
const currentUserMongoId = userResult.rows[0].mongo_id;
const currentUsername = userResult.rows[0].username;
// Get rooms where the user is a member with proper naming from subscription table
// Include archived and closed rooms but sort them at the bottom
const result = await global.pool.query(`
SELECT DISTINCT
r.id,
r.mongo_id as room_mongo_id,
r.name,
r.fname,
r.t as type,
r.msgs,
r.lm as last_message_date,
r.usernames,
r.uids,
r.userscount,
r.description,
r.teamid,
r.archived,
s.open,
-- Use the subscription's name for direct messages (excludes current user)
-- For channels/groups, use room's fname or name
CASE
WHEN r.t = 'd' THEN COALESCE(s.fname, s.name, 'Unknown User')
ELSE COALESCE(r.fname, r.name, 'Unnamed Room')
END as display_name
FROM room r
JOIN subscription s ON s.rid = r.mongo_id
WHERE s.u->>'_id' = $1
ORDER BY
s.open DESC NULLS LAST, -- Open rooms first
r.archived NULLS FIRST, -- Non-archived first (nulls treated as false)
r.lm DESC NULLS LAST
LIMIT 50
`, [currentUserMongoId]);
// Enhance rooms with participant information for direct messages
const enhancedRooms = await Promise.all(result.rows.map(async (room) => {
if (room.type === 'd' && room.uids) {
// Get participant info (excluding current user) for direct messages
const participantResult = await global.pool.query(`
SELECT u.username, u.name, u.mongo_id, u.avataretag
FROM users u
WHERE u.mongo_id = ANY($1::text[])
AND u.mongo_id != $2
`, [room.uids, currentUserMongoId]);
room.participants = participantResult.rows;
}
return room;
}));
res.json({
status: 'success',
rooms: enhancedRooms
});
} catch (error) {
console.error('Error fetching user rooms:', error);
res.status(500).json({
status: 'error',
error: 'Failed to fetch user rooms',
details: error.message
});
}
});
// Get room details including participants
router.get('/rooms/:roomId', async (req, res) => {
const { roomId } = req.params;
const { userId } = req.query; // Accept current user ID as query parameter
try {
const result = await global.pool.query(`
SELECT r.id, r.name, r.fname, r.t as type, r.msgs, r.description,
r.lm as last_message_date, r.usernames, r.uids, r.userscount, r.teamid
FROM room r
WHERE r.id = $1
`, [roomId]);
if (result.rows.length === 0) {
return res.status(404).json({
status: 'error',
error: 'Room not found'
});
}
const room = result.rows[0];
// For direct messages, get the proper display name based on current user
if (room.type === 'd' && room.uids && userId) {
// Get current user's mongo_id
const userResult = await global.pool.query(`
SELECT mongo_id FROM users WHERE id = $1
`, [userId]);
if (userResult.rows.length > 0) {
const currentUserMongoId = userResult.rows[0].mongo_id;
// Get display name from subscription table for this user
// Use room mongo_id to match with subscription.rid
const roomMongoResult = await global.pool.query(`
SELECT mongo_id FROM room WHERE id = $1
`, [roomId]);
if (roomMongoResult.rows.length > 0) {
const roomMongoId = roomMongoResult.rows[0].mongo_id;
const subscriptionResult = await global.pool.query(`
SELECT fname, name FROM subscription
WHERE rid = $1 AND u->>'_id' = $2
`, [roomMongoId, currentUserMongoId]);
if (subscriptionResult.rows.length > 0) {
const sub = subscriptionResult.rows[0];
room.display_name = sub.fname || sub.name || 'Unknown User';
}
}
}
// Get all participants for additional info
const participantResult = await global.pool.query(`
SELECT username, name
FROM users
WHERE mongo_id = ANY($1::text[])
`, [room.uids]);
room.participants = participantResult.rows;
} else {
// For channels/groups, use room's fname or name
room.display_name = room.fname || room.name || 'Unnamed Room';
}
res.json({
status: 'success',
room: room
});
} catch (error) {
console.error('Error fetching room details:', error);
res.status(500).json({
status: 'error',
error: 'Failed to fetch room details',
details: error.message
});
}
});
// Get messages for a specific room (fast, without attachments)
router.get('/rooms/:roomId/messages', async (req, res) => {
const { roomId } = req.params;
const { limit = 50, offset = 0, before } = req.query;
try {
// Fast query - just get messages without expensive attachment joins
let query = `
SELECT m.id, m.msg, m.ts, m.u, m._updatedat, m.urls, m.mentions, m.md
FROM message m
JOIN room r ON m.rid = r.mongo_id
WHERE r.id = $1
`;
const params = [roomId];
if (before) {
query += ` AND m.ts < $${params.length + 1}`;
params.push(before);
}
query += ` ORDER BY m.ts DESC LIMIT $${params.length + 1} OFFSET $${params.length + 2}`;
params.push(limit, offset);
const result = await global.pool.query(query, params);
// Add empty attachments array for now - attachments will be loaded separately if needed
const messages = result.rows.map(msg => ({
...msg,
attachments: []
}));
res.json({
status: 'success',
messages: messages.reverse() // Reverse to show oldest first
});
} catch (error) {
console.error('Error fetching messages:', error);
res.status(500).json({
status: 'error',
error: 'Failed to fetch messages',
details: error.message
});
}
});
// Get attachments for specific messages (called separately for performance)
router.post('/messages/attachments', async (req, res) => {
const { messageIds } = req.body;
if (!messageIds || !Array.isArray(messageIds) || messageIds.length === 0) {
return res.json({ status: 'success', attachments: {} });
}
try {
// Get room mongo_id from first message to limit search scope
const roomQuery = await global.pool.query(`
SELECT r.mongo_id as room_mongo_id
FROM message m
JOIN room r ON m.rid = r.mongo_id
WHERE m.id = $1
LIMIT 1
`, [messageIds[0]]);
if (roomQuery.rows.length === 0) {
return res.json({ status: 'success', attachments: {} });
}
const roomMongoId = roomQuery.rows[0].room_mongo_id;
// Get messages and their upload timestamps
const messagesQuery = await global.pool.query(`
SELECT m.id, m.ts, m.u->>'_id' as user_id
FROM message m
WHERE m.id = ANY($1::int[])
`, [messageIds]);
if (messagesQuery.rows.length === 0) {
return res.json({ status: 'success', attachments: {} });
}
// Build a map of user_id -> array of message timestamps for efficient lookup
const userTimeMap = {};
const messageMap = {};
messagesQuery.rows.forEach(msg => {
if (!userTimeMap[msg.user_id]) {
userTimeMap[msg.user_id] = [];
}
userTimeMap[msg.user_id].push(msg.ts);
messageMap[msg.id] = { ts: msg.ts, user_id: msg.user_id };
});
// Get attachments for this room and these users
const uploadsQuery = await global.pool.query(`
SELECT id, mongo_id, name, size, type, url, path, typegroup, identify,
userid, uploadedat
FROM uploads
WHERE rid = $1
AND userid = ANY($2::text[])
ORDER BY uploadedat
`, [roomMongoId, Object.keys(userTimeMap)]);
// Match attachments to messages based on timestamp proximity (within 5 minutes)
const attachmentsByMessage = {};
uploadsQuery.rows.forEach(upload => {
const uploadTime = new Date(upload.uploadedat).getTime();
// Find the closest message from this user within 5 minutes
let closestMessageId = null;
let closestTimeDiff = Infinity;
Object.entries(messageMap).forEach(([msgId, msgData]) => {
if (msgData.user_id === upload.userid) {
const msgTime = new Date(msgData.ts).getTime();
const timeDiff = Math.abs(uploadTime - msgTime);
if (timeDiff < 300000 && timeDiff < closestTimeDiff) { // 5 minutes = 300000ms
closestMessageId = msgId;
closestTimeDiff = timeDiff;
}
}
});
if (closestMessageId) {
if (!attachmentsByMessage[closestMessageId]) {
attachmentsByMessage[closestMessageId] = [];
}
attachmentsByMessage[closestMessageId].push({
id: upload.id,
mongo_id: upload.mongo_id,
name: upload.name,
size: upload.size,
type: upload.type,
url: upload.url,
path: upload.path,
typegroup: upload.typegroup,
identify: upload.identify
});
}
});
res.json({
status: 'success',
attachments: attachmentsByMessage
});
} catch (error) {
console.error('Error fetching message attachments:', error);
res.status(500).json({
status: 'error',
error: 'Failed to fetch attachments',
details: error.message
});
}
});
// Search messages in accessible rooms for a user
router.get('/users/:userId/search', async (req, res) => {
const { userId } = req.params;
const { q, limit = 20 } = req.query;
if (!q || q.length < 2) {
return res.status(400).json({
status: 'error',
error: 'Search query must be at least 2 characters'
});
}
try {
const userResult = await global.pool.query(`
SELECT mongo_id FROM users WHERE id = $1
`, [userId]);
if (userResult.rows.length === 0) {
return res.status(404).json({
status: 'error',
error: 'User not found'
});
}
const currentUserMongoId = userResult.rows[0].mongo_id;
const result = await global.pool.query(`
SELECT m.id, m.msg, m.ts, m.u, r.id as room_id, r.name as room_name, r.fname as room_fname, r.t as room_type
FROM message m
JOIN room r ON m.rid = r.mongo_id
JOIN subscription s ON s.rid = r.mongo_id AND s.u->>'_id' = $1
WHERE m.msg ILIKE $2
AND r.archived IS NOT TRUE
ORDER BY m.ts DESC
LIMIT $3
`, [currentUserMongoId, `%${q}%`, limit]);
res.json({
status: 'success',
results: result.rows
});
} catch (error) {
console.error('Error searching messages:', error);
res.status(500).json({
status: 'error',
error: 'Failed to search messages',
details: error.message
});
}
});
module.exports = router;

View File

@@ -33,7 +33,7 @@ global.pool = pool;
app.use(express.json());
app.use(morgan('combined'));
app.use(cors({
origin: ['http://localhost:5175', 'http://localhost:5174', 'https://inventory.kent.pw', 'https://tools.acherryontop.com', 'https://tools.acherryontop.com'],
origin: ['http://localhost:5175', 'http://localhost:5174', 'https://inventory.kent.pw', 'https://acot.site', 'https://tools.acherryontop.com'],
credentials: true
}));

View File

@@ -0,0 +1,26 @@
# Chat Server Database Configuration Template
# Copy this to your .env file and update the values for your new server
# Database Configuration for New Server
CHAT_DB_HOST=your-new-server-ip-or-hostname
CHAT_DB_PORT=5432
CHAT_DB_NAME=rocketchat_converted
CHAT_DB_USER=rocketchat_user
CHAT_DB_PASSWORD=your-secure-password
# Chat Server Port
CHAT_PORT=3014
# Example configuration:
# CHAT_DB_HOST=192.168.1.100
# CHAT_DB_PORT=5432
# CHAT_DB_NAME=rocketchat_converted
# CHAT_DB_USER=rocketchat_user
# CHAT_DB_PASSWORD=MySecureP@ssw0rd123
# Notes:
# - Replace 'your-new-server-ip-or-hostname' with actual server address
# - Use a strong password for CHAT_DB_PASSWORD
# - Ensure the new server allows connections from your application server
# - Update any firewall rules to allow PostgreSQL connections (port 5432)
# - Test connectivity before updating production configuration

View File

@@ -0,0 +1,231 @@
#!/usr/bin/env node
/**
* Chat Database Migration Verification Script
*
* This script verifies that the chat database migration was successful
* by comparing record counts and testing basic functionality.
*/
require('dotenv').config({ path: '../.env' });
const { Pool } = require('pg');
// Database configuration
const pool = new Pool({
host: process.env.CHAT_DB_HOST || 'localhost',
user: process.env.CHAT_DB_USER || 'rocketchat_user',
password: process.env.CHAT_DB_PASSWORD,
database: process.env.CHAT_DB_NAME || 'rocketchat_converted',
port: process.env.CHAT_DB_PORT || 5432,
});
const originalStats = process.argv[2] ? JSON.parse(process.argv[2]) : null;
async function verifyMigration() {
console.log('🔍 Starting migration verification...\n');
try {
// Test basic connection
console.log('🔗 Testing database connection...');
const versionResult = await pool.query('SELECT version()');
console.log('✅ Database connection successful');
console.log(` PostgreSQL version: ${versionResult.rows[0].version.split(' ')[1]}\n`);
// Get table statistics
console.log('📊 Checking table statistics...');
const statsResult = await pool.query(`
SELECT
tablename,
n_live_tup as row_count,
n_dead_tup as dead_rows,
schemaname
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY n_live_tup DESC
`);
if (statsResult.rows.length === 0) {
console.log('❌ No tables found! Migration may have failed.');
return false;
}
console.log('📋 Table Statistics:');
console.log(' Table Name | Row Count | Dead Rows');
console.log(' -------------------|-----------|----------');
let totalRows = 0;
const tableStats = {};
for (const row of statsResult.rows) {
const rowCount = parseInt(row.row_count) || 0;
const deadRows = parseInt(row.dead_rows) || 0;
totalRows += rowCount;
tableStats[row.tablename] = rowCount;
console.log(` ${row.tablename.padEnd(18)} | ${rowCount.toString().padStart(9)} | ${deadRows.toString().padStart(8)}`);
}
console.log(`\n Total rows across all tables: ${totalRows}\n`);
// Verify critical tables exist and have data
const criticalTables = ['users', 'message', 'room'];
console.log('🔑 Checking critical tables...');
for (const table of criticalTables) {
if (tableStats[table] > 0) {
console.log(`${table}: ${tableStats[table]} rows`);
} else if (tableStats[table] === 0) {
console.log(`⚠️ ${table}: table exists but is empty`);
} else {
console.log(`${table}: table not found`);
return false;
}
}
// Test specific functionality
console.log('\n🧪 Testing specific functionality...');
// Test users table
const userTest = await pool.query(`
SELECT COUNT(*) as total_users,
COUNT(*) FILTER (WHERE active = true) as active_users,
COUNT(*) FILTER (WHERE type = 'user') as regular_users
FROM users
`);
if (userTest.rows[0]) {
const { total_users, active_users, regular_users } = userTest.rows[0];
console.log(`✅ Users: ${total_users} total, ${active_users} active, ${regular_users} regular users`);
}
// Test messages table
const messageTest = await pool.query(`
SELECT COUNT(*) as total_messages,
COUNT(DISTINCT rid) as unique_rooms,
MIN(ts) as oldest_message,
MAX(ts) as newest_message
FROM message
`);
if (messageTest.rows[0]) {
const { total_messages, unique_rooms, oldest_message, newest_message } = messageTest.rows[0];
console.log(`✅ Messages: ${total_messages} total across ${unique_rooms} rooms`);
if (oldest_message && newest_message) {
console.log(` Date range: ${oldest_message.toISOString().split('T')[0]} to ${newest_message.toISOString().split('T')[0]}`);
}
}
// Test rooms table
const roomTest = await pool.query(`
SELECT COUNT(*) as total_rooms,
COUNT(*) FILTER (WHERE t = 'c') as channels,
COUNT(*) FILTER (WHERE t = 'p') as private_groups,
COUNT(*) FILTER (WHERE t = 'd') as direct_messages
FROM room
`);
if (roomTest.rows[0]) {
const { total_rooms, channels, private_groups, direct_messages } = roomTest.rows[0];
console.log(`✅ Rooms: ${total_rooms} total (${channels} channels, ${private_groups} private, ${direct_messages} DMs)`);
}
// Test file uploads if table exists
if (tableStats.uploads > 0) {
const uploadTest = await pool.query(`
SELECT COUNT(*) as total_uploads,
COUNT(DISTINCT typegroup) as file_types,
pg_size_pretty(SUM(size)) as total_size
FROM uploads
WHERE size IS NOT NULL
`);
if (uploadTest.rows[0]) {
const { total_uploads, file_types, total_size } = uploadTest.rows[0];
console.log(`✅ Uploads: ${total_uploads} files, ${file_types} types, ${total_size || 'unknown size'}`);
}
}
// Test server health endpoint simulation
console.log('\n🏥 Testing application endpoints simulation...');
try {
const healthTest = await pool.query(`
SELECT
(SELECT COUNT(*) FROM users WHERE active = true) as active_users,
(SELECT COUNT(*) FROM message) as total_messages,
(SELECT COUNT(*) FROM room) as total_rooms
`);
if (healthTest.rows[0]) {
const stats = healthTest.rows[0];
console.log('✅ Health check simulation passed');
console.log(` Active users: ${stats.active_users}`);
console.log(` Total messages: ${stats.total_messages}`);
console.log(` Total rooms: ${stats.total_rooms}`);
}
} catch (error) {
console.log(`⚠️ Health check simulation failed: ${error.message}`);
}
// Check indexes
console.log('\n📇 Checking database indexes...');
const indexResult = await pool.query(`
SELECT
schemaname,
tablename,
indexname,
indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname
`);
const indexesByTable = {};
for (const idx of indexResult.rows) {
if (!indexesByTable[idx.tablename]) {
indexesByTable[idx.tablename] = [];
}
indexesByTable[idx.tablename].push(idx.indexname);
}
for (const [table, indexes] of Object.entries(indexesByTable)) {
console.log(` ${table}: ${indexes.length} indexes`);
}
console.log('\n🎉 Migration verification completed successfully!');
console.log('\n✅ Summary:');
console.log(` - Database connection: Working`);
console.log(` - Tables created: ${statsResult.rows.length}`);
console.log(` - Total data rows: ${totalRows}`);
console.log(` - Critical tables: All present`);
console.log(` - Indexes: ${indexResult.rows.length} total`);
console.log('\n🚀 Next steps:');
console.log(' 1. Update your application configuration');
console.log(' 2. Start your chat server');
console.log(' 3. Test chat functionality in the browser');
console.log(' 4. Monitor logs for any issues');
return true;
} catch (error) {
console.error('❌ Migration verification failed:', error.message);
console.error('\n🔧 Troubleshooting steps:');
console.error(' 1. Check database connection settings');
console.error(' 2. Verify database and user exist');
console.error(' 3. Check PostgreSQL logs');
console.error(' 4. Ensure import completed without errors');
return false;
} finally {
await pool.end();
}
}
// Run verification
if (require.main === module) {
verifyMigration().then(success => {
process.exit(success ? 0 : 1);
});
}
module.exports = { verifyMigration };

View File

@@ -0,0 +1,20 @@
# Caching Server Configuration
PORT=3010
NODE_ENV=production
# Database Configuration
MONGODB_URI=mongodb://dashboard_user:WDRFWiGXEeaC6aAyUKuT@localhost:27017/dashboard?authSource=dashboard
REDIS_URL=redis://:Wgj32YXxxVLtPZoVzUnP@localhost:6379
# Gorgias
GORGIAS_API_USERNAME=matt@acherryontop.com
GORGIAS_API_PASSWORD=d2ed0d23d2a7bf11a633a12fb260769f4e4a970d440693e7d64b8d2223fa6503
# GA4 credentials
GA_PROPERTY_ID=281045851
GOOGLE_APPLICATION_CREDENTIALS_JSON={"type": "service_account","project_id": "acot-stats","private_key_id": "259d1fd9864efbfa38b8ba02fdd74dc008ace3c5","private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5Y6foai8WF98k\nIA0yLn94Y3lmDYlyvI9xL2YqSZSyvgK35wdWRTIaEvHKdiUWuYi3ZPdkYmz1OYiV\njVfR2g+mFpA7MI/JMwyGWwjnV4WW2q6INfgi/PvHlbP3LyyQo0B8CvAY0CHqrpDs\nlJQhAkqmteU24dqcdZoV3vM8JMsDiXm44DqwXsEfWibKv4i0mWNkwiEQr0yImHwb\nbjgclwVLLi5kdM2+49PXr47LCODdL+xmX0uSdgSG6XYqEIVsEOXIUJKzqUe036b/\nEFQ0BxWdJBWs/MYOapn/NNv+Mts+am2ipUuIcgPbOut4xa2Fkky93WnJf0tB+VJP\njFnyZJhdAgMBAAECggEAC980Cp/4zvSNZMNWr6l8ST8u2thavnRmcoGYtx7ffQjK\nT3Dl2TefgJLzqpr2lLt3OVint7p5LsUAmE8lBLpu+RxbH9HkIKbPvQTfD5gyZQQx\nBruqCGzkn2st9fzZNj6gwQYe9P/TGYkUnR8wqI0nLwDZTQful3QNKixiWC4lAAoK\nqdd6H++pqjVUiTqgFwFD3zBAhO0Lp8m/c5vTRT5kxi0wCTK66FaaGLr2OwZHcohp\nE8rEcTZ5kaJzBwqEz522R6ufQqN1Swoq4K6Ul3aAc59539VdrLNs++/eRH38MMVq\n5UTwBrH+zIkXIYv4mtGpR1NWGO2bZ652GzGXNEXcQQKBgQD9WsMmioIeWR9P9I0r\nIY+yyxz1EyscutUtnOtROT36OxokrzQaAKDz/OC3jVnhZSkzG6RcmmK/AJrcU+2m\n1L4mZGfF3DdeTqtK/KkNzGs9yRPDkbb/MF0wgtcvfE8tJH/suiDJKQNsjeaQIQW3\n4NvDxs0w60m9r9tk1CQau94ovQKBgQC7UzeA0mDSxIB5agGbvnzaJJTvAFvnCvhz\nu3ZakTlNecAHu4eOMc0+OCHFPLJlLL4b0oraOxZIszX9BTlgcstBmTUk03TibNsS\nsDiImHFC4hE5x6EPdifnkVFUXPMZ/eF0mHUPBEn41ipw1hoLfl6W+aYW9QUxBMWA\nzdMH4rg4IQKBgQCFcMaUiCNchKhfXnj0HKspCp3n3v64FReu/JVcpH+mSnbMl5Mj\nlu0vVSOuyb5rXvLCPm7lb1NPMqxeG75yPl8grYWSyxhGjbzetBD+eYqKclv8h8UQ\nx5JtuJxKIHk7V5whPS+DhByPknW7uAjg/ogBp7XvbB3c0MEHbEzP3991KQKBgC+a\n610Kmd6WX4v7e6Mn2rTZXRwL/E8QA6nttxs3Etf0m++bIczqLR2lyDdGwJNjtoB9\nlhn1sCkTmiHOBRHUuoDWPaI5NtggD+CE9ikIjKgRqY0EhZLXVTbNQFzvLjypv3UR\nFZaWYXIigzCfyIipOcKmeSYWaJZXfxXHuNylKmnhAoGAFa84AuOOGUr+pEvtUzIr\nvBKu1mnQbbsLEhgf3Tw88K3sO5OlguAwBEvD4eitj/aU5u2vJJhFa67cuERLsZru\n0sjtQwP6CJbWF4uaH0Hso4KQvnwl4BfdKwUncqoKtHrQiuGMvr5P5G941+Ax8brE\nJlC2e/RPUQKxScpK3nNK9mc=\n-----END PRIVATE KEY-----\n","client_email": "matt-dashboard@acot-stats.iam.gserviceaccount.com","client_id": "106112731322970982546","auth_uri": "https://accounts.google.com/o/oauth2/auth","token_uri": "https://oauth2.googleapis.com/token","auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs","client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/matt-dashboard%40acot-stats.iam.gserviceaccount.com","universe_domain": "googleapis.com"}
# Logging
LOG_LEVEL=info
LOG_MAX_SIZE=10m
LOG_MAX_FILES=5

View File

@@ -0,0 +1,205 @@
# ACOT Server
This server replaces the Klaviyo integration with direct database queries to the production MySQL database via SSH tunnel. It provides seamless API compatibility for all frontend components without requiring any frontend changes.
## Setup
1. **Environment Variables**: Copy `.env.example` to `.env` and configure:
```
DB_HOST=localhost
DB_PORT=3306
DB_USER=your_db_user
DB_PASSWORD=your_db_password
DB_NAME=your_db_name
PORT=3007
NODE_ENV=development
```
2. **SSH Tunnel**: Ensure your SSH tunnel to the production database is running on localhost:3306.
3. **Install Dependencies**:
```bash
npm install
```
4. **Start Server**:
```bash
npm start
```
## API Endpoints
All endpoints provide exact API compatibility with the previous Klaviyo implementation:
### Main Statistics
- `GET /api/acot/events/stats` - Complete statistics dashboard data
- Query params: `timeRange` (today, yesterday, thisWeek, lastWeek, thisMonth, lastMonth, last7days, last30days, last90days) or `startDate`/`endDate` for custom ranges
- Returns: Revenue, orders, AOV, shipping data, order types, brands/categories, refunds, cancellations, best day, peak hour, order ranges, period progress, projections
### Daily Details
- `GET /api/acot/events/stats/details` - Daily breakdown with previous period comparisons
- Query params: `timeRange`, `metric` (revenue, orders, average_order, etc.), `daily=true`
- Returns: Array of daily data points with trend comparisons
### Products
- `GET /api/acot/events/products` - Top products with sales data
- Query params: `timeRange`
- Returns: Product list with images, sales quantities, revenue, and order counts
### Projections
- `GET /api/acot/events/projection` - Smart revenue projections for incomplete periods
- Query params: `timeRange`
- Returns: Projected revenue with confidence levels based on historical patterns
### Health Check
- `GET /api/acot/test` - Server health and database connectivity test
## Database Schema
The server queries the following main tables:
### Orders (`_order`)
- **Key fields**: `order_id`, `date_placed`, `summary_total`, `order_status`, `ship_method_selected`, `stats_waiting_preorder`
- **Valid orders**: `order_status > 15`
- **Cancelled orders**: `order_status = 15`
- **Shipped orders**: `order_status IN (100, 92)`
- **Pre-orders**: `stats_waiting_preorder > 0`
- **Local pickup**: `ship_method_selected = 'localpickup'`
- **On-hold orders**: `ship_method_selected = 'holdit'`
### Order Items (`order_items`)
- **Fields**: `order_id`, `prod_pid`, `qty_ordered`, `prod_price`
- **Purpose**: Links orders to products for detailed analysis
### Products (`products`)
- **Fields**: `pid`, `description` (product name), `company`
- **Purpose**: Product information and brand data
### Product Images (`product_images`)
- **Fields**: `pid`, `iid`, `order` (priority)
- **Primary image**: `order = 255` (highest priority)
- **Image URL generation**: `https://sbing.com/i/products/0000/{prefix}/{pid}-{type}-{iid}.jpg`
### Payments (`order_payment`)
- **Refunds**: `payment_amount < 0`
- **Purpose**: Track refund amounts and counts
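Putting the schema notes together, a representative revenue query might look like the sketch below (illustrative only; the exact SQL lives in the route handlers and may differ):
```javascript
// Illustrative sketch: count valid orders and sum revenue for a date range.
// Table and column names are taken from the schema notes above.
const [rows] = await connection.execute(
  `SELECT COUNT(*) AS orderCount,
          SUM(o.summary_total) AS revenue
     FROM _order o
    WHERE o.order_status > 15            -- valid (non-cancelled) orders
      AND o.date_placed BETWEEN ? AND ?`,
  [startDate, endDate]
);
```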
## Business Logic
### Time Handling
- **Timezone**: All calculations in UTC-5 (Eastern Time)
- **Business Day**: 1 AM - 12:59 AM Eastern (a 24-hour window offset one hour from the calendar day)
- **Format**: MySQL DATETIME format (YYYY-MM-DD HH:MM:SS)
- **Period Boundaries**: Calculated using `timeUtils.js` for consistent time range handling
### Order Processing
- **Revenue Calculation**: Only includes orders with `order_status > 15`
- **Order Types**:
- Pre-orders: `stats_waiting_preorder > 0`
- Local pickup: `ship_method_selected = 'localpickup'`
- On-hold: `ship_method_selected = 'holdit'`
- **Shipping Methods**: Mapped to friendly names (e.g., `usps_ground_advantage` → "USPS Ground Advantage")
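The friendly-name mapping is a plain lookup table; a hypothetical excerpt is shown below (the real table in the server code likely covers more carriers):
```javascript
// Hypothetical excerpt of the shipping-method display mapping
const SHIP_METHOD_LABELS = {
  usps_ground_advantage: 'USPS Ground Advantage',
  localpickup: 'Local Pickup',
  holdit: 'On Hold',
};
const label = SHIP_METHOD_LABELS[shipMethod] || shipMethod;
```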
### Projections
- **Period Progress**: Calculated based on current time within the selected period
- **Simple Projection**: Linear extrapolation based on current progress
- **Smart Projection**: Uses historical data patterns for more accurate forecasting
- **Confidence Levels**: Based on data consistency and historical accuracy
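The simple projection is plain linear extrapolation; a minimal sketch, assuming `periodProgress` is the elapsed fraction of the period as computed by `timeUtils.js` (the smart projection layers historical weighting on top of this):
```javascript
// Minimal sketch of the simple (linear) projection, not the smart projection.
function simpleProjection(revenueSoFar, periodProgress) {
  if (periodProgress <= 0) return null; // period has not started yet
  return revenueSoFar / Math.min(periodProgress, 1);
}
// e.g. $12,000 in revenue at 60% of the month elapsed -> projected ~ $20,000
```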
### Image URL Generation
- **Pattern**: `https://sbing.com/i/products/0000/{prefix}/{pid}-{type}-{iid}.jpg`
- **Prefix**: First 2 digits of product ID
- **Type**: "main" for primary images
- **Fallback**: Uses primary image (order=255) when available
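The URL construction above is simple string formatting; a minimal sketch (the prefix extraction is an assumption based on the stated pattern, and padding or edge cases may differ in the real code):
```javascript
// Builds e.g. https://sbing.com/i/products/0000/12/123456-main-789.jpg
function productImageUrl(pid, iid, type = 'main') {
  const prefix = String(pid).slice(0, 2); // first 2 digits of the product ID
  return `https://sbing.com/i/products/0000/${prefix}/${pid}-${type}-${iid}.jpg`;
}
```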
## Frontend Integration
### Service Layer (`services/acotService.js`)
- **Purpose**: Replaces direct Klaviyo API calls with acot-server calls
- **Methods**: `getStats()`, `getStatsDetails()`, `getProducts()`, `getProjection()`
- **Logging**: Axios interceptors for request/response logging
- **Environment**: Automatic URL handling (proxy in dev, direct in production)
### Component Updates
All 5 main components updated to use `acotService`:
- **StatCards.jsx**: Main dashboard statistics
- **MiniStatCards.jsx**: Compact statistics view
- **SalesChart.jsx**: Revenue and order trends
- **MiniSalesChart.jsx**: Compact chart view
- **ProductGrid.jsx**: Top products table
### Proxy Configuration (`vite.config.js`)
```javascript
'/api/acot': {
target: 'http://localhost:3007',
changeOrigin: true,
secure: false
}
```
## Key Features
### Complete Business Intelligence
- **Revenue Analytics**: Total revenue, trends, projections
- **Order Analysis**: Counts, types, status tracking
- **Product Performance**: Top sellers, revenue contribution
- **Shipping Intelligence**: Methods, locations, distribution
- **Customer Insights**: Order value ranges, patterns
- **Operational Metrics**: Refunds, cancellations, peak hours
### Performance Optimizations
- **Connection Pooling**: Efficient database connection management
- **Query Optimization**: Indexed queries with proper WHERE clauses
- **Caching Strategy**: Frontend caching for detail views
- **Batch Processing**: Efficient data aggregation
### Error Handling
- **Database Connectivity**: Graceful handling of connection issues
- **Query Failures**: Detailed error logging and user-friendly messages
- **Data Validation**: Input sanitization and validation
- **Fallback Mechanisms**: Default values for missing data
## Simplified Elements
Due to database complexity, some features are simplified:
- **Brands**: Shows "Various Brands" (the companies table structure is complex)
- **Categories**: Shows "General" (category relationships are complex)
These can be enhanced in future iterations with proper category mapping.
## Testing
Test the server functionality:
```bash
# Health check
curl http://localhost:3007/api/acot/test
# Today's stats
curl http://localhost:3007/api/acot/events/stats?timeRange=today
# Last 30 days with details
curl http://localhost:3007/api/acot/events/stats/details?timeRange=last30days&daily=true
# Top products
curl http://localhost:3007/api/acot/events/products?timeRange=thisWeek
# Revenue projection
curl http://localhost:3007/api/acot/events/projection?timeRange=today
```
## Development Notes
- **No Frontend Changes**: Complete drop-in replacement for Klaviyo
- **API Compatibility**: Maintains exact response structure
- **Business Logic**: Implements all complex e-commerce calculations
- **Scalability**: Designed for production workloads
- **Maintainability**: Well-documented code with clear separation of concerns
## Future Enhancements
- Enhanced category and brand mapping
- Real-time notifications for significant events
- Advanced analytics and forecasting
- Customer segmentation analysis
- Inventory integration

View File

@@ -0,0 +1,297 @@
const { Client } = require('ssh2');
const mysql = require('mysql2/promise');
const fs = require('fs');
// Connection pool configuration
const connectionPool = {
connections: [],
maxConnections: 20,
currentConnections: 0,
pendingRequests: [],
// Cache for query results (key: query string, value: {data, timestamp})
queryCache: new Map(),
// Cache duration for different query types in milliseconds
cacheDuration: {
'stats': 60 * 1000, // 1 minute for stats
'products': 5 * 60 * 1000, // 5 minutes for products
'orders': 60 * 1000, // 1 minute for orders
'default': 60 * 1000 // 1 minute default
},
// Circuit breaker state
circuitBreaker: {
failures: 0,
lastFailure: 0,
isOpen: false,
threshold: 5,
timeout: 30000 // 30 seconds
}
};
/**
* Get a database connection from the pool
* @returns {Promise<{connection: object, release: function}>} The database connection and release function
*/
async function getDbConnection() {
return new Promise(async (resolve, reject) => {
// Check circuit breaker
const now = Date.now();
if (connectionPool.circuitBreaker.isOpen) {
if (now - connectionPool.circuitBreaker.lastFailure > connectionPool.circuitBreaker.timeout) {
// Reset circuit breaker
connectionPool.circuitBreaker.isOpen = false;
connectionPool.circuitBreaker.failures = 0;
console.log('Circuit breaker reset');
} else {
reject(new Error('Circuit breaker is open - too many connection failures'));
return;
}
}
// Check if there's an available connection in the pool
if (connectionPool.connections.length > 0) {
const conn = connectionPool.connections.pop();
console.log(`Using pooled connection. Pool size: ${connectionPool.connections.length}`);
resolve({
connection: conn.connection,
release: () => releaseConnection(conn)
});
return;
}
// If we haven't reached max connections, create a new one
if (connectionPool.currentConnections < connectionPool.maxConnections) {
try {
console.log(`Creating new connection. Current: ${connectionPool.currentConnections}/${connectionPool.maxConnections}`);
connectionPool.currentConnections++;
const tunnel = await setupSshTunnel();
const { ssh, stream, dbConfig } = tunnel;
const connection = await mysql.createConnection({
...dbConfig,
stream
});
const conn = { ssh, connection, inUse: true, created: Date.now() };
console.log('Database connection established');
// Reset circuit breaker on successful connection
if (connectionPool.circuitBreaker.failures > 0) {
connectionPool.circuitBreaker.failures = 0;
connectionPool.circuitBreaker.isOpen = false;
}
resolve({
connection: conn.connection,
release: () => releaseConnection(conn)
});
} catch (error) {
connectionPool.currentConnections--;
// Track circuit breaker failures
connectionPool.circuitBreaker.failures++;
connectionPool.circuitBreaker.lastFailure = Date.now();
if (connectionPool.circuitBreaker.failures >= connectionPool.circuitBreaker.threshold) {
connectionPool.circuitBreaker.isOpen = true;
console.log(`Circuit breaker opened after ${connectionPool.circuitBreaker.failures} failures`);
}
reject(error);
}
return;
}
// Pool is full, queue the request with timeout
console.log('Connection pool full, queuing request...');
const timeoutId = setTimeout(() => {
// Remove from queue if still there
const index = connectionPool.pendingRequests.findIndex(req => req.resolve === resolve);
if (index !== -1) {
connectionPool.pendingRequests.splice(index, 1);
reject(new Error('Connection pool queue timeout after 15 seconds'));
}
}, 15000);
connectionPool.pendingRequests.push({
resolve,
reject,
timeoutId,
timestamp: Date.now()
});
});
}
/**
* Release a connection back to the pool
*/
function releaseConnection(conn) {
conn.inUse = false;
// Check if there are pending requests
if (connectionPool.pendingRequests.length > 0) {
const { resolve, timeoutId } = connectionPool.pendingRequests.shift();
// Clear the timeout since we're serving the request
if (timeoutId) {
clearTimeout(timeoutId);
}
conn.inUse = true;
console.log(`Serving queued request. Queue length: ${connectionPool.pendingRequests.length}`);
resolve({
connection: conn.connection,
release: () => releaseConnection(conn)
});
} else {
// Return to pool
connectionPool.connections.push(conn);
console.log(`Connection returned to pool. Pool size: ${connectionPool.connections.length}, Active: ${connectionPool.currentConnections}`);
}
}
/**
* Get cached query results or execute query if not cached
* @param {string} cacheKey - Unique key to identify the query
* @param {string} queryType - Type of query (stats, products, orders, etc.)
* @param {Function} queryFn - Function to execute if cache miss
* @returns {Promise<any>} The query result
*/
async function getCachedQuery(cacheKey, queryType, queryFn) {
// Get cache duration based on query type
const cacheDuration = connectionPool.cacheDuration[queryType] || connectionPool.cacheDuration.default;
// Check if we have a valid cached result
const cachedResult = connectionPool.queryCache.get(cacheKey);
const now = Date.now();
if (cachedResult && (now - cachedResult.timestamp < cacheDuration)) {
console.log(`Cache hit for ${queryType} query: ${cacheKey}`);
return cachedResult.data;
}
// No valid cache found, execute the query
console.log(`Cache miss for ${queryType} query: ${cacheKey}`);
const result = await queryFn();
// Cache the result
connectionPool.queryCache.set(cacheKey, {
data: result,
timestamp: now
});
return result;
}
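// Example usage (illustrative sketch; the cache key and SQL are hypothetical):
//   const stats = await getCachedQuery(`stats:${timeRange}`, 'stats', async () => {
//     const { connection, release } = await getDbConnection();
//     try {
//       const [rows] = await connection.execute('SELECT COUNT(*) AS orders FROM _order');
//       return rows[0];
//     } finally {
//       release();
//     }
//   });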
/**
* Setup SSH tunnel to production database
* @private - Should only be used by getDbConnection
* @returns {Promise<{ssh: object, stream: object, dbConfig: object}>}
*/
async function setupSshTunnel() {
const sshConfig = {
host: process.env.PROD_SSH_HOST,
port: process.env.PROD_SSH_PORT || 22,
username: process.env.PROD_SSH_USER,
privateKey: process.env.PROD_SSH_KEY_PATH
? fs.readFileSync(process.env.PROD_SSH_KEY_PATH)
: undefined,
compress: true
};
const dbConfig = {
host: process.env.PROD_DB_HOST || 'localhost',
user: process.env.PROD_DB_USER,
password: process.env.PROD_DB_PASSWORD,
database: process.env.PROD_DB_NAME,
port: process.env.PROD_DB_PORT || 3306,
timezone: 'Z'
};
return new Promise((resolve, reject) => {
const ssh = new Client();
ssh.on('error', (err) => {
console.error('SSH connection error:', err);
reject(err);
});
ssh.on('ready', () => {
ssh.forwardOut(
'127.0.0.1',
0,
dbConfig.host,
dbConfig.port,
(err, stream) => {
if (err) {
reject(err);
return; // avoid resolving after a forwarding error
}
resolve({ ssh, stream, dbConfig });
}
);
}).connect(sshConfig);
});
}
/**
* Clear cached query results
* @param {string} [cacheKey] - Specific cache key to clear (clears all if not provided)
*/
function clearQueryCache(cacheKey) {
if (cacheKey) {
connectionPool.queryCache.delete(cacheKey);
console.log(`Cleared cache for key: ${cacheKey}`);
} else {
connectionPool.queryCache.clear();
console.log('Cleared all query cache');
}
}
/**
* Force close all active connections
* Useful for server shutdown or manual connection reset
*/
async function closeAllConnections() {
// Close all pooled connections
for (const conn of connectionPool.connections) {
try {
await conn.connection.end();
conn.ssh.end();
console.log('Closed pooled connection');
} catch (error) {
console.error('Error closing pooled connection:', error);
}
}
// Reset pool state
connectionPool.connections = [];
connectionPool.currentConnections = 0;
connectionPool.pendingRequests = [];
connectionPool.queryCache.clear();
console.log('All connections closed and pool reset');
}
/**
* Get connection pool status for debugging
*/
function getPoolStatus() {
return {
poolSize: connectionPool.connections.length,
activeConnections: connectionPool.currentConnections,
maxConnections: connectionPool.maxConnections,
pendingRequests: connectionPool.pendingRequests.length,
cacheSize: connectionPool.queryCache.size,
queuedRequests: connectionPool.pendingRequests.map(req => ({
waitTime: Date.now() - req.timestamp,
hasTimeout: !!req.timeoutId
}))
};
}
module.exports = {
getDbConnection,
getCachedQuery,
clearQueryCache,
closeAllConnections,
getPoolStatus
};

File diff suppressed because it is too large

View File

@@ -0,0 +1,23 @@
{
"name": "acot-server",
"version": "1.0.0",
"description": "A Cherry On Top production database server",
"main": "server.js",
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js"
},
"dependencies": {
"express": "^4.18.2",
"cors": "^2.8.5",
"dotenv": "^16.3.1",
"morgan": "^1.10.0",
"ssh2": "^1.14.0",
"mysql2": "^3.6.5",
"compression": "^1.7.4",
"luxon": "^3.5.0"
},
"devDependencies": {
"nodemon": "^3.0.1"
}
}

View File

@@ -0,0 +1,575 @@
const express = require('express');
const { DateTime } = require('luxon');
const { getDbConnection } = require('../db/connection');
const router = express.Router();
const RANGE_BOUNDS = [
10, 20, 30, 40, 50, 60, 70, 80, 90,
100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200,
300, 400, 500, 1000, 1500, 2000
];
const FINAL_BUCKET_KEY = 'PLUS';
function buildRangeDefinitions() {
const ranges = [];
let previous = 0;
for (const bound of RANGE_BOUNDS) {
const label = `$${previous.toLocaleString()} - $${bound.toLocaleString()}`;
const key = bound.toString().padStart(5, '0');
ranges.push({
min: previous,
max: bound,
label,
key,
sort: bound
});
previous = bound;
}
// Remove the 2000+ category - all orders >2000 will go into the 2000 bucket
return ranges;
}
const RANGE_DEFINITIONS = buildRangeDefinitions();
const BUCKET_CASE = (() => {
const parts = [];
for (let i = 0; i < RANGE_BOUNDS.length; i++) {
const bound = RANGE_BOUNDS[i];
const key = bound.toString().padStart(5, '0');
if (i === RANGE_BOUNDS.length - 1) {
// For the last bucket (2000), the ELSE catches every order above the previous bound (> 1,500)
parts.push(`ELSE '${key}'`);
} else {
parts.push(`WHEN o.summary_subtotal <= ${bound} THEN '${key}'`);
}
}
return `CASE\n ${parts.join('\n ')}\n END`;
})();
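// The generated expression looks roughly like:
//   CASE
//     WHEN o.summary_subtotal <= 10 THEN '00010'
//     WHEN o.summary_subtotal <= 20 THEN '00020'
//     ...
//     WHEN o.summary_subtotal <= 1500 THEN '01500'
//     ELSE '02000'
//   END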
const DEFAULT_POINT_DOLLAR_VALUE = 0.005; // 1000 points = $5, so 200 points = $1
const DEFAULTS = {
merchantFeePercent: 2.9,
fixedCostPerOrder: 1.5,
pointsPerDollar: 0,
pointsRedemptionRate: 0, // Will be calculated from actual data
pointDollarValue: DEFAULT_POINT_DOLLAR_VALUE,
};
function parseDate(value, fallback) {
if (!value) {
return fallback;
}
const parsed = DateTime.fromISO(value);
if (!parsed.isValid) {
return fallback;
}
return parsed;
}
function formatDateForSql(dt) {
return dt.toFormat('yyyy-LL-dd HH:mm:ss');
}
function getMidpoint(range) {
if (range.max == null) {
return range.min + 200; // Rough estimate for 2000+
}
return (range.min + range.max) / 2;
}
router.get('/promos', async (req, res) => {
let connection;
try {
const { connection: conn, release } = await getDbConnection();
connection = conn;
const releaseConnection = release;
const { startDate, endDate } = req.query || {};
const now = DateTime.now().endOf('day');
const defaultStart = now.minus({ years: 3 }).startOf('day');
const parsedStart = startDate ? parseDate(startDate, defaultStart).startOf('day') : defaultStart;
const parsedEnd = endDate ? parseDate(endDate, now).endOf('day') : now;
const rangeStart = parsedStart <= parsedEnd ? parsedStart : parsedEnd;
const rangeEnd = parsedEnd >= parsedStart ? parsedEnd : parsedStart;
const rangeStartSql = formatDateForSql(rangeStart);
const rangeEndSql = formatDateForSql(rangeEnd);
const sql = `
SELECT
p.promo_id AS id,
p.promo_code AS code,
p.promo_description_online AS description_online,
p.promo_description_private AS description_private,
p.date_start,
p.date_end,
COALESCE(u.usage_count, 0) AS usage_count
FROM promos p
LEFT JOIN (
SELECT
discount_code,
COUNT(DISTINCT order_id) AS usage_count
FROM order_discounts
WHERE discount_type = 10 AND discount_active = 1
GROUP BY discount_code
) u ON u.discount_code = p.promo_id
WHERE p.date_start IS NOT NULL
AND p.date_end IS NOT NULL
AND NOT (p.date_end < ? OR p.date_start > ?)
AND p.store = 1
AND p.date_start >= '2010-01-01'
ORDER BY p.promo_id DESC
LIMIT 200
`;
const [rows] = await connection.execute(sql, [rangeStartSql, rangeEndSql]);
releaseConnection();
const promos = rows.map(row => ({
id: Number(row.id),
code: row.code,
description: row.description_online || row.description_private || '',
privateDescription: row.description_private || '',
promo_description_online: row.description_online || '',
promo_description_private: row.description_private || '',
dateStart: row.date_start,
dateEnd: row.date_end,
usageCount: Number(row.usage_count || 0)
}));
res.json({ promos });
} catch (error) {
if (connection) {
try {
connection.destroy();
} catch (destroyError) {
console.error('Failed to destroy connection after error:', destroyError);
}
}
console.error('Error fetching promos:', error);
res.status(500).json({ error: 'Failed to fetch promos' });
}
});
router.post('/simulate', async (req, res) => {
const {
dateRange = {},
filters = {},
productPromo = {},
shippingPromo = {},
shippingTiers = [],
surcharges = [],
merchantFeePercent,
fixedCostPerOrder,
cogsCalculationMode = 'actual',
pointsConfig = {}
} = req.body || {};
const endDefault = DateTime.now();
const startDefault = endDefault.minus({ months: 6 });
const startDt = parseDate(dateRange.start, startDefault).startOf('day');
const endDt = parseDate(dateRange.end, endDefault).endOf('day');
const shipCountry = filters.shipCountry || 'US';
const rawPromoFilters = [
...(Array.isArray(filters.promoIds) ? filters.promoIds : []),
...(Array.isArray(filters.promoCodes) ? filters.promoCodes : []),
];
const promoCodes = Array.from(
new Set(
rawPromoFilters
.map((value) => {
if (typeof value === 'string') {
return value.trim();
}
if (typeof value === 'number') {
return String(value);
}
return '';
})
.filter((value) => value.length > 0)
)
);
const config = {
merchantFeePercent: typeof merchantFeePercent === 'number' ? merchantFeePercent : DEFAULTS.merchantFeePercent,
fixedCostPerOrder: typeof fixedCostPerOrder === 'number' ? fixedCostPerOrder : DEFAULTS.fixedCostPerOrder,
productPromo: {
type: productPromo.type || 'none',
value: Number(productPromo.value || 0),
minSubtotal: Number(productPromo.minSubtotal || 0)
},
shippingPromo: {
type: shippingPromo.type || 'none',
value: Number(shippingPromo.value || 0),
minSubtotal: Number(shippingPromo.minSubtotal || 0),
maxDiscount: Number(shippingPromo.maxDiscount || 0)
},
shippingTiers: Array.isArray(shippingTiers)
? shippingTiers
.map(tier => ({
threshold: Number(tier.threshold || 0),
mode: tier.mode === 'percentage' || tier.mode === 'flat' ? tier.mode : 'percentage',
value: Number(tier.value || 0)
}))
.filter(tier => tier.threshold >= 0 && tier.value >= 0)
.sort((a, b) => a.threshold - b.threshold)
: [],
surcharges: Array.isArray(surcharges)
? surcharges
.map(s => ({
threshold: Number(s.threshold || 0),
maxThreshold: typeof s.maxThreshold === 'number' && s.maxThreshold > 0 ? s.maxThreshold : null,
target: s.target === 'shipping' || s.target === 'order' ? s.target : 'shipping',
amount: Number(s.amount || 0)
}))
.filter(s => s.threshold >= 0 && s.amount >= 0)
.sort((a, b) => a.threshold - b.threshold)
: [],
points: {
pointsPerDollar: typeof pointsConfig.pointsPerDollar === 'number' ? pointsConfig.pointsPerDollar : null,
redemptionRate: typeof pointsConfig.redemptionRate === 'number' ? pointsConfig.redemptionRate : null,
pointDollarValue: typeof pointsConfig.pointDollarValue === 'number'
? pointsConfig.pointDollarValue
: DEFAULT_POINT_DOLLAR_VALUE
}
};
let connection;
let release;
try {
const dbConn = await getDbConnection();
connection = dbConn.connection;
release = dbConn.release;
const filteredOrdersParams = [
shipCountry,
formatDateForSql(startDt),
formatDateForSql(endDt)
];
const promoJoin = promoCodes.length > 0
? 'JOIN order_discounts od ON od.order_id = o.order_id AND od.discount_active = 1 AND od.discount_type = 10'
: '';
let promoFilterClause = '';
if (promoCodes.length > 0) {
const placeholders = promoCodes.map(() => '?').join(',');
promoFilterClause = `AND od.discount_code IN (${placeholders})`;
filteredOrdersParams.push(...promoCodes);
}
const filteredOrdersQuery = `
SELECT
o.order_id,
o.order_cid,
o.summary_subtotal,
o.summary_discount_subtotal,
o.summary_shipping,
o.ship_method_rate,
o.ship_method_cost,
o.summary_points,
${BUCKET_CASE} AS bucket_key
FROM _order o
${promoJoin}
WHERE o.summary_shipping > 0
AND o.summary_total > 0
AND o.order_status NOT IN (15)
AND o.ship_method_selected <> 'holdit'
AND o.ship_country = ?
AND o.date_placed BETWEEN ? AND ?
${promoFilterClause}
`;
const bucketParams = [
...filteredOrdersParams,
formatDateForSql(startDt),
formatDateForSql(endDt)
];
const bucketQuery = `
SELECT
f.bucket_key,
COUNT(*) AS order_count,
SUM(f.summary_subtotal) AS subtotal_sum,
SUM(f.summary_discount_subtotal) AS product_discount_sum,
SUM(f.summary_subtotal + f.summary_discount_subtotal) AS regular_subtotal_sum,
SUM(f.ship_method_rate) AS ship_rate_sum,
SUM(f.ship_method_cost) AS ship_cost_sum,
SUM(f.summary_points) AS points_awarded_sum,
SUM(COALESCE(p.points_redeemed, 0)) AS points_redeemed_sum,
SUM(COALESCE(c.total_cogs, 0)) AS cogs_sum,
AVG(f.summary_subtotal) AS avg_subtotal,
AVG(f.summary_discount_subtotal) AS avg_product_discount,
AVG(f.ship_method_rate) AS avg_ship_rate,
AVG(f.ship_method_cost) AS avg_ship_cost,
AVG(COALESCE(c.total_cogs, 0)) AS avg_cogs
FROM (
${filteredOrdersQuery}
) AS f
LEFT JOIN (
SELECT order_id, SUM(cogs_amount) AS total_cogs
FROM report_sales_data
WHERE action IN (1,2,3)
AND date_change BETWEEN ? AND ?
GROUP BY order_id
) AS c ON c.order_id = f.order_id
LEFT JOIN (
SELECT order_id, SUM(discount_amount) AS points_redeemed
FROM order_discounts
WHERE discount_type = 20 AND discount_active = 1
GROUP BY order_id
) AS p ON p.order_id = f.order_id
GROUP BY f.bucket_key
`;
const [rows] = await connection.execute(bucketQuery, bucketParams);
const totals = {
orders: 0,
subtotal: 0,
productDiscount: 0,
regularSubtotal: 0,
shipRate: 0,
shipCost: 0,
cogs: 0,
pointsAwarded: 0,
pointsRedeemed: 0
};
const rowMap = new Map();
for (const row of rows) {
const key = row.bucket_key || FINAL_BUCKET_KEY;
const parsed = {
orderCount: Number(row.order_count || 0),
subtotalSum: Number(row.subtotal_sum || 0),
productDiscountSum: Number(row.product_discount_sum || 0),
regularSubtotalSum: Number(row.regular_subtotal_sum || 0),
shipRateSum: Number(row.ship_rate_sum || 0),
shipCostSum: Number(row.ship_cost_sum || 0),
pointsAwardedSum: Number(row.points_awarded_sum || 0),
pointsRedeemedSum: Number(row.points_redeemed_sum || 0),
cogsSum: Number(row.cogs_sum || 0),
avgSubtotal: Number(row.avg_subtotal || 0),
avgProductDiscount: Number(row.avg_product_discount || 0),
avgShipRate: Number(row.avg_ship_rate || 0),
avgShipCost: Number(row.avg_ship_cost || 0),
avgCogs: Number(row.avg_cogs || 0)
};
rowMap.set(key, parsed);
totals.orders += parsed.orderCount;
totals.subtotal += parsed.subtotalSum;
totals.productDiscount += parsed.productDiscountSum;
totals.regularSubtotal += parsed.regularSubtotalSum;
totals.shipRate += parsed.shipRateSum;
totals.shipCost += parsed.shipCostSum;
totals.cogs += parsed.cogsSum;
totals.pointsAwarded += parsed.pointsAwardedSum;
totals.pointsRedeemed += parsed.pointsRedeemedSum;
}
const productDiscountRate = totals.regularSubtotal > 0
? totals.productDiscount / totals.regularSubtotal
: 0;
const pointsPerDollar = config.points.pointsPerDollar != null
? config.points.pointsPerDollar
: totals.subtotal > 0
? totals.pointsAwarded / totals.subtotal
: 0;
const pointDollarValue = config.points.pointDollarValue || DEFAULT_POINT_DOLLAR_VALUE;
// Calculate redemption rate using dollars redeemed from the matched order set
let calculatedRedemptionRate = 0;
if (config.points.redemptionRate != null) {
calculatedRedemptionRate = config.points.redemptionRate;
} else if (totals.pointsAwarded > 0 && pointDollarValue > 0) {
const totalRedeemedPoints = totals.pointsRedeemed / pointDollarValue;
if (totalRedeemedPoints > 0) {
calculatedRedemptionRate = Math.min(1, totalRedeemedPoints / totals.pointsAwarded);
}
}
const redemptionRate = calculatedRedemptionRate;
// Calculate overall average COGS percentage for 'average' mode
let overallCogsPercentage = 0;
if (cogsCalculationMode === 'average' && totals.subtotal > 0) {
overallCogsPercentage = totals.cogs / totals.subtotal;
}
const bucketResults = [];
let weightedProfitAmount = 0;
let weightedProfitPercent = 0;
for (const range of RANGE_DEFINITIONS) {
const data = rowMap.get(range.key) || {
orderCount: 0,
avgSubtotal: 0,
avgShipRate: 0,
avgShipCost: 0,
avgCogs: 0
};
const orderValue = data.avgSubtotal > 0 ? data.avgSubtotal : getMidpoint(range);
const shippingChargeBase = data.avgShipCost > 0 ? data.avgShipCost : 0;
const actualShippingCost = data.avgShipCost > 0 ? data.avgShipCost : 0;
// Calculate COGS based on the selected mode
let productCogs;
if (cogsCalculationMode === 'average') {
// Use overall average COGS percentage applied to this bucket's order value
productCogs = orderValue * overallCogsPercentage;
} else {
// Use actual COGS data from this bucket (existing behavior)
productCogs = data.avgCogs > 0 ? data.avgCogs : 0;
}
const productDiscountAmount = orderValue * productDiscountRate;
const effectiveRegularPrice = productDiscountRate < 0.99
? orderValue / (1 - productDiscountRate)
: orderValue;
let promoProductDiscount = 0;
if (config.productPromo.type === 'percentage_subtotal' && orderValue >= config.productPromo.minSubtotal) {
promoProductDiscount = Math.min(orderValue, (config.productPromo.value / 100) * orderValue);
} else if (config.productPromo.type === 'percentage_regular' && orderValue >= config.productPromo.minSubtotal) {
const targetRate = config.productPromo.value / 100;
const additionalRate = Math.max(0, targetRate - productDiscountRate);
promoProductDiscount = Math.min(orderValue, additionalRate * effectiveRegularPrice);
} else if (config.productPromo.type === 'fixed_amount' && orderValue >= config.productPromo.minSubtotal) {
promoProductDiscount = Math.min(orderValue, config.productPromo.value);
}
let shippingAfterAuto = shippingChargeBase;
for (const tier of config.shippingTiers) {
if (orderValue >= tier.threshold) {
if (tier.mode === 'percentage') {
shippingAfterAuto = shippingChargeBase * Math.max(0, 1 - tier.value / 100);
} else if (tier.mode === 'flat') {
shippingAfterAuto = tier.value;
}
}
}
let shipPromoDiscount = 0;
if (config.shippingPromo.type !== 'none' && orderValue >= config.shippingPromo.minSubtotal) {
if (config.shippingPromo.type === 'percentage') {
shipPromoDiscount = shippingAfterAuto * (config.shippingPromo.value / 100);
} else if (config.shippingPromo.type === 'fixed') {
shipPromoDiscount = config.shippingPromo.value;
}
if (config.shippingPromo.maxDiscount > 0) {
shipPromoDiscount = Math.min(shipPromoDiscount, config.shippingPromo.maxDiscount);
}
shipPromoDiscount = Math.min(shipPromoDiscount, shippingAfterAuto);
}
// Calculate surcharges
let shippingSurcharge = 0;
let orderSurcharge = 0;
for (const surcharge of config.surcharges) {
const meetsMin = orderValue >= surcharge.threshold;
const meetsMax = surcharge.maxThreshold == null || orderValue < surcharge.maxThreshold;
if (meetsMin && meetsMax) {
if (surcharge.target === 'shipping') {
shippingSurcharge += surcharge.amount;
} else if (surcharge.target === 'order') {
orderSurcharge += surcharge.amount;
}
}
}
const customerShipCost = Math.max(0, shippingAfterAuto - shipPromoDiscount + shippingSurcharge);
const customerItemCost = Math.max(0, orderValue - promoProductDiscount + orderSurcharge);
const totalRevenue = customerItemCost + customerShipCost;
const merchantFees = totalRevenue * (config.merchantFeePercent / 100);
const pointsCost = customerItemCost * pointsPerDollar * redemptionRate * pointDollarValue;
const fixedCosts = config.fixedCostPerOrder;
const totalCosts = productCogs + actualShippingCost + merchantFees + pointsCost + fixedCosts;
const profit = totalRevenue - totalCosts;
const profitPercent = totalRevenue > 0 ? (profit / totalRevenue) : 0;
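// Summary of the bucket economics computed above (per average order in this bucket):
//   revenue = discounted items (+ order surcharge) + discounted shipping (+ shipping surcharge)
//   costs   = product COGS + actual shipping cost + merchant fees + points cost + fixed cost
//   profit% = profit / revenue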
const weight = totals.orders > 0 ? (data.orderCount || 0) / totals.orders : 0;
weightedProfitAmount += profit * weight;
weightedProfitPercent += profitPercent * weight;
bucketResults.push({
key: range.key,
label: range.label,
min: range.min,
max: range.max,
orderCount: data.orderCount || 0,
weight,
orderValue,
productDiscountAmount,
promoProductDiscount,
customerItemCost,
shippingChargeBase,
shippingAfterAuto,
shipPromoDiscount,
shippingSurcharge,
orderSurcharge,
customerShipCost,
actualShippingCost,
totalRevenue,
productCogs,
merchantFees,
pointsCost,
fixedCosts,
totalCosts,
profit,
profitPercent
});
}
if (release) {
release();
}
res.json({
dateRange: {
start: startDt.toISO(),
end: endDt.toISO()
},
totals: {
orders: totals.orders,
subtotal: totals.subtotal,
productDiscountRate,
pointsPerDollar,
redemptionRate,
pointDollarValue,
weightedProfitAmount,
weightedProfitPercent,
overallCogsPercentage: cogsCalculationMode === 'average' ? overallCogsPercentage : undefined
},
buckets: bucketResults
});
} catch (error) {
if (release) {
try {
release();
} catch (releaseError) {
console.error('Failed to release connection after error:', releaseError);
}
} else if (connection) {
try {
connection.destroy();
} catch (destroyError) {
console.error('Failed to destroy connection after error:', destroyError);
}
}
console.error('Error running discount simulation:', error);
res.status(500).json({ error: 'Failed to run discount simulation' });
}
});
module.exports = router;

View File

@@ -0,0 +1,683 @@
const express = require('express');
const { DateTime } = require('luxon');
const router = express.Router();
const { getDbConnection, getPoolStatus } = require('../db/connection');
const {
getTimeRangeConditions,
_internal: timeHelpers
} = require('../utils/timeUtils');
const TIMEZONE = 'America/New_York';
// Punch types from the database
const PUNCH_TYPES = {
OUT: 0,
IN: 1,
BREAK_START: 2,
BREAK_END: 3,
};
// Standard hours for FTE calculation (40 hours per week)
const STANDARD_WEEKLY_HOURS = 40;
/**
* Calculate working hours from timeclock entries
* Groups punches by employee and date, pairs in/out punches
* Returns both total hours (with breaks, for FTE) and productive hours (without breaks, for productivity)
*/
function calculateHoursFromPunches(punches) {
// Group by employee
const byEmployee = new Map();
punches.forEach(punch => {
if (!byEmployee.has(punch.EmployeeID)) {
byEmployee.set(punch.EmployeeID, []);
}
byEmployee.get(punch.EmployeeID).push(punch);
});
const employeeHours = [];
let totalHours = 0;
let totalBreakHours = 0;
byEmployee.forEach((employeePunches, employeeId) => {
// Sort by timestamp
employeePunches.sort((a, b) => new Date(a.TimeStamp) - new Date(b.TimeStamp));
let hours = 0;
let breakHours = 0;
let currentIn = null;
let breakStart = null;
employeePunches.forEach(punch => {
const punchTime = new Date(punch.TimeStamp);
switch (punch.PunchType) {
case PUNCH_TYPES.IN:
currentIn = punchTime;
break;
case PUNCH_TYPES.OUT:
if (currentIn) {
hours += (punchTime - currentIn) / (1000 * 60 * 60); // Convert ms to hours
currentIn = null;
}
break;
case PUNCH_TYPES.BREAK_START:
breakStart = punchTime;
break;
case PUNCH_TYPES.BREAK_END:
if (breakStart) {
breakHours += (punchTime - breakStart) / (1000 * 60 * 60);
breakStart = null;
}
break;
}
});
totalHours += hours;
totalBreakHours += breakHours;
employeeHours.push({
employeeId,
hours,
breakHours,
productiveHours: hours - breakHours,
});
});
return {
employeeHours,
totalHours,
totalBreakHours,
totalProductiveHours: totalHours - totalBreakHours
};
}
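// Example pairing: IN 9:00, BREAK_START 12:00, BREAK_END 12:30, OUT 17:00
//   -> hours = 8.0, breakHours = 0.5, productiveHours = 7.5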
/**
* Calculate FTE (Full Time Equivalents) for a period
* @param {number} totalHours - Total hours worked
* @param {Date} startDate - Period start
* @param {Date} endDate - Period end
*/
function calculateFTE(totalHours, startDate, endDate) {
const start = new Date(startDate);
const end = new Date(endDate);
const days = Math.max(1, (end - start) / (1000 * 60 * 60 * 24));
const weeks = days / 7;
const expectedHours = weeks * STANDARD_WEEKLY_HOURS;
return expectedHours > 0 ? totalHours / expectedHours : 0;
}
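// Worked example: 800 hours logged over a 28-day period
//   weeks = 28 / 7 = 4, expectedHours = 4 * 40 = 160, FTE = 800 / 160 = 5.0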
// Main employee metrics endpoint
router.get('/', async (req, res) => {
const startTime = Date.now();
console.log(`[EMPLOYEE-METRICS] Starting request for timeRange: ${req.query.timeRange}`);
const timeoutPromise = new Promise((_, reject) => {
setTimeout(() => reject(new Error('Request timeout after 30 seconds')), 30000);
});
try {
const mainOperation = async () => {
const { timeRange, startDate, endDate } = req.query;
console.log(`[EMPLOYEE-METRICS] Getting DB connection...`);
const { connection, release } = await getDbConnection();
console.log(`[EMPLOYEE-METRICS] DB connection obtained in ${Date.now() - startTime}ms`);
const { whereClause, params, dateRange } = getTimeRangeConditions(timeRange, startDate, endDate);
// Adapt where clause for timeclock table (uses TimeStamp instead of date_placed)
const timeclockWhere = whereClause.replace(/date_placed/g, 'tc.TimeStamp');
// Query for timeclock data with employee names
const timeclockQuery = `
SELECT
tc.EmployeeID,
tc.TimeStamp,
tc.PunchType,
e.firstname,
e.lastname
FROM timeclock tc
LEFT JOIN employees e ON tc.EmployeeID = e.employeeid
WHERE ${timeclockWhere}
AND e.hidden = 0
AND e.disabled = 0
ORDER BY tc.EmployeeID, tc.TimeStamp
`;
const [timeclockRows] = await connection.execute(timeclockQuery, params);
// Calculate hours (includes both total hours for FTE and productive hours for productivity)
const { employeeHours, totalHours, totalBreakHours, totalProductiveHours } = calculateHoursFromPunches(timeclockRows);
// Get employee names for the results
const employeeNames = new Map();
timeclockRows.forEach(row => {
if (!employeeNames.has(row.EmployeeID)) {
employeeNames.set(row.EmployeeID, {
firstname: row.firstname || '',
lastname: row.lastname || '',
});
}
});
// Enrich employee hours with names
const enrichedEmployeeHours = employeeHours.map(eh => ({
...eh,
name: employeeNames.has(eh.employeeId)
? `${employeeNames.get(eh.employeeId).firstname} ${employeeNames.get(eh.employeeId).lastname}`.trim()
: `Employee ${eh.employeeId}`,
})).sort((a, b) => b.hours - a.hours);
// Query for picking tickets - using subquery to avoid duplication from bucket join
// Ship-together orders: only count main orders (is_sub = 0 or NULL), not sub-orders
const pickingWhere = whereClause.replace(/date_placed/g, 'pt.createddate');
// First get picking ticket stats without the bucket join (to avoid duplication)
const pickingStatsQuery = `
SELECT
pt.createdby as employeeId,
e.firstname,
e.lastname,
COUNT(DISTINCT pt.pickingid) as ticketCount,
SUM(pt.totalpieces_picked) as piecesPicked,
SUM(TIMESTAMPDIFF(SECOND, pt.createddate, pt.closeddate)) as pickingTimeSeconds,
AVG(NULLIF(pt.picking_speed, 0)) as avgPickingSpeed
FROM picking_ticket pt
LEFT JOIN employees e ON pt.createdby = e.employeeid
WHERE ${pickingWhere}
AND pt.closeddate IS NOT NULL
GROUP BY pt.createdby, e.firstname, e.lastname
`;
// Separate query for order counts (needs bucket join for ship-together handling)
const orderCountQuery = `
SELECT
pt.createdby as employeeId,
COUNT(DISTINCT CASE WHEN ptb.is_sub = 0 OR ptb.is_sub IS NULL THEN ptb.orderid END) as ordersPicked
FROM picking_ticket pt
LEFT JOIN picking_ticket_buckets ptb ON pt.pickingid = ptb.pickingid
WHERE ${pickingWhere}
AND pt.closeddate IS NOT NULL
GROUP BY pt.createdby
`;
const [[pickingStatsRows], [orderCountRows]] = await Promise.all([
connection.execute(pickingStatsQuery, params),
connection.execute(orderCountQuery, params)
]);
// Merge the results
const orderCountMap = new Map();
orderCountRows.forEach(row => {
orderCountMap.set(row.employeeId, parseInt(row.ordersPicked || 0));
});
// Aggregate picking totals
let totalOrdersPicked = 0;
let totalPiecesPicked = 0;
let totalTickets = 0;
let totalPickingTimeSeconds = 0;
let pickingSpeedSum = 0;
let pickingSpeedCount = 0;
const pickingByEmployee = pickingStatsRows.map(row => {
const ordersPicked = orderCountMap.get(row.employeeId) || 0;
totalOrdersPicked += ordersPicked;
totalPiecesPicked += parseInt(row.piecesPicked || 0);
totalTickets += parseInt(row.ticketCount || 0);
totalPickingTimeSeconds += parseInt(row.pickingTimeSeconds || 0);
if (row.avgPickingSpeed && row.avgPickingSpeed > 0) {
pickingSpeedSum += parseFloat(row.avgPickingSpeed);
pickingSpeedCount++;
}
const empPickingHours = parseInt(row.pickingTimeSeconds || 0) / 3600;
return {
employeeId: row.employeeId,
name: `${row.firstname || ''} ${row.lastname || ''}`.trim() || `Employee ${row.employeeId}`,
ticketCount: parseInt(row.ticketCount || 0),
ordersPicked,
piecesPicked: parseInt(row.piecesPicked || 0),
pickingHours: empPickingHours,
avgPickingSpeed: row.avgPickingSpeed ? parseFloat(row.avgPickingSpeed) : null,
};
});
const totalPickingHours = totalPickingTimeSeconds / 3600;
const avgPickingSpeed = pickingSpeedCount > 0 ? pickingSpeedSum / pickingSpeedCount : 0;
// Query for shipped orders - totals
// Ship-together orders: only count main orders (order_type != 8 for sub-orders, or use parent tracking)
const shippingWhere = whereClause.replace(/date_placed/g, 'o.date_shipped');
const shippingQuery = `
SELECT
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
WHERE ${shippingWhere}
AND o.order_status IN (100, 92)
`;
const [shippingRows] = await connection.execute(shippingQuery, params);
const shipping = shippingRows[0] || { ordersShipped: 0, piecesShipped: 0 };
// Query for shipped orders by employee
const shippingByEmployeeQuery = `
SELECT
e.employeeid,
e.firstname,
e.lastname,
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
JOIN employees e ON o.stats_cid_shipped = e.cid
WHERE ${shippingWhere}
AND o.order_status IN (100, 92)
AND e.hidden = 0
AND e.disabled = 0
GROUP BY e.employeeid, e.firstname, e.lastname
ORDER BY ordersShipped DESC
`;
const [shippingByEmployeeRows] = await connection.execute(shippingByEmployeeQuery, params);
const shippingByEmployee = shippingByEmployeeRows.map(row => ({
employeeId: row.employeeid,
name: `${row.firstname || ''} ${row.lastname || ''}`.trim() || `Employee ${row.employeeid}`,
ordersShipped: parseInt(row.ordersShipped || 0),
piecesShipped: parseInt(row.piecesShipped || 0),
}));
// Calculate period dates for FTE calculation
let periodStart, periodEnd;
if (dateRange?.start) {
periodStart = new Date(dateRange.start);
} else if (params[0]) {
periodStart = new Date(params[0]);
} else {
periodStart = new Date();
periodStart.setDate(periodStart.getDate() - 30);
}
if (dateRange?.end) {
periodEnd = new Date(dateRange.end);
} else if (params[1]) {
periodEnd = new Date(params[1]);
} else {
periodEnd = new Date();
}
const fte = calculateFTE(totalHours, periodStart, periodEnd);
const activeEmployees = enrichedEmployeeHours.filter(e => e.hours > 0).length;
// Calculate weeks in period for weekly averages
const periodDays = Math.max(1, (periodEnd - periodStart) / (1000 * 60 * 60 * 24));
const weeksInPeriod = periodDays / 7;
// Get daily trend data for hours
// Use DATE_FORMAT to get date string in Eastern timezone, avoiding JS timezone conversion issues
// Business day starts at 1 AM, so subtract 1 hour before taking the date
const trendWhere = whereClause.replace(/date_placed/g, 'tc.TimeStamp');
const trendQuery = `
SELECT
DATE_FORMAT(DATE_SUB(tc.TimeStamp, INTERVAL 1 HOUR), '%Y-%m-%d') as date,
tc.EmployeeID,
tc.TimeStamp,
tc.PunchType
FROM timeclock tc
LEFT JOIN employees e ON tc.EmployeeID = e.employeeid
WHERE ${trendWhere}
AND e.hidden = 0
AND e.disabled = 0
ORDER BY date, tc.EmployeeID, tc.TimeStamp
`;
const [trendRows] = await connection.execute(trendQuery, params);
// Get daily picking data for trend
// Ship-together orders: only count main orders (is_sub = 0 or NULL)
// Use DATE_FORMAT for consistent date string format
const pickingTrendWhere = whereClause.replace(/date_placed/g, 'pt.createddate');
const pickingTrendQuery = `
SELECT
DATE_FORMAT(DATE_SUB(pt.createddate, INTERVAL 1 HOUR), '%Y-%m-%d') as date,
COUNT(DISTINCT CASE WHEN ptb.is_sub = 0 OR ptb.is_sub IS NULL THEN ptb.orderid END) as ordersPicked,
COALESCE(SUM(pt.totalpieces_picked), 0) as piecesPicked
FROM picking_ticket pt
LEFT JOIN picking_ticket_buckets ptb ON pt.pickingid = ptb.pickingid
WHERE ${pickingTrendWhere}
AND pt.closeddate IS NOT NULL
GROUP BY DATE_FORMAT(DATE_SUB(pt.createddate, INTERVAL 1 HOUR), '%Y-%m-%d')
ORDER BY date
`;
const [pickingTrendRows] = await connection.execute(pickingTrendQuery, params);
// Create a map of picking data by date
const pickingByDate = new Map();
pickingTrendRows.forEach(row => {
// Date is already a string in YYYY-MM-DD format from DATE_FORMAT
const date = String(row.date);
pickingByDate.set(date, {
ordersPicked: parseInt(row.ordersPicked || 0),
piecesPicked: parseInt(row.piecesPicked || 0),
});
});
// Group timeclock by date for trend
const byDate = new Map();
trendRows.forEach(row => {
// Date is already a string in YYYY-MM-DD format from DATE_FORMAT
const date = String(row.date);
if (!byDate.has(date)) {
byDate.set(date, []);
}
byDate.get(date).push(row);
});
// Generate all dates in the period range for complete trend data
const allDatesInRange = [];
const startDt = DateTime.fromJSDate(periodStart).setZone(TIMEZONE).startOf('day');
const endDt = DateTime.fromJSDate(periodEnd).setZone(TIMEZONE).startOf('day');
let currentDt = startDt;
while (currentDt <= endDt) {
allDatesInRange.push(currentDt.toFormat('yyyy-MM-dd'));
currentDt = currentDt.plus({ days: 1 });
}
// Build trend data for all dates in range, filling zeros for missing days
const trend = allDatesInRange.map(date => {
const punches = byDate.get(date) || [];
const { totalHours: dayHours, employeeHours: dayEmployeeHours } = calculateHoursFromPunches(punches);
const picking = pickingByDate.get(date) || { ordersPicked: 0, piecesPicked: 0 };
// Parse date string in Eastern timezone to get proper ISO timestamp
const dateDt = DateTime.fromFormat(date, 'yyyy-MM-dd', { zone: TIMEZONE });
return {
date,
timestamp: dateDt.toISO(),
hours: dayHours,
activeEmployees: dayEmployeeHours.filter(e => e.hours > 0).length,
ordersPicked: picking.ordersPicked,
piecesPicked: picking.piecesPicked,
};
});
// Get previous period data for comparison
const previousRange = getPreviousPeriodRange(timeRange, startDate, endDate);
let comparison = null;
let previousTotals = null;
if (previousRange) {
const prevTimeclockWhere = previousRange.whereClause.replace(/date_placed/g, 'tc.TimeStamp');
const [prevTimeclockRows] = await connection.execute(
`SELECT tc.EmployeeID, tc.TimeStamp, tc.PunchType
FROM timeclock tc
LEFT JOIN employees e ON tc.EmployeeID = e.employeeid
WHERE ${prevTimeclockWhere}
AND e.hidden = 0
AND e.disabled = 0
ORDER BY tc.EmployeeID, tc.TimeStamp`,
previousRange.params
);
const {
totalHours: prevTotalHours,
totalProductiveHours: prevProductiveHours,
employeeHours: prevEmployeeHours
} = calculateHoursFromPunches(prevTimeclockRows);
const prevActiveEmployees = prevEmployeeHours.filter(e => e.hours > 0).length;
// Previous picking data (ship-together fix applied)
// Use separate queries to avoid duplication from bucket join
const prevPickingWhere = previousRange.whereClause.replace(/date_placed/g, 'pt.createddate');
const [[prevPickingStatsRows], [prevOrderCountRows]] = await Promise.all([
connection.execute(
`SELECT
SUM(pt.totalpieces_picked) as piecesPicked,
SUM(TIMESTAMPDIFF(SECOND, pt.createddate, pt.closeddate)) as pickingTimeSeconds
FROM picking_ticket pt
WHERE ${prevPickingWhere}
AND pt.closeddate IS NOT NULL`,
previousRange.params
),
connection.execute(
`SELECT
COUNT(DISTINCT CASE WHEN ptb.is_sub = 0 OR ptb.is_sub IS NULL THEN ptb.orderid END) as ordersPicked
FROM picking_ticket pt
LEFT JOIN picking_ticket_buckets ptb ON pt.pickingid = ptb.pickingid
WHERE ${prevPickingWhere}
AND pt.closeddate IS NOT NULL`,
previousRange.params
)
]);
const prevPickingStats = prevPickingStatsRows[0] || { piecesPicked: 0, pickingTimeSeconds: 0 };
const prevOrderCount = prevOrderCountRows[0] || { ordersPicked: 0 };
const prevPicking = {
ordersPicked: parseInt(prevOrderCount.ordersPicked || 0),
piecesPicked: parseInt(prevPickingStats.piecesPicked || 0),
pickingTimeSeconds: parseInt(prevPickingStats.pickingTimeSeconds || 0)
};
const prevPickingHours = prevPicking.pickingTimeSeconds / 3600;
// Previous shipping data
const prevShippingWhere = previousRange.whereClause.replace(/date_placed/g, 'o.date_shipped');
const [prevShippingRows] = await connection.execute(
`SELECT
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
WHERE ${prevShippingWhere}
AND o.order_status IN (100, 92)`,
previousRange.params
);
const prevShipping = prevShippingRows[0] || { ordersShipped: 0, piecesShipped: 0 };
// Calculate previous period FTE and productivity
const prevFte = calculateFTE(prevTotalHours, previousRange.start || periodStart, previousRange.end || periodEnd);
const prevOrdersPerHour = prevProductiveHours > 0 ? parseInt(prevPicking.ordersPicked || 0) / prevProductiveHours : 0;
const prevPiecesPerHour = prevProductiveHours > 0 ? parseInt(prevPicking.piecesPicked || 0) / prevProductiveHours : 0;
previousTotals = {
hours: prevTotalHours,
productiveHours: prevProductiveHours,
activeEmployees: prevActiveEmployees,
fte: prevFte,
ordersPicked: parseInt(prevPicking.ordersPicked || 0),
piecesPicked: parseInt(prevPicking.piecesPicked || 0),
pickingHours: prevPickingHours,
ordersShipped: parseInt(prevShipping.ordersShipped || 0),
piecesShipped: parseInt(prevShipping.piecesShipped || 0),
ordersPerHour: prevOrdersPerHour,
piecesPerHour: prevPiecesPerHour,
};
// Calculate productivity metrics for comparison
const currentOrdersPerHour = totalProductiveHours > 0 ? totalOrdersPicked / totalProductiveHours : 0;
const currentPiecesPerHour = totalProductiveHours > 0 ? totalPiecesPicked / totalProductiveHours : 0;
comparison = {
hours: calculateComparison(totalHours, prevTotalHours),
productiveHours: calculateComparison(totalProductiveHours, prevProductiveHours),
activeEmployees: calculateComparison(activeEmployees, prevActiveEmployees),
fte: calculateComparison(fte, prevFte),
ordersPicked: calculateComparison(totalOrdersPicked, parseInt(prevPicking.ordersPicked || 0)),
piecesPicked: calculateComparison(totalPiecesPicked, parseInt(prevPicking.piecesPicked || 0)),
ordersShipped: calculateComparison(parseInt(shipping.ordersShipped || 0), parseInt(prevShipping.ordersShipped || 0)),
piecesShipped: calculateComparison(parseInt(shipping.piecesShipped || 0), parseInt(prevShipping.piecesShipped || 0)),
ordersPerHour: calculateComparison(currentOrdersPerHour, prevOrdersPerHour),
piecesPerHour: calculateComparison(currentPiecesPerHour, prevPiecesPerHour),
};
}
// Calculate efficiency (picking time vs productive hours)
const pickingEfficiency = totalProductiveHours > 0 ? (totalPickingHours / totalProductiveHours) * 100 : 0;
const response = {
dateRange,
totals: {
// Time metrics
hours: totalHours,
breakHours: totalBreakHours,
productiveHours: totalProductiveHours,
pickingHours: totalPickingHours,
// Employee metrics
activeEmployees,
fte,
weeksInPeriod,
// Picking metrics
ordersPicked: totalOrdersPicked,
piecesPicked: totalPiecesPicked,
ticketCount: totalTickets,
// Shipping metrics
ordersShipped: parseInt(shipping.ordersShipped || 0),
piecesShipped: parseInt(shipping.piecesShipped || 0),
// Calculated metrics - standardized to weekly
hoursPerWeek: weeksInPeriod > 0 ? totalHours / weeksInPeriod : 0,
hoursPerEmployeePerWeek: activeEmployees > 0 && weeksInPeriod > 0
? (totalHours / activeEmployees) / weeksInPeriod
: 0,
// Productivity metrics (uses productive hours - excludes breaks)
ordersPerHour: totalProductiveHours > 0 ? totalOrdersPicked / totalProductiveHours : 0,
piecesPerHour: totalProductiveHours > 0 ? totalPiecesPicked / totalProductiveHours : 0,
// Picking speed from database (more accurate, only counts picking time)
avgPickingSpeed,
// Efficiency metrics
pickingEfficiency,
},
previousTotals,
comparison,
byEmployee: {
hours: enrichedEmployeeHours,
picking: pickingByEmployee,
shipping: shippingByEmployee,
},
trend,
};
return { response, release };
};
let result;
try {
result = await Promise.race([mainOperation(), timeoutPromise]);
} catch (error) {
if (error.message.includes('timeout')) {
console.log(`[EMPLOYEE-METRICS] Request timed out in ${Date.now() - startTime}ms`);
}
throw error;
}
const { response, release } = result;
if (release) release();
console.log(`[EMPLOYEE-METRICS] Request completed in ${Date.now() - startTime}ms`);
res.json(response);
} catch (error) {
console.error('Error in /employee-metrics:', error);
console.log(`[EMPLOYEE-METRICS] Request failed in ${Date.now() - startTime}ms`);
res.status(500).json({ error: error.message });
}
});
// Health check
router.get('/health', async (req, res) => {
try {
const { connection, release } = await getDbConnection();
await connection.execute('SELECT 1 as test');
release();
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
pool: getPoolStatus(),
});
} catch (error) {
res.status(500).json({
status: 'unhealthy',
timestamp: new Date().toISOString(),
error: error.message,
});
}
});
// Helper functions
function calculateComparison(currentValue, previousValue) {
if (typeof previousValue !== 'number') {
return { absolute: null, percentage: null };
}
const absolute = typeof currentValue === 'number' ? currentValue - previousValue : null;
const percentage =
absolute !== null && previousValue !== 0
? (absolute / Math.abs(previousValue)) * 100
: null;
return { absolute, percentage };
}
function getPreviousPeriodRange(timeRange, startDate, endDate) {
if (timeRange && timeRange !== 'custom') {
const prevTimeRange = getPreviousTimeRange(timeRange);
if (!prevTimeRange || prevTimeRange === timeRange) {
return null;
}
return getTimeRangeConditions(prevTimeRange);
}
const hasCustomDates = (timeRange === 'custom' || !timeRange) && startDate && endDate;
if (!hasCustomDates) {
return null;
}
const start = new Date(startDate);
const end = new Date(endDate);
if (Number.isNaN(start.getTime()) || Number.isNaN(end.getTime())) {
return null;
}
const duration = end.getTime() - start.getTime();
if (!Number.isFinite(duration) || duration <= 0) {
return null;
}
const prevEnd = new Date(start.getTime() - 1);
const prevStart = new Date(prevEnd.getTime() - duration);
return getTimeRangeConditions('custom', prevStart.toISOString(), prevEnd.toISOString());
}
function getPreviousTimeRange(timeRange) {
const map = {
today: 'yesterday',
thisWeek: 'lastWeek',
thisMonth: 'lastMonth',
last7days: 'previous7days',
last30days: 'previous30days',
last90days: 'previous90days',
yesterday: 'twoDaysAgo'
};
return map[timeRange] || timeRange;
}
module.exports = router;

File diff suppressed because it is too large

View File

@@ -0,0 +1,484 @@
const express = require('express');
const { DateTime } = require('luxon');
const router = express.Router();
const { getDbConnection, getPoolStatus } = require('../db/connection');
const {
getTimeRangeConditions,
} = require('../utils/timeUtils');
const TIMEZONE = 'America/New_York';
// Main operations metrics endpoint - focused on picking and shipping
router.get('/', async (req, res) => {
const startTime = Date.now();
console.log(`[OPERATIONS-METRICS] Starting request for timeRange: ${req.query.timeRange}`);
const timeoutPromise = new Promise((_, reject) => {
setTimeout(() => reject(new Error('Request timeout after 30 seconds')), 30000);
});
try {
const mainOperation = async () => {
const { timeRange, startDate, endDate } = req.query;
console.log(`[OPERATIONS-METRICS] Getting DB connection...`);
const { connection, release } = await getDbConnection();
console.log(`[OPERATIONS-METRICS] DB connection obtained in ${Date.now() - startTime}ms`);
const { whereClause, params, dateRange } = getTimeRangeConditions(timeRange, startDate, endDate);
// Query for picking tickets - using subquery to avoid duplication from bucket join
// Ship-together orders: only count main orders (is_sub = 0 or NULL), not sub-orders
const pickingWhere = whereClause.replace(/date_placed/g, 'pt.createddate');
// First get picking ticket stats without the bucket join (to avoid duplication)
const pickingStatsQuery = `
SELECT
pt.createdby as employeeId,
e.firstname,
e.lastname,
COUNT(DISTINCT pt.pickingid) as ticketCount,
SUM(pt.totalpieces_picked) as piecesPicked,
SUM(TIMESTAMPDIFF(SECOND, pt.createddate, pt.closeddate)) as pickingTimeSeconds,
AVG(NULLIF(pt.picking_speed, 0)) as avgPickingSpeed
FROM picking_ticket pt
LEFT JOIN employees e ON pt.createdby = e.employeeid
WHERE ${pickingWhere}
AND pt.closeddate IS NOT NULL
GROUP BY pt.createdby, e.firstname, e.lastname
`;
// Separate query for order counts (needs bucket join for ship-together handling)
const orderCountQuery = `
SELECT
pt.createdby as employeeId,
COUNT(DISTINCT CASE WHEN ptb.is_sub = 0 OR ptb.is_sub IS NULL THEN ptb.orderid END) as ordersPicked
FROM picking_ticket pt
LEFT JOIN picking_ticket_buckets ptb ON pt.pickingid = ptb.pickingid
WHERE ${pickingWhere}
AND pt.closeddate IS NOT NULL
GROUP BY pt.createdby
`;
const [[pickingStatsRows], [orderCountRows]] = await Promise.all([
connection.execute(pickingStatsQuery, params),
connection.execute(orderCountQuery, params)
]);
// Merge the results
const orderCountMap = new Map();
orderCountRows.forEach(row => {
orderCountMap.set(row.employeeId, parseInt(row.ordersPicked || 0));
});
// Aggregate picking totals
let totalOrdersPicked = 0;
let totalPiecesPicked = 0;
let totalTickets = 0;
let totalPickingTimeSeconds = 0;
let pickingSpeedSum = 0;
let pickingSpeedCount = 0;
const pickingByEmployee = pickingStatsRows.map(row => {
const ordersPicked = orderCountMap.get(row.employeeId) || 0;
totalOrdersPicked += ordersPicked;
totalPiecesPicked += parseInt(row.piecesPicked || 0);
totalTickets += parseInt(row.ticketCount || 0);
totalPickingTimeSeconds += parseInt(row.pickingTimeSeconds || 0);
if (row.avgPickingSpeed && row.avgPickingSpeed > 0) {
pickingSpeedSum += parseFloat(row.avgPickingSpeed);
pickingSpeedCount++;
}
const empPickingHours = parseInt(row.pickingTimeSeconds || 0) / 3600;
return {
employeeId: row.employeeId,
name: `${row.firstname || ''} ${row.lastname || ''}`.trim() || `Employee ${row.employeeId}`,
ticketCount: parseInt(row.ticketCount || 0),
ordersPicked,
piecesPicked: parseInt(row.piecesPicked || 0),
pickingHours: empPickingHours,
avgPickingSpeed: row.avgPickingSpeed ? parseFloat(row.avgPickingSpeed) : null,
};
});
const totalPickingHours = totalPickingTimeSeconds / 3600;
const avgPickingSpeed = pickingSpeedCount > 0 ? pickingSpeedSum / pickingSpeedCount : 0;
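// Note: avgPickingSpeed is the unweighted mean of each employee's own average speed
// (one vote per employee), not a volume-weighted average across all tickets.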
// Query for shipped orders - totals
// Ship-together orders: only count main orders (order_type != 8 for sub-orders)
const shippingWhere = whereClause.replace(/date_placed/g, 'o.date_shipped');
const shippingQuery = `
SELECT
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
WHERE ${shippingWhere}
AND o.order_status IN (100, 92)
`;
const [shippingRows] = await connection.execute(shippingQuery, params);
const shipping = shippingRows[0] || { ordersShipped: 0, piecesShipped: 0 };
// Query for shipped orders by employee
const shippingByEmployeeQuery = `
SELECT
e.employeeid,
e.firstname,
e.lastname,
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
JOIN employees e ON o.stats_cid_shipped = e.cid
WHERE ${shippingWhere}
AND o.order_status IN (100, 92)
AND e.hidden = 0
AND e.disabled = 0
GROUP BY e.employeeid, e.firstname, e.lastname
ORDER BY ordersShipped DESC
`;
const [shippingByEmployeeRows] = await connection.execute(shippingByEmployeeQuery, params);
const shippingByEmployee = shippingByEmployeeRows.map(row => ({
employeeId: row.employeeid,
name: `${row.firstname || ''} ${row.lastname || ''}`.trim() || `Employee ${row.employeeid}`,
ordersShipped: parseInt(row.ordersShipped || 0),
piecesShipped: parseInt(row.piecesShipped || 0),
}));
// Calculate period dates
let periodStart, periodEnd;
if (dateRange?.start) {
periodStart = new Date(dateRange.start);
} else if (params[0]) {
periodStart = new Date(params[0]);
} else {
periodStart = new Date();
periodStart.setDate(periodStart.getDate() - 30);
}
if (dateRange?.end) {
periodEnd = new Date(dateRange.end);
} else if (params[1]) {
periodEnd = new Date(params[1]);
} else {
periodEnd = new Date();
}
// Calculate productivity (orders/pieces per picking hour)
const ordersPerHour = totalPickingHours > 0 ? totalOrdersPicked / totalPickingHours : 0;
const piecesPerHour = totalPickingHours > 0 ? totalPiecesPicked / totalPickingHours : 0;
// Get daily trend data for picking
// Use DATE_FORMAT to get date string in Eastern timezone
// Business day starts at 1 AM, so subtract 1 hour before taking the date
const pickingTrendWhere = whereClause.replace(/date_placed/g, 'pt.createddate');
const pickingTrendQuery = `
SELECT
pt_agg.date,
COALESCE(order_counts.ordersPicked, 0) as ordersPicked,
pt_agg.piecesPicked
FROM (
SELECT
DATE_FORMAT(DATE_SUB(pt.createddate, INTERVAL 1 HOUR), '%Y-%m-%d') as date,
COALESCE(SUM(pt.totalpieces_picked), 0) as piecesPicked
FROM picking_ticket pt
WHERE ${pickingTrendWhere}
AND pt.closeddate IS NOT NULL
GROUP BY DATE_FORMAT(DATE_SUB(pt.createddate, INTERVAL 1 HOUR), '%Y-%m-%d')
) pt_agg
LEFT JOIN (
SELECT
DATE_FORMAT(DATE_SUB(pt.createddate, INTERVAL 1 HOUR), '%Y-%m-%d') as date,
COUNT(DISTINCT CASE WHEN ptb.is_sub = 0 OR ptb.is_sub IS NULL THEN ptb.orderid END) as ordersPicked
FROM picking_ticket pt
LEFT JOIN picking_ticket_buckets ptb ON pt.pickingid = ptb.pickingid
WHERE ${pickingTrendWhere}
AND pt.closeddate IS NOT NULL
GROUP BY DATE_FORMAT(DATE_SUB(pt.createddate, INTERVAL 1 HOUR), '%Y-%m-%d')
) order_counts ON pt_agg.date = order_counts.date
ORDER BY pt_agg.date
`;
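// Worked example (assuming pt.createddate holds Eastern local time, per the business-day
// note above): a ticket created at 2026-02-07 00:30 is shifted back one hour to 23:30 and
// is therefore grouped under the business date '2026-02-06'.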
// Get shipping trend data
const shippingTrendWhere = whereClause.replace(/date_placed/g, 'o.date_shipped');
const shippingTrendQuery = `
SELECT
DATE_FORMAT(DATE_SUB(o.date_shipped, INTERVAL 1 HOUR), '%Y-%m-%d') as date,
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
WHERE ${shippingTrendWhere}
AND o.order_status IN (100, 92)
GROUP BY DATE_FORMAT(DATE_SUB(o.date_shipped, INTERVAL 1 HOUR), '%Y-%m-%d')
ORDER BY date
`;
const [[pickingTrendRows], [shippingTrendRows]] = await Promise.all([
connection.execute(pickingTrendQuery, [...params, ...params]),
connection.execute(shippingTrendQuery, params),
]);
// Create maps for trend data
const pickingByDate = new Map();
pickingTrendRows.forEach(row => {
const date = String(row.date);
pickingByDate.set(date, {
ordersPicked: parseInt(row.ordersPicked || 0),
piecesPicked: parseInt(row.piecesPicked || 0),
});
});
const shippingByDate = new Map();
shippingTrendRows.forEach(row => {
const date = String(row.date);
shippingByDate.set(date, {
ordersShipped: parseInt(row.ordersShipped || 0),
piecesShipped: parseInt(row.piecesShipped || 0),
});
});
// Generate all dates in the period range for complete trend data
const allDatesInRange = [];
const startDt = DateTime.fromJSDate(periodStart).setZone(TIMEZONE).startOf('day');
const endDt = DateTime.fromJSDate(periodEnd).setZone(TIMEZONE).startOf('day');
let currentDt = startDt;
while (currentDt <= endDt) {
allDatesInRange.push(currentDt.toFormat('yyyy-MM-dd'));
currentDt = currentDt.plus({ days: 1 });
}
// Build trend data for all dates in range
const trend = allDatesInRange.map(date => {
const picking = pickingByDate.get(date) || { ordersPicked: 0, piecesPicked: 0 };
const shippingData = shippingByDate.get(date) || { ordersShipped: 0, piecesShipped: 0 };
// Parse date string in Eastern timezone to get proper ISO timestamp
const dateDt = DateTime.fromFormat(date, 'yyyy-MM-dd', { zone: TIMEZONE });
return {
date,
timestamp: dateDt.toISO(),
ordersPicked: picking.ordersPicked,
piecesPicked: picking.piecesPicked,
ordersShipped: shippingData.ordersShipped,
piecesShipped: shippingData.piecesShipped,
};
});
// Get previous period data for comparison
const previousRange = getPreviousPeriodRange(timeRange, startDate, endDate);
let comparison = null;
let previousTotals = null;
if (previousRange) {
// Previous picking data
const prevPickingWhere = previousRange.whereClause.replace(/date_placed/g, 'pt.createddate');
const [[prevPickingStatsRows], [prevOrderCountRows]] = await Promise.all([
connection.execute(
`SELECT
SUM(pt.totalpieces_picked) as piecesPicked,
SUM(TIMESTAMPDIFF(SECOND, pt.createddate, pt.closeddate)) as pickingTimeSeconds
FROM picking_ticket pt
WHERE ${prevPickingWhere}
AND pt.closeddate IS NOT NULL`,
previousRange.params
),
connection.execute(
`SELECT
COUNT(DISTINCT CASE WHEN ptb.is_sub = 0 OR ptb.is_sub IS NULL THEN ptb.orderid END) as ordersPicked
FROM picking_ticket pt
LEFT JOIN picking_ticket_buckets ptb ON pt.pickingid = ptb.pickingid
WHERE ${prevPickingWhere}
AND pt.closeddate IS NOT NULL`,
previousRange.params
)
]);
const prevPickingStats = prevPickingStatsRows[0] || { piecesPicked: 0, pickingTimeSeconds: 0 };
const prevOrderCount = prevOrderCountRows[0] || { ordersPicked: 0 };
const prevPicking = {
ordersPicked: parseInt(prevOrderCount.ordersPicked || 0),
piecesPicked: parseInt(prevPickingStats.piecesPicked || 0),
pickingTimeSeconds: parseInt(prevPickingStats.pickingTimeSeconds || 0)
};
const prevPickingHours = prevPicking.pickingTimeSeconds / 3600;
// Previous shipping data
const prevShippingWhere = previousRange.whereClause.replace(/date_placed/g, 'o.date_shipped');
const [prevShippingRows] = await connection.execute(
`SELECT
COUNT(DISTINCT CASE WHEN o.order_type != 8 OR o.order_type IS NULL THEN o.order_id END) as ordersShipped,
COALESCE(SUM(o.stats_prod_pieces), 0) as piecesShipped
FROM _order o
WHERE ${prevShippingWhere}
AND o.order_status IN (100, 92)`,
previousRange.params
);
const prevShipping = prevShippingRows[0] || { ordersShipped: 0, piecesShipped: 0 };
// Calculate previous productivity
const prevOrdersPerHour = prevPickingHours > 0 ? parseInt(prevPicking.ordersPicked || 0) / prevPickingHours : 0;
const prevPiecesPerHour = prevPickingHours > 0 ? parseInt(prevPicking.piecesPicked || 0) / prevPickingHours : 0;
previousTotals = {
ordersPicked: parseInt(prevPicking.ordersPicked || 0),
piecesPicked: parseInt(prevPicking.piecesPicked || 0),
pickingHours: prevPickingHours,
ordersShipped: parseInt(prevShipping.ordersShipped || 0),
piecesShipped: parseInt(prevShipping.piecesShipped || 0),
ordersPerHour: prevOrdersPerHour,
piecesPerHour: prevPiecesPerHour,
};
comparison = {
ordersPicked: calculateComparison(totalOrdersPicked, parseInt(prevPicking.ordersPicked || 0)),
piecesPicked: calculateComparison(totalPiecesPicked, parseInt(prevPicking.piecesPicked || 0)),
ordersShipped: calculateComparison(parseInt(shipping.ordersShipped || 0), parseInt(prevShipping.ordersShipped || 0)),
piecesShipped: calculateComparison(parseInt(shipping.piecesShipped || 0), parseInt(prevShipping.piecesShipped || 0)),
ordersPerHour: calculateComparison(ordersPerHour, prevOrdersPerHour),
piecesPerHour: calculateComparison(piecesPerHour, prevPiecesPerHour),
};
}
const response = {
dateRange,
totals: {
// Picking metrics
ordersPicked: totalOrdersPicked,
piecesPicked: totalPiecesPicked,
ticketCount: totalTickets,
pickingHours: totalPickingHours,
// Shipping metrics
ordersShipped: parseInt(shipping.ordersShipped || 0),
piecesShipped: parseInt(shipping.piecesShipped || 0),
// Productivity metrics
ordersPerHour,
piecesPerHour,
avgPickingSpeed,
},
previousTotals,
comparison,
byEmployee: {
picking: pickingByEmployee,
shipping: shippingByEmployee,
},
trend,
};
return { response, release };
};
let result;
try {
result = await Promise.race([mainOperation(), timeoutPromise]);
} catch (error) {
if (error.message.includes('timeout')) {
console.log(`[OPERATIONS-METRICS] Request timed out in ${Date.now() - startTime}ms`);
throw error;
}
throw error;
}
const { response, release } = result;
if (release) release();
console.log(`[OPERATIONS-METRICS] Request completed in ${Date.now() - startTime}ms`);
res.json(response);
} catch (error) {
console.error('Error in /operations-metrics:', error);
console.log(`[OPERATIONS-METRICS] Request failed in ${Date.now() - startTime}ms`);
res.status(500).json({ error: error.message });
}
});
// Health check
router.get('/health', async (req, res) => {
try {
const { connection, release } = await getDbConnection();
await connection.execute('SELECT 1 as test');
release();
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
pool: getPoolStatus(),
});
} catch (error) {
res.status(500).json({
status: 'unhealthy',
timestamp: new Date().toISOString(),
error: error.message,
});
}
});
// Helper functions
function calculateComparison(currentValue, previousValue) {
if (typeof previousValue !== 'number') {
return { absolute: null, percentage: null };
}
const absolute = typeof currentValue === 'number' ? currentValue - previousValue : null;
const percentage =
absolute !== null && previousValue !== 0
? (absolute / Math.abs(previousValue)) * 100
: null;
return { absolute, percentage };
}
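// Worked example (illustrative numbers): calculateComparison(120, 100) returns
// { absolute: 20, percentage: 20 } and calculateComparison(80, 100) returns
// { absolute: -20, percentage: -20 }; when the previous value is 0 the percentage
// is left as null rather than dividing by zero.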
function getPreviousPeriodRange(timeRange, startDate, endDate) {
if (timeRange && timeRange !== 'custom') {
const prevTimeRange = getPreviousTimeRange(timeRange);
if (!prevTimeRange || prevTimeRange === timeRange) {
return null;
}
return getTimeRangeConditions(prevTimeRange);
}
const hasCustomDates = (timeRange === 'custom' || !timeRange) && startDate && endDate;
if (!hasCustomDates) {
return null;
}
const start = new Date(startDate);
const end = new Date(endDate);
if (Number.isNaN(start.getTime()) || Number.isNaN(end.getTime())) {
return null;
}
const duration = end.getTime() - start.getTime();
if (!Number.isFinite(duration) || duration <= 0) {
return null;
}
const prevEnd = new Date(start.getTime() - 1);
const prevStart = new Date(prevEnd.getTime() - duration);
return getTimeRangeConditions('custom', prevStart.toISOString(), prevEnd.toISOString());
}
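// Worked example (hypothetical custom range): startDate = 2026-02-01T00:00:00Z and
// endDate = 2026-02-08T00:00:00Z give a 7-day duration, so the previous period becomes
// 2026-01-24T23:59:59.999Z through 2026-01-31T23:59:59.999Z, the immediately
// preceding window of equal length.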
function getPreviousTimeRange(timeRange) {
const map = {
today: 'yesterday',
thisWeek: 'lastWeek',
thisMonth: 'lastMonth',
last7days: 'previous7days',
last30days: 'previous30days',
last90days: 'previous90days',
yesterday: 'twoDaysAgo'
};
return map[timeRange] || timeRange;
}
module.exports = router;

View File

@@ -0,0 +1,505 @@
const express = require('express');
const { DateTime } = require('luxon');
const router = express.Router();
const { getDbConnection, getPoolStatus } = require('../db/connection');
const TIMEZONE = 'America/New_York';
// Punch types from the database
const PUNCH_TYPES = {
OUT: 0,
IN: 1,
BREAK_START: 2,
BREAK_END: 3,
};
// Standard hours for overtime calculation (40 hours per week)
const STANDARD_WEEKLY_HOURS = 40;
// Reference pay period start date (January 25, 2026 is a Sunday, first day of a pay period)
const PAY_PERIOD_REFERENCE = DateTime.fromObject(
{ year: 2026, month: 1, day: 25 },
{ zone: TIMEZONE }
);
/**
* Calculate the pay period that contains a given date
* Pay periods are 14 days starting on Sunday
* @param {DateTime} date - The date to find the pay period for
* @returns {{ start: DateTime, end: DateTime, week1: { start: DateTime, end: DateTime }, week2: { start: DateTime, end: DateTime } }}
*/
function getPayPeriodForDate(date) {
const dt = DateTime.isDateTime(date) ? date : DateTime.fromJSDate(date, { zone: TIMEZONE });
// Calculate days since reference
const daysSinceReference = Math.floor(dt.diff(PAY_PERIOD_REFERENCE, 'days').days);
// Find which pay period this falls into (can be negative for dates before reference)
const payPeriodIndex = Math.floor(daysSinceReference / 14);
// Calculate the start of this pay period
const start = PAY_PERIOD_REFERENCE.plus({ days: payPeriodIndex * 14 }).startOf('day');
const end = start.plus({ days: 13 }).endOf('day');
// Week 1: Sunday through Saturday
const week1Start = start;
const week1End = start.plus({ days: 6 }).endOf('day');
// Week 2: Sunday through Saturday
const week2Start = start.plus({ days: 7 }).startOf('day');
const week2End = end;
return {
start,
end,
week1: { start: week1Start, end: week1End },
week2: { start: week2Start, end: week2End },
};
}
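// Worked example (hypothetical date): getPayPeriodForDate(DateTime.fromISO('2026-02-10', { zone: TIMEZONE }))
// gives daysSinceReference = 16 and payPeriodIndex = 1, so the period runs
// Sun 2026-02-08 through Sat 2026-02-21, with week1 = Feb 8-14 and week2 = Feb 15-21.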
/**
* Get the current pay period
*/
function getCurrentPayPeriod() {
return getPayPeriodForDate(DateTime.now().setZone(TIMEZONE));
}
/**
* Navigate to previous or next pay period
* @param {DateTime} currentStart - Current pay period start
* @param {number} offset - Number of pay periods to move (negative for previous)
*/
function navigatePayPeriod(currentStart, offset) {
const newStart = currentStart.plus({ days: offset * 14 });
return getPayPeriodForDate(newStart);
}
/**
* Calculate working hours from timeclock entries, broken down by week
* @param {Array} punches - Timeclock punch entries
* @param {Object} payPeriod - Pay period with week boundaries
*/
function calculateHoursByWeek(punches, payPeriod) {
// Group by employee
const byEmployee = new Map();
punches.forEach(punch => {
if (!byEmployee.has(punch.EmployeeID)) {
byEmployee.set(punch.EmployeeID, {
employeeId: punch.EmployeeID,
firstname: punch.firstname || '',
lastname: punch.lastname || '',
punches: [],
});
}
byEmployee.get(punch.EmployeeID).punches.push(punch);
});
const employeeResults = [];
let totalHours = 0;
let totalBreakHours = 0;
let totalOvertimeHours = 0;
let totalRegularHours = 0;
let week1TotalHours = 0;
let week1TotalOvertime = 0;
let week2TotalHours = 0;
let week2TotalOvertime = 0;
byEmployee.forEach((employeeData) => {
// Sort punches by timestamp
employeeData.punches.sort((a, b) => new Date(a.TimeStamp) - new Date(b.TimeStamp));
// Calculate hours for each week
const week1Punches = employeeData.punches.filter(p => {
const dt = DateTime.fromJSDate(new Date(p.TimeStamp), { zone: TIMEZONE });
return dt >= payPeriod.week1.start && dt <= payPeriod.week1.end;
});
const week2Punches = employeeData.punches.filter(p => {
const dt = DateTime.fromJSDate(new Date(p.TimeStamp), { zone: TIMEZONE });
return dt >= payPeriod.week2.start && dt <= payPeriod.week2.end;
});
const week1Hours = calculateHoursFromPunches(week1Punches);
const week2Hours = calculateHoursFromPunches(week2Punches);
// Calculate overtime per week (anything over 40 hours)
const week1Overtime = Math.max(0, week1Hours.hours - STANDARD_WEEKLY_HOURS);
const week2Overtime = Math.max(0, week2Hours.hours - STANDARD_WEEKLY_HOURS);
const week1Regular = week1Hours.hours - week1Overtime;
const week2Regular = week2Hours.hours - week2Overtime;
const employeeTotal = week1Hours.hours + week2Hours.hours;
const employeeBreaks = week1Hours.breakHours + week2Hours.breakHours;
const employeeOvertime = week1Overtime + week2Overtime;
const employeeRegular = employeeTotal - employeeOvertime;
totalHours += employeeTotal;
totalBreakHours += employeeBreaks;
totalOvertimeHours += employeeOvertime;
totalRegularHours += employeeRegular;
week1TotalHours += week1Hours.hours;
week1TotalOvertime += week1Overtime;
week2TotalHours += week2Hours.hours;
week2TotalOvertime += week2Overtime;
employeeResults.push({
employeeId: employeeData.employeeId,
name: `${employeeData.firstname} ${employeeData.lastname}`.trim() || `Employee ${employeeData.employeeId}`,
week1Hours: week1Hours.hours,
week1BreakHours: week1Hours.breakHours,
week1Overtime,
week1Regular,
week2Hours: week2Hours.hours,
week2BreakHours: week2Hours.breakHours,
week2Overtime,
week2Regular,
totalHours: employeeTotal,
totalBreakHours: employeeBreaks,
overtimeHours: employeeOvertime,
regularHours: employeeRegular,
});
});
// Sort by total hours descending
employeeResults.sort((a, b) => b.totalHours - a.totalHours);
return {
byEmployee: employeeResults,
totals: {
hours: totalHours,
breakHours: totalBreakHours,
overtimeHours: totalOvertimeHours,
regularHours: totalRegularHours,
activeEmployees: employeeResults.filter(e => e.totalHours > 0).length,
},
byWeek: [
{
week: 1,
start: payPeriod.week1.start.toISODate(),
end: payPeriod.week1.end.toISODate(),
hours: week1TotalHours,
overtime: week1TotalOvertime,
regular: week1TotalHours - week1TotalOvertime,
},
{
week: 2,
start: payPeriod.week2.start.toISODate(),
end: payPeriod.week2.end.toISODate(),
hours: week2TotalHours,
overtime: week2TotalOvertime,
regular: week2TotalHours - week2TotalOvertime,
},
],
};
}
/**
* Calculate hours from a set of punches
*/
function calculateHoursFromPunches(punches) {
let hours = 0;
let breakHours = 0;
let currentIn = null;
let breakStart = null;
punches.forEach(punch => {
const punchTime = new Date(punch.TimeStamp);
switch (punch.PunchType) {
case PUNCH_TYPES.IN:
currentIn = punchTime;
break;
case PUNCH_TYPES.OUT:
if (currentIn) {
hours += (punchTime - currentIn) / (1000 * 60 * 60);
currentIn = null;
}
break;
case PUNCH_TYPES.BREAK_START:
breakStart = punchTime;
break;
case PUNCH_TYPES.BREAK_END:
if (breakStart) {
breakHours += (punchTime - breakStart) / (1000 * 60 * 60);
breakStart = null;
}
break;
}
});
return { hours, breakHours };
}
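// Worked example (hypothetical punches in chronological order):
// IN 09:00, BREAK_START 12:00, BREAK_END 12:30, OUT 17:00 yields
// hours = 8 (the full IN-to-OUT span) and breakHours = 0.5; break time is
// reported separately and is not subtracted from hours here.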
/**
* Calculate FTE for a pay period (based on 80 hours = 1 FTE for 2-week period)
* @param {number} totalHours - Total hours worked
* @param {number} elapsedFraction - Fraction of the period elapsed (0-1). Defaults to 1 for complete periods.
*/
function calculateFTE(totalHours, elapsedFraction = 1) {
const fullTimePeriodHours = STANDARD_WEEKLY_HOURS * 2; // 80 hours for 2 weeks
const proratedHours = fullTimePeriodHours * elapsedFraction;
return proratedHours > 0 ? totalHours / proratedHours : 0;
}
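// Worked example (illustrative): a completed period with 160 total hours gives
// calculateFTE(160) = 160 / 80 = 2.0 FTE; halfway through the current period
// (elapsedFraction = 0.5), 40 hours gives calculateFTE(40, 0.5) = 40 / 40 = 1.0 FTE.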
// Main payroll metrics endpoint
router.get('/', async (req, res) => {
const startTime = Date.now();
console.log(`[PAYROLL-METRICS] Starting request`);
const timeoutPromise = new Promise((_, reject) => {
setTimeout(() => reject(new Error('Request timeout after 30 seconds')), 30000);
});
try {
const mainOperation = async () => {
const { payPeriodStart, navigate } = req.query;
let payPeriod;
if (payPeriodStart) {
// Parse the provided start date
const startDate = DateTime.fromISO(payPeriodStart, { zone: TIMEZONE });
if (!startDate.isValid) {
return res.status(400).json({ error: 'Invalid payPeriodStart date format' });
}
payPeriod = getPayPeriodForDate(startDate);
} else {
// Default to current pay period
payPeriod = getCurrentPayPeriod();
}
// Handle navigation if requested
if (navigate) {
const offset = parseInt(navigate, 10);
if (!isNaN(offset)) {
payPeriod = navigatePayPeriod(payPeriod.start, offset);
}
}
console.log(`[PAYROLL-METRICS] Getting DB connection...`);
const { connection, release } = await getDbConnection();
console.log(`[PAYROLL-METRICS] DB connection obtained in ${Date.now() - startTime}ms`);
// Build query for the pay period
const periodStart = payPeriod.start.toJSDate();
const periodEnd = payPeriod.end.toJSDate();
const timeclockQuery = `
SELECT
tc.EmployeeID,
tc.TimeStamp,
tc.PunchType,
e.firstname,
e.lastname
FROM timeclock tc
LEFT JOIN employees e ON tc.EmployeeID = e.employeeid
WHERE tc.TimeStamp >= ? AND tc.TimeStamp <= ?
AND e.hidden = 0
AND e.disabled = 0
ORDER BY tc.EmployeeID, tc.TimeStamp
`;
const [timeclockRows] = await connection.execute(timeclockQuery, [periodStart, periodEnd]);
// Calculate hours with week breakdown
const hoursData = calculateHoursByWeek(timeclockRows, payPeriod);
// Calculate FTE — prorate for in-progress periods so the value reflects
// the pace employees are on rather than raw hours / 80
let elapsedFraction = 1;
if (isCurrentPayPeriod(payPeriod)) {
const now = DateTime.now().setZone(TIMEZONE);
const elapsedDays = Math.max(1, Math.ceil(now.diff(payPeriod.start, 'days').days));
elapsedFraction = Math.min(1, elapsedDays / 14);
}
const fte = calculateFTE(hoursData.totals.hours, elapsedFraction);
const activeEmployees = hoursData.totals.activeEmployees;
const avgHoursPerEmployee = activeEmployees > 0 ? hoursData.totals.hours / activeEmployees : 0;
// Get previous pay period data for comparison
const prevPayPeriod = navigatePayPeriod(payPeriod.start, -1);
const [prevTimeclockRows] = await connection.execute(timeclockQuery, [
prevPayPeriod.start.toJSDate(),
prevPayPeriod.end.toJSDate(),
]);
const prevHoursData = calculateHoursByWeek(prevTimeclockRows, prevPayPeriod);
const prevFte = calculateFTE(prevHoursData.totals.hours);
// Calculate comparisons
const comparison = {
hours: calculateComparison(hoursData.totals.hours, prevHoursData.totals.hours),
overtimeHours: calculateComparison(hoursData.totals.overtimeHours, prevHoursData.totals.overtimeHours),
fte: calculateComparison(fte, prevFte),
activeEmployees: calculateComparison(hoursData.totals.activeEmployees, prevHoursData.totals.activeEmployees),
};
const response = {
payPeriod: {
start: payPeriod.start.toISODate(),
end: payPeriod.end.toISODate(),
label: formatPayPeriodLabel(payPeriod),
week1: {
start: payPeriod.week1.start.toISODate(),
end: payPeriod.week1.end.toISODate(),
label: formatWeekLabel(payPeriod.week1),
},
week2: {
start: payPeriod.week2.start.toISODate(),
end: payPeriod.week2.end.toISODate(),
label: formatWeekLabel(payPeriod.week2),
},
isCurrent: isCurrentPayPeriod(payPeriod),
},
totals: {
hours: hoursData.totals.hours,
breakHours: hoursData.totals.breakHours,
overtimeHours: hoursData.totals.overtimeHours,
regularHours: hoursData.totals.regularHours,
activeEmployees,
fte,
avgHoursPerEmployee,
},
previousTotals: {
hours: prevHoursData.totals.hours,
overtimeHours: prevHoursData.totals.overtimeHours,
activeEmployees: prevHoursData.totals.activeEmployees,
fte: prevFte,
},
comparison,
byEmployee: hoursData.byEmployee,
byWeek: hoursData.byWeek,
};
return { response, release };
};
let result;
try {
result = await Promise.race([mainOperation(), timeoutPromise]);
} catch (error) {
if (error.message.includes('timeout')) {
console.log(`[PAYROLL-METRICS] Request timed out in ${Date.now() - startTime}ms`);
throw error;
}
throw error;
}
const { response, release } = result;
if (release) release();
// If mainOperation already sent a response (e.g. the 400 for an invalid payPeriodStart),
// don't try to respond again.
if (res.headersSent) return;
console.log(`[PAYROLL-METRICS] Request completed in ${Date.now() - startTime}ms`);
res.json(response);
} catch (error) {
console.error('Error in /payroll-metrics:', error);
console.log(`[PAYROLL-METRICS] Request failed in ${Date.now() - startTime}ms`);
res.status(500).json({ error: error.message });
}
});
// Get pay period info endpoint (for navigation without full data)
router.get('/period-info', async (req, res) => {
try {
const { payPeriodStart, navigate } = req.query;
let payPeriod;
if (payPeriodStart) {
const startDate = DateTime.fromISO(payPeriodStart, { zone: TIMEZONE });
if (!startDate.isValid) {
return res.status(400).json({ error: 'Invalid payPeriodStart date format' });
}
payPeriod = getPayPeriodForDate(startDate);
} else {
payPeriod = getCurrentPayPeriod();
}
if (navigate) {
const offset = parseInt(navigate, 10);
if (!isNaN(offset)) {
payPeriod = navigatePayPeriod(payPeriod.start, offset);
}
}
res.json({
payPeriod: {
start: payPeriod.start.toISODate(),
end: payPeriod.end.toISODate(),
label: formatPayPeriodLabel(payPeriod),
week1: {
start: payPeriod.week1.start.toISODate(),
end: payPeriod.week1.end.toISODate(),
label: formatWeekLabel(payPeriod.week1),
},
week2: {
start: payPeriod.week2.start.toISODate(),
end: payPeriod.week2.end.toISODate(),
label: formatWeekLabel(payPeriod.week2),
},
isCurrent: isCurrentPayPeriod(payPeriod),
},
});
} catch (error) {
console.error('Error in /payroll-metrics/period-info:', error);
res.status(500).json({ error: error.message });
}
});
// Health check
router.get('/health', async (req, res) => {
try {
const { connection, release } = await getDbConnection();
await connection.execute('SELECT 1 as test');
release();
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
pool: getPoolStatus(),
});
} catch (error) {
res.status(500).json({
status: 'unhealthy',
timestamp: new Date().toISOString(),
error: error.message,
});
}
});
// Helper functions
function calculateComparison(currentValue, previousValue) {
if (typeof previousValue !== 'number') {
return { absolute: null, percentage: null };
}
const absolute = typeof currentValue === 'number' ? currentValue - previousValue : null;
const percentage =
absolute !== null && previousValue !== 0
? (absolute / Math.abs(previousValue)) * 100
: null;
return { absolute, percentage };
}
function formatPayPeriodLabel(payPeriod) {
const startStr = payPeriod.start.toFormat('MMM d');
const endStr = payPeriod.end.toFormat('MMM d, yyyy');
return `${startStr} - ${endStr}`;
}
function formatWeekLabel(week) {
const startStr = week.start.toFormat('MMM d');
const endStr = week.end.toFormat('MMM d');
return `${startStr} - ${endStr}`;
}
function isCurrentPayPeriod(payPeriod) {
const now = DateTime.now().setZone(TIMEZONE);
return now >= payPeriod.start && now <= payPeriod.end;
}
module.exports = router;

View File

@@ -0,0 +1,57 @@
const express = require('express');
const router = express.Router();
const { getDbConnection, getCachedQuery } = require('../db/connection');
// Test endpoint to count orders
router.get('/order-count', async (req, res) => {
try {
const { connection } = await getDbConnection();
// Simple query to count orders from _order table
const queryFn = async () => {
const [rows] = await connection.execute('SELECT COUNT(*) as count FROM _order');
return rows[0].count;
};
const cacheKey = 'order-count';
const count = await getCachedQuery(cacheKey, 'default', queryFn);
res.json({
success: true,
data: {
orderCount: count,
timestamp: new Date().toISOString()
}
});
} catch (error) {
console.error('Error fetching order count:', error);
res.status(500).json({
success: false,
error: error.message
});
}
});
// Test connection endpoint
router.get('/test-connection', async (req, res) => {
try {
const { connection } = await getDbConnection();
// Test the connection with a simple query
const [rows] = await connection.execute('SELECT 1 as test');
res.json({
success: true,
message: 'Database connection successful',
data: rows[0]
});
} catch (error) {
console.error('Error testing connection:', error);
res.status(500).json({
success: false,
error: error.message
});
}
});
module.exports = router;

View File

@@ -0,0 +1,102 @@
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const morgan = require('morgan');
const compression = require('compression');
const fs = require('fs');
const path = require('path');
const { closeAllConnections } = require('./db/connection');
const app = express();
const PORT = process.env.ACOT_PORT || 3012;
// Create logs directory if it doesn't exist
const logDir = path.join(__dirname, 'logs/app');
if (!fs.existsSync(logDir)) {
fs.mkdirSync(logDir, { recursive: true });
}
// Create a write stream for access logs
const accessLogStream = fs.createWriteStream(
path.join(logDir, 'access.log'),
{ flags: 'a' }
);
// Middleware
app.use(compression());
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
// Logging middleware
if (process.env.NODE_ENV === 'production') {
app.use(morgan('combined', { stream: accessLogStream }));
} else {
app.use(morgan('dev'));
}
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
service: 'acot-server',
timestamp: new Date().toISOString(),
uptime: process.uptime()
});
});
// Routes
app.use('/api/acot/test', require('./routes/test'));
app.use('/api/acot/events', require('./routes/events'));
app.use('/api/acot/discounts', require('./routes/discounts'));
app.use('/api/acot/employee-metrics', require('./routes/employee-metrics'));
app.use('/api/acot/payroll-metrics', require('./routes/payroll-metrics'));
app.use('/api/acot/operations-metrics', require('./routes/operations-metrics'));
// Error handling middleware
app.use((err, req, res, next) => {
console.error('Unhandled error:', err);
res.status(500).json({
success: false,
error: process.env.NODE_ENV === 'production'
? 'Internal server error'
: err.message
});
});
// 404 handler
app.use((req, res) => {
res.status(404).json({
success: false,
error: 'Route not found'
});
});
// Start server
const server = app.listen(PORT, () => {
console.log(`ACOT Server running on port ${PORT}`);
console.log(`Environment: ${process.env.NODE_ENV}`);
});
// Graceful shutdown
const gracefulShutdown = async () => {
console.log('Shutdown signal received: closing HTTP server');
server.close(async () => {
console.log('HTTP server closed');
// Close database connections
try {
await closeAllConnections();
console.log('Database connections closed');
} catch (error) {
console.error('Error closing database connections:', error);
}
process.exit(0);
});
};
process.on('SIGTERM', gracefulShutdown);
process.on('SIGINT', gracefulShutdown);
module.exports = app;

View File

@@ -0,0 +1,312 @@
const { DateTime } = require('luxon');
const TIMEZONE = 'America/New_York';
const DB_TIMEZONE = 'UTC-05:00';
const BUSINESS_DAY_START_HOUR = 1; // 1 AM Eastern
const WEEK_START_DAY = 7; // Sunday (Luxon uses 1 = Monday, 7 = Sunday)
const DB_DATETIME_FORMAT = 'yyyy-LL-dd HH:mm:ss';
const isDateTime = (value) => DateTime.isDateTime(value);
const ensureDateTime = (value, { zone = TIMEZONE } = {}) => {
if (!value) return null;
if (isDateTime(value)) {
return value.setZone(zone);
}
if (value instanceof Date) {
return DateTime.fromJSDate(value, { zone });
}
if (typeof value === 'number') {
return DateTime.fromMillis(value, { zone });
}
if (typeof value === 'string') {
let dt = DateTime.fromISO(value, { zone, setZone: true });
if (!dt.isValid) {
dt = DateTime.fromSQL(value, { zone });
}
return dt.isValid ? dt : null;
}
return null;
};
const getNow = () => DateTime.now().setZone(TIMEZONE);
const getDayStart = (input = getNow()) => {
const dt = ensureDateTime(input);
if (!dt || !dt.isValid) {
const fallback = getNow();
return fallback.set({
hour: BUSINESS_DAY_START_HOUR,
minute: 0,
second: 0,
millisecond: 0
});
}
const sameDayStart = dt.set({
hour: BUSINESS_DAY_START_HOUR,
minute: 0,
second: 0,
millisecond: 0
});
return dt.hour < BUSINESS_DAY_START_HOUR
? sameDayStart.minus({ days: 1 })
: sameDayStart;
};
const getDayEnd = (input = getNow()) => {
return getDayStart(input).plus({ days: 1 }).minus({ milliseconds: 1 });
};
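// Worked example (hypothetical timestamps): with BUSINESS_DAY_START_HOUR = 1,
// getDayStart('2026-02-07T00:30') resolves to 2026-02-06 01:00 Eastern (still the
// previous business day), while getDayStart('2026-02-07T09:00') resolves to
// 2026-02-07 01:00 and its getDayEnd to 2026-02-08 00:59:59.999.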
const getWeekStart = (input = getNow()) => {
const dt = ensureDateTime(input);
if (!dt || !dt.isValid) {
return getDayStart();
}
const startOfWeek = dt.set({ weekday: WEEK_START_DAY }).startOf('day');
const normalized = startOfWeek > dt ? startOfWeek.minus({ weeks: 1 }) : startOfWeek;
return normalized.set({
hour: BUSINESS_DAY_START_HOUR,
minute: 0,
second: 0,
millisecond: 0
});
};
const getRangeForTimeRange = (timeRange = 'today', now = getNow()) => {
const current = ensureDateTime(now);
if (!current || !current.isValid) {
throw new Error('Invalid reference time for range calculation');
}
switch (timeRange) {
case 'today': {
return {
start: getDayStart(current),
end: getDayEnd(current)
};
}
case 'yesterday': {
const target = current.minus({ days: 1 });
return {
start: getDayStart(target),
end: getDayEnd(target)
};
}
case 'twoDaysAgo': {
const target = current.minus({ days: 2 });
return {
start: getDayStart(target),
end: getDayEnd(target)
};
}
case 'thisWeek': {
return {
start: getWeekStart(current),
end: getDayEnd(current)
};
}
case 'lastWeek': {
const lastWeek = current.minus({ weeks: 1 });
const weekStart = getWeekStart(lastWeek);
const weekEnd = weekStart.plus({ days: 6 });
return {
start: weekStart,
end: getDayEnd(weekEnd)
};
}
case 'thisMonth': {
const dayStart = getDayStart(current);
const monthStart = dayStart.startOf('month').set({ hour: BUSINESS_DAY_START_HOUR });
return {
start: monthStart,
end: getDayEnd(current)
};
}
case 'lastMonth': {
const lastMonth = current.minus({ months: 1 });
const monthStart = lastMonth
.startOf('month')
.set({ hour: BUSINESS_DAY_START_HOUR, minute: 0, second: 0, millisecond: 0 });
const monthEnd = monthStart.plus({ months: 1 }).minus({ days: 1 });
return {
start: monthStart,
end: getDayEnd(monthEnd)
};
}
case 'last7days': {
const dayStart = getDayStart(current);
return {
start: dayStart.minus({ days: 6 }),
end: getDayEnd(current)
};
}
case 'last30days': {
const dayStart = getDayStart(current);
return {
start: dayStart.minus({ days: 29 }),
end: getDayEnd(current)
};
}
case 'last90days': {
const dayStart = getDayStart(current);
return {
start: dayStart.minus({ days: 89 }),
end: getDayEnd(current)
};
}
case 'previous7days': {
const currentPeriodStart = getDayStart(current).minus({ days: 6 });
const previousEndDay = currentPeriodStart.minus({ days: 1 });
const previousStartDay = previousEndDay.minus({ days: 6 });
return {
start: getDayStart(previousStartDay),
end: getDayEnd(previousEndDay)
};
}
case 'previous30days': {
const currentPeriodStart = getDayStart(current).minus({ days: 29 });
const previousEndDay = currentPeriodStart.minus({ days: 1 });
const previousStartDay = previousEndDay.minus({ days: 29 });
return {
start: getDayStart(previousStartDay),
end: getDayEnd(previousEndDay)
};
}
case 'previous90days': {
const currentPeriodStart = getDayStart(current).minus({ days: 89 });
const previousEndDay = currentPeriodStart.minus({ days: 1 });
const previousStartDay = previousEndDay.minus({ days: 89 });
return {
start: getDayStart(previousStartDay),
end: getDayEnd(previousEndDay)
};
}
default:
throw new Error(`Unknown time range: ${timeRange}`);
}
};
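// Worked example (hypothetical "now" of 2026-02-07 10:00 Eastern):
// 'last7days' spans 2026-02-01 01:00 through 2026-02-08 00:59:59.999, and
// 'previous7days' spans 2026-01-25 01:00 through 2026-02-01 00:59:59.999.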
const toDatabaseSqlString = (dt) => {
const normalized = ensureDateTime(dt);
if (!normalized || !normalized.isValid) {
throw new Error('Invalid datetime provided for SQL conversion');
}
const dbTime = normalized.setZone(DB_TIMEZONE, { keepLocalTime: true });
return dbTime.toFormat(DB_DATETIME_FORMAT);
};
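// Note: because of keepLocalTime, the Eastern wall-clock value is preserved and simply
// relabeled as UTC-05:00, so 2:30 PM Eastern (EST or EDT alike) is emitted as '... 14:30:00'.
// This round-trips cleanly only if the database column effectively stores Eastern local
// time, which this note assumes.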
const formatBusinessDate = (input) => {
const dt = ensureDateTime(input);
if (!dt || !dt.isValid) return '';
return dt.setZone(TIMEZONE).toFormat('LLL d, yyyy');
};
const getTimeRangeLabel = (timeRange) => {
const labels = {
today: 'Today',
yesterday: 'Yesterday',
twoDaysAgo: 'Two Days Ago',
thisWeek: 'This Week',
lastWeek: 'Last Week',
thisMonth: 'This Month',
lastMonth: 'Last Month',
last7days: 'Last 7 Days',
last30days: 'Last 30 Days',
last90days: 'Last 90 Days',
previous7days: 'Previous 7 Days',
previous30days: 'Previous 30 Days',
previous90days: 'Previous 90 Days'
};
return labels[timeRange] || timeRange;
};
const getTimeRangeConditions = (timeRange, startDate, endDate) => {
if (timeRange === 'custom' && startDate && endDate) {
const start = ensureDateTime(startDate);
const end = ensureDateTime(endDate);
if (!start || !start.isValid || !end || !end.isValid) {
throw new Error('Invalid custom date range provided');
}
return {
whereClause: 'date_placed >= ? AND date_placed <= ?',
params: [toDatabaseSqlString(start), toDatabaseSqlString(end)],
dateRange: {
start: start.toUTC().toISO(),
end: end.toUTC().toISO(),
label: `${formatBusinessDate(start)} - ${formatBusinessDate(end)}`
}
};
}
const normalizedRange = timeRange || 'today';
const range = getRangeForTimeRange(normalizedRange);
return {
whereClause: 'date_placed >= ? AND date_placed <= ?',
params: [toDatabaseSqlString(range.start), toDatabaseSqlString(range.end)],
dateRange: {
start: range.start.toUTC().toISO(),
end: range.end.toUTC().toISO(),
label: getTimeRangeLabel(normalizedRange)
}
};
};
const getBusinessDayBounds = (timeRange) => {
const range = getRangeForTimeRange(timeRange);
return {
start: range.start.toJSDate(),
end: range.end.toJSDate()
};
};
const parseBusinessDate = (mysqlDatetime) => {
if (!mysqlDatetime || mysqlDatetime === '0000-00-00 00:00:00') {
return null;
}
const dt = DateTime.fromSQL(mysqlDatetime, { zone: DB_TIMEZONE });
if (!dt.isValid) {
console.error('[timeUtils] Failed to parse MySQL datetime:', mysqlDatetime, dt.invalidExplanation);
return null;
}
return dt.toUTC().toJSDate();
};
const formatMySQLDate = (input) => {
if (!input) return null;
const dt = ensureDateTime(input, { zone: 'utc' });
if (!dt || !dt.isValid) return null;
return dt.setZone(DB_TIMEZONE).toFormat(DB_DATETIME_FORMAT);
};
module.exports = {
getBusinessDayBounds,
getTimeRangeConditions,
formatBusinessDate,
getTimeRangeLabel,
parseBusinessDate,
formatMySQLDate,
// Expose helpers for tests or advanced consumers
_internal: {
getDayStart,
getDayEnd,
getWeekStart,
getRangeForTimeRange,
BUSINESS_DAY_START_HOUR
}
};

View File

@@ -0,0 +1,21 @@
# Server Configuration
NODE_ENV=development
AIRCALL_PORT=3002
LOG_LEVEL=info
# Aircall API Credentials
AIRCALL_API_ID=your_aircall_api_id
AIRCALL_API_TOKEN=your_aircall_api_token
# Database Configuration
MONGODB_URI=mongodb://localhost:27017/dashboard
MONGODB_DB=dashboard
REDIS_URL=redis://localhost:6379
# Service Configuration
TIMEZONE=America/New_York
DAY_STARTS_AT=1 # Business day starts at 1 AM ET
# Optional Settings
REDIS_TTL=300 # Cache TTL in seconds (5 minutes)
COLLECTION_NAME=aircall_daily_data

View File

@@ -0,0 +1,55 @@
# Aircall Server
A standalone server for handling Aircall metrics and data processing.
## Setup
1. Install dependencies:
```bash
npm install
```
2. Set up environment variables:
```bash
cp .env.example .env
```
Then edit `.env` with your configuration.
Required environment variables:
- `AIRCALL_API_ID`: Your Aircall API ID
- `AIRCALL_API_TOKEN`: Your Aircall API Token
- `MONGODB_URI`: MongoDB connection string
- `REDIS_URL`: Redis connection string
- `AIRCALL_PORT`: Server port (default: 3002)
## Running the Server
### Development
```bash
npm run dev
```
### Production
Using PM2:
```bash
pm2 start ecosystem.config.js --env production
```
## API Endpoints
### GET /api/aircall/metrics/:timeRange
Get Aircall metrics for a specific time range.
Parameters:
- `timeRange`: One of ['today', 'yesterday', 'last7days', 'last30days', 'last90days']
### GET /api/aircall/health
Get server health status.
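A quick smoke test against a locally running instance (assuming the default port of 3002; adjust if `AIRCALL_PORT` is set):
```bash
curl http://localhost:3002/api/aircall/metrics/last7days
curl http://localhost:3002/api/aircall/health
```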
## Architecture
The server uses:
- Express.js for the API
- MongoDB for data storage
- Redis for caching
- Winston for logging

File diff suppressed because it is too large

View File

@@ -0,0 +1,23 @@
{
"name": "aircall-server",
"version": "1.0.0",
"description": "Aircall metrics server",
"type": "module",
"main": "server.js",
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js"
},
"dependencies": {
"axios": "^1.6.2",
"cors": "^2.8.5",
"dotenv": "^16.3.1",
"express": "^4.18.2",
"mongodb": "^6.3.0",
"redis": "^4.6.11",
"winston": "^3.11.0"
},
"devDependencies": {
"nodemon": "^3.0.2"
}
}

View File

@@ -0,0 +1,83 @@
import express from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import path from 'path';
import { fileURLToPath } from 'url';
import { createRoutes } from './src/routes/index.js';
import { aircallConfig } from './src/config/aircall.config.js';
import { connectMongoDB } from './src/utils/db.js';
import { createRedisClient } from './src/utils/redis.js';
import { createLogger } from './src/utils/logger.js';
// Get directory name in ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Load environment variables from the correct path
dotenv.config({ path: path.resolve(__dirname, '.env') });
// Validate required environment variables
const requiredEnvVars = ['AIRCALL_API_ID', 'AIRCALL_API_TOKEN', 'MONGODB_URI', 'REDIS_URL'];
const missingEnvVars = requiredEnvVars.filter(envVar => !process.env[envVar]);
if (missingEnvVars.length > 0) {
console.error('Missing required environment variables:', missingEnvVars);
process.exit(1);
}
const app = express();
const port = process.env.AIRCALL_PORT || 3002;
const logger = createLogger('aircall-server');
// Middleware
app.use(cors());
app.use(express.json());
// Connect to databases
let mongodb;
let redis;
async function initializeServer() {
try {
// Connect to MongoDB
mongodb = await connectMongoDB();
logger.info('Connected to MongoDB');
// Connect to Redis
redis = await createRedisClient();
logger.info('Connected to Redis');
// Initialize configs with database connections
const configs = {
aircall: {
...aircallConfig,
mongodb,
redis,
logger
}
};
// Initialize routes
const routes = createRoutes(configs, logger);
app.use('/api', routes);
// Error handling middleware
app.use((err, req, res, next) => {
logger.error('Server error:', err);
res.status(500).json({
error: 'Internal server error',
message: err.message
});
});
// Start server
app.listen(port, () => {
logger.info(`Aircall server listening on port ${port}`);
});
} catch (error) {
logger.error('Failed to initialize server:', error);
process.exit(1);
}
}
initializeServer();

View File

@@ -0,0 +1,15 @@
export const aircallConfig = {
serviceName: 'aircall',
apiId: process.env.AIRCALL_API_ID,
apiToken: process.env.AIRCALL_API_TOKEN,
timezone: 'America/New_York',
dayStartsAt: 1,
storeHistory: true,
collection: 'aircall_daily_data',
redisTTL: 300, // 5 minutes cache for current day
endpoints: {
metrics: {
ttl: 300
}
}
};

View File

@@ -0,0 +1,57 @@
import express from 'express';
import { AircallService } from '../services/aircall/AircallService.js';
export const createAircallRoutes = (config, logger) => {
const router = express.Router();
const aircallService = new AircallService(config);
router.get('/metrics/:timeRange?', async (req, res) => {
try {
const { timeRange = 'today' } = req.params;
const allowedRanges = ['today', 'yesterday', 'last7days', 'last30days', 'last90days'];
if (!allowedRanges.includes(timeRange)) {
return res.status(400).json({
error: 'Invalid time range',
allowedRanges
});
}
const metrics = await aircallService.getMetrics(timeRange);
res.json({
...metrics,
_meta: {
timeRange,
generatedAt: new Date().toISOString(),
dataPoints: metrics.daily_data?.length || 0
}
});
} catch (error) {
logger.error('Error fetching Aircall metrics:', error);
res.status(500).json({
error: 'Failed to fetch Aircall metrics',
message: error.message
});
}
});
// Health check endpoint
router.get('/health', (req, res) => {
const mongoConnected = !!aircallService.mongodb?.db;
const redisConnected = !!aircallService.redis?.isOpen;
const health = {
status: mongoConnected && redisConnected ? 'ok' : 'degraded',
service: 'aircall',
timestamp: new Date().toISOString(),
connections: {
mongodb: mongoConnected,
redis: redisConnected
}
};
res.json(health);
});
return router;
};

View File

@@ -0,0 +1,32 @@
import express from 'express';
import { createAircallRoutes } from './aircall.routes.js';
export const createRoutes = (configs, logger) => {
const router = express.Router();
// Mount Aircall routes
router.use('/aircall', createAircallRoutes(configs.aircall, logger));
// Health check endpoint
router.get('/health', (req, res) => {
const services = req.services || {};
res.status(200).json({
status: 'ok',
timestamp: new Date(),
services: {
redis: services.redis?.isReady || false,
mongodb: services.mongo?.readyState === 1 || false
}
});
});
// Catch-all 404 handler
router.use('*', (req, res) => {
res.status(404).json({
error: 'Not Found',
message: `Route ${req.originalUrl} not found`
});
});
return router;
};

View File

@@ -0,0 +1,298 @@
import { DataManager } from "../base/DataManager.js";
export class AircallDataManager extends DataManager {
constructor(mongodb, redis, timeManager) {
const options = {
collection: "aircall_daily_data",
redisTTL: 300 // 5 minutes cache
};
super(mongodb, redis, timeManager, options);
this.options = options;
}
ensureDate(d) {
if (d instanceof Date) return d;
if (typeof d === 'string') return new Date(d);
if (typeof d === 'number') return new Date(d);
console.error('Invalid date value:', d);
return new Date(); // fallback to current date
}
async storeHistoricalPeriod(start, end, calls) {
if (!this.mongodb) return;
try {
if (!Array.isArray(calls)) {
console.error("Invalid calls data:", calls);
return;
}
// Group calls by true day boundaries using TimeManager
const dailyCallsMap = new Map();
calls.forEach((call) => {
try {
const timestamp = call.started_at * 1000; // Convert to milliseconds
const callDate = this.ensureDate(timestamp);
const dayBounds = this.timeManager.getDayBounds(callDate);
const dayKey = dayBounds.start.toISOString();
if (!dailyCallsMap.has(dayKey)) {
dailyCallsMap.set(dayKey, {
date: dayBounds.start,
calls: [],
});
}
dailyCallsMap.get(dayKey).calls.push(call);
} catch (err) {
console.error('Error processing call:', err, call);
}
});
// Iterate over each day in the period using day boundaries
const dates = [];
let currentDate = this.ensureDate(start);
const endDate = this.ensureDate(end);
while (currentDate < endDate) {
const dayBounds = this.timeManager.getDayBounds(currentDate);
dates.push(dayBounds.start);
currentDate.setUTCDate(currentDate.getUTCDate() + 1);
}
for (const date of dates) {
try {
const dateKey = date.toISOString();
const dayData = dailyCallsMap.get(dateKey);
const dayCalls = dayData ? dayData.calls : [];
// Process calls for this day using the same processing logic
const metrics = this.processCallData(dayCalls);
// Insert a daily_data record for this day
metrics.daily_data = [
{
date: date.toISOString().split("T")[0],
inbound: metrics.by_direction.inbound,
outbound: metrics.by_direction.outbound,
},
];
// Store this day's processed data as historical
await this.storeHistoricalDay(date, metrics);
} catch (err) {
console.error('Error processing date:', err, date);
}
}
} catch (error) {
console.error("Error storing historical period:", error, error.stack);
throw error;
}
}
processCallData(calls) {
// If calls is already processed (has total, by_direction, etc.), return it
if (calls && calls.total !== undefined) {
console.log('Data already processed:', {
total: calls.total,
by_direction: calls.by_direction
});
// Return a clean copy of the processed data
return {
total: calls.total,
by_direction: calls.by_direction,
by_status: calls.by_status,
by_missed_reason: calls.by_missed_reason,
by_hour: calls.by_hour,
by_users: calls.by_users,
daily_data: calls.daily_data,
duration_distribution: calls.duration_distribution,
average_duration: calls.average_duration
};
}
console.log('Processing raw calls:', {
count: calls.length,
sample: calls.length > 0 ? {
id: calls[0].id,
direction: calls[0].direction,
status: calls[0].status
} : null
});
// Process raw calls
const metrics = {
total: calls.length,
by_direction: { inbound: 0, outbound: 0 },
by_status: { answered: 0, missed: 0 },
by_missed_reason: {},
by_hour: Array(24).fill(0),
by_users: {},
daily_data: [],
duration_distribution: [
{ range: "0-1m", count: 0 },
{ range: "1-5m", count: 0 },
{ range: "5-15m", count: 0 },
{ range: "15-30m", count: 0 },
{ range: "30m+", count: 0 },
],
average_duration: 0,
total_duration: 0,
};
// Group calls by date for daily data
const dailyCallsMap = new Map();
calls.forEach((call) => {
try {
// Direction metrics
metrics.by_direction[call.direction]++;
// Get call date and hour using TimeManager
const timestamp = call.started_at * 1000; // Convert to milliseconds
const callDate = this.ensureDate(timestamp);
const dayBounds = this.timeManager.getDayBounds(callDate);
const dayKey = dayBounds.start.toISOString().split("T")[0];
const hour = callDate.getHours();
metrics.by_hour[hour]++;
// Status and duration metrics
if (call.answered_at) {
metrics.by_status.answered++;
const duration = call.ended_at - call.answered_at;
metrics.total_duration += duration;
// Duration distribution
if (duration <= 60) {
metrics.duration_distribution[0].count++;
} else if (duration <= 300) {
metrics.duration_distribution[1].count++;
} else if (duration <= 900) {
metrics.duration_distribution[2].count++;
} else if (duration <= 1800) {
metrics.duration_distribution[3].count++;
} else {
metrics.duration_distribution[4].count++;
}
// Track user performance
if (call.user) {
const userId = call.user.id;
if (!metrics.by_users[userId]) {
metrics.by_users[userId] = {
id: userId,
name: call.user.name,
total: 0,
answered: 0,
missed: 0,
total_duration: 0,
average_duration: 0,
};
}
metrics.by_users[userId].total++;
metrics.by_users[userId].answered++;
metrics.by_users[userId].total_duration += duration;
}
} else {
metrics.by_status.missed++;
if (call.missed_call_reason) {
metrics.by_missed_reason[call.missed_call_reason] =
(metrics.by_missed_reason[call.missed_call_reason] || 0) + 1;
}
// Track missed calls by user
if (call.user) {
const userId = call.user.id;
if (!metrics.by_users[userId]) {
metrics.by_users[userId] = {
id: userId,
name: call.user.name,
total: 0,
answered: 0,
missed: 0,
total_duration: 0,
average_duration: 0,
};
}
metrics.by_users[userId].total++;
metrics.by_users[userId].missed++;
}
}
// Group by date for daily data
if (!dailyCallsMap.has(dayKey)) {
dailyCallsMap.set(dayKey, { date: dayKey, inbound: 0, outbound: 0 });
}
dailyCallsMap.get(dayKey)[call.direction]++;
} catch (err) {
console.error('Error processing call:', err, call);
}
});
// Calculate average durations for users
Object.values(metrics.by_users).forEach((user) => {
if (user.answered > 0) {
user.average_duration = Math.round(user.total_duration / user.answered);
}
});
// Calculate global average duration
if (metrics.by_status.answered > 0) {
metrics.average_duration = Math.round(
metrics.total_duration / metrics.by_status.answered
);
}
// Convert daily data map to sorted array
metrics.daily_data = Array.from(dailyCallsMap.values()).sort((a, b) =>
a.date.localeCompare(b.date)
);
delete metrics.total_duration;
console.log('Processed metrics:', {
total: metrics.total,
by_direction: metrics.by_direction,
by_status: metrics.by_status,
daily_data_count: metrics.daily_data.length
});
return metrics;
}
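// Worked example (hypothetical call): an inbound call answered for 4 minutes increments
// total, by_direction.inbound, by_status.answered, the "1-5m" duration bucket and the
// hour it started in, plus (if a user is attached) that user's total, answered and
// total_duration counters.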
async storeHistoricalDay(date, data) {
if (!this.mongodb) return;
try {
const collection = this.mongodb.collection(this.options.collection);
const dayBounds = this.timeManager.getDayBounds(this.ensureDate(date));
// Ensure consistent data structure with metrics nested in data field
const document = {
date: dayBounds.start,
data: {
total: data.total,
by_direction: data.by_direction,
by_status: data.by_status,
by_missed_reason: data.by_missed_reason,
by_hour: data.by_hour,
by_users: data.by_users,
daily_data: data.daily_data,
duration_distribution: data.duration_distribution,
average_duration: data.average_duration
},
updatedAt: new Date()
};
await collection.updateOne(
{ date: dayBounds.start },
{ $set: document },
{ upsert: true }
);
} catch (error) {
console.error("Error storing historical day:", error);
throw error;
}
}
}

View File

@@ -0,0 +1,138 @@
import axios from "axios";
import { Buffer } from "buffer";
import { BaseService } from "../base/BaseService.js";
import { AircallDataManager } from "./AircallDataManager.js";
export class AircallService extends BaseService {
constructor(config) {
super(config);
this.baseUrl = "https://api.aircall.io/v1";
console.log('Initializing Aircall service with credentials:', {
apiId: config.apiId ? 'present' : 'missing',
apiToken: config.apiToken ? 'present' : 'missing'
});
this.auth = Buffer.from(`${config.apiId}:${config.apiToken}`).toString(
"base64"
);
this.dataManager = new AircallDataManager(
this.mongodb,
this.redis,
this.timeManager
);
if (!config.apiId || !config.apiToken) {
throw new Error("Aircall API credentials are required");
}
}
async getMetrics(timeRange) {
const dateRange = await this.timeManager.getDateRange(timeRange);
console.log('Fetching metrics for date range:', {
start: dateRange.start.toISOString(),
end: dateRange.end.toISOString()
});
return this.dataManager.getData(dateRange, async (range) => {
const calls = await this.fetchAllCalls(range.start, range.end);
console.log('Fetched calls:', {
count: calls.length,
sample: calls.length > 0 ? calls[0] : null
});
return calls;
});
}
async fetchAllCalls(start, end) {
try {
let allCalls = [];
let currentPage = 1;
let hasMore = true;
let totalPages = null;
while (hasMore) {
const response = await this.makeRequest("/calls", {
from: Math.floor(start.getTime() / 1000),
to: Math.floor(end.getTime() / 1000),
order: "asc",
page: currentPage,
per_page: 50,
});
console.log('API Response:', {
page: currentPage,
totalPages: response.meta.total_pages,
callsCount: response.calls?.length,
params: {
from: Math.floor(start.getTime() / 1000),
to: Math.floor(end.getTime() / 1000)
}
});
if (!response.calls) {
throw new Error("Invalid API response format");
}
allCalls = [...allCalls, ...response.calls];
hasMore = response.meta.next_page_link !== null;
totalPages = response.meta.total_pages;
currentPage++;
if (hasMore) {
// Brief pause between pages (1 ms, effectively just yielding the event loop);
// increase this if the API starts returning 429s
await new Promise((resolve) => setTimeout(resolve, 1));
}
}
return allCalls;
} catch (error) {
console.error("Error fetching all calls:", error);
throw error;
}
}
async makeRequest(endpoint, params = {}) {
try {
console.log('Making API request:', {
endpoint,
params
});
const response = await axios.get(`${this.baseUrl}${endpoint}`, {
headers: {
Authorization: `Basic ${this.auth}`,
"Content-Type": "application/json",
},
params,
});
return response.data;
} catch (error) {
if (error.response?.status === 429) {
console.log("Rate limit reached, waiting before retry...");
await new Promise((resolve) => setTimeout(resolve, 5000));
return this.makeRequest(endpoint, params);
}
this.handleApiError(error, `Error making request to ${endpoint}`);
}
}
validateApiResponse(response, context = "") {
if (!response || typeof response !== "object") {
throw new Error(`${context}: Invalid API response format`);
}
if (response.error) {
throw new Error(`${context}: ${response.error}`);
}
return true;
}
getPaginationInfo(meta) {
return {
currentPage: meta.current_page,
totalPages: meta.total_pages,
hasNextPage: meta.next_page_link !== null,
totalRecords: meta.total,
};
}
}

View File

@@ -0,0 +1,32 @@
import { createTimeManager } from '../../utils/timeUtils.js';
export class BaseService {
constructor(config) {
this.config = config;
this.mongodb = config.mongodb;
this.redis = config.redis;
this.logger = config.logger;
this.timeManager = createTimeManager(config.timezone, config.dayStartsAt);
}
handleApiError(error, context = '') {
this.logger.error(`API Error ${context}:`, {
message: error.message,
status: error.response?.status,
data: error.response?.data,
});
if (error.response) {
const status = error.response.status;
const message = error.response.data?.message || error.response.statusText;
if (status === 429) {
throw new Error('API rate limit exceeded. Please try again later.');
}
throw new Error(`API error (${status}): ${message}`);
}
throw error;
}
}

View File

@@ -0,0 +1,320 @@
export class DataManager {
constructor(mongodb, redis, timeManager, options) {
this.mongodb = mongodb;
this.redis = redis;
this.timeManager = timeManager;
this.options = options || {};
}
ensureDate(d) {
if (d instanceof Date) return d;
if (typeof d === 'string') return new Date(d);
if (typeof d === 'number') return new Date(d);
if (d && d.date) return new Date(d.date); // Handle MongoDB records
console.error('Invalid date value:', d);
return new Date(); // fallback to current date
}
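// Read cached days from MongoDB, backfill only the missing day ranges via the
// supplied fetchFn, then re-read the range and merge it into a single rollup.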
async getData(dateRange, fetchFn) {
try {
// Get historical data from MongoDB
const historicalData = await this.getHistoricalDays(dateRange.start, dateRange.end);
// Find any missing date ranges
const missingRanges = this.findMissingDateRanges(dateRange.start, dateRange.end, historicalData);
// Fetch missing data
for (const range of missingRanges) {
const data = await fetchFn(range);
await this.storeHistoricalPeriod(range.start, range.end, data);
}
// Get updated historical data
const updatedData = await this.getHistoricalDays(dateRange.start, dateRange.end);
// Handle both nested and flat data structures
if (updatedData && updatedData.length > 0) {
// Process each record and combine them
const processedData = updatedData.map(record => {
if (record.data) {
return record.data;
}
if (record.total !== undefined) {
return {
total: record.total,
by_direction: record.by_direction,
by_status: record.by_status,
by_missed_reason: record.by_missed_reason,
by_hour: record.by_hour,
by_users: record.by_users,
daily_data: record.daily_data,
duration_distribution: record.duration_distribution,
average_duration: record.average_duration
};
}
return null;
}).filter(Boolean);
// Combine the data
if (processedData.length > 0) {
return this.combineMetrics(processedData);
}
}
// Otherwise process as raw call data
return this.processCallData(updatedData);
} catch (error) {
console.error('Error in getData:', error);
throw error;
}
}
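// Walks the requested range one business day at a time and returns start/end
// bounds for every day that has no stored record yet.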
findMissingDateRanges(start, end, existingDates) {
const missingRanges = [];
const existingDatesSet = new Set(
existingDates.map((d) => {
// Handle both nested and flat data structures
const date = d.date ? d.date : d;
return this.ensureDate(date).toISOString().split("T")[0];
})
);
let current = new Date(start);
const endDate = new Date(end);
while (current < endDate) {
const dayBounds = this.timeManager.getDayBounds(current);
const dayKey = dayBounds.start.toISOString().split("T")[0];
if (!existingDatesSet.has(dayKey)) {
// Found a missing day
const missingStart = new Date(dayBounds.start);
const missingEnd = new Date(dayBounds.end);
missingRanges.push({
start: missingStart,
end: missingEnd,
});
}
// Move to the next day using timeManager to ensure proper business day boundaries
current = new Date(dayBounds.end.getTime() + 1);
}
return missingRanges;
}
async getCurrentDay(fetchFn) {
const now = new Date();
const todayBounds = this.timeManager.getDayBounds(now);
const todayKey = this.timeManager.formatDate(todayBounds.start);
const cacheKey = `${this.options.collection}:current_day:${todayKey}`;
try {
// Check cache first
if (this.redis?.isOpen) {
const cached = await this.redis.get(cacheKey);
if (cached) {
const parsedCache = JSON.parse(cached);
if (parsedCache.total !== undefined) {
// Use timeManager to check if the cached data is for today
const cachedDate = new Date(parsedCache.daily_data?.[0]?.date);
const isToday = this.timeManager.isToday(cachedDate);
if (isToday) {
return parsedCache;
}
}
}
}
// Get safe end time that's never in the future
const safeEnd = this.timeManager.getCurrentBusinessDayEnd();
// Fetch and process current day data with safe end time
const data = await fetchFn({
start: todayBounds.start,
end: safeEnd
});
if (!data) {
return null;
}
// Cache the data with a shorter TTL for today's data
if (this.redis?.isOpen) {
const ttl = Math.min(
this.options.redisTTL,
60 * 5 // 5 minutes max for today's data
);
await this.redis.set(cacheKey, JSON.stringify(data), {
EX: ttl,
});
}
return data;
} catch (error) {
console.error('Error in getCurrentDay:', error);
throw error;
}
}
getDayCount(start, end) {
// Calculate full days between dates using timeManager
const startDay = this.timeManager.getDayBounds(start);
const endDay = this.timeManager.getDayBounds(end);
return Math.ceil((endDay.end - startDay.start) / (24 * 60 * 60 * 1000));
}
async fetchMissingDays(start, end, existingData, fetchFn) {
const existingDates = new Set(
existingData.map((d) => this.timeManager.formatDate(d.date))
);
const missingData = [];
let currentDate = new Date(start);
while (currentDate < end) {
const dayBounds = this.timeManager.getDayBounds(currentDate);
const dateString = this.timeManager.formatDate(dayBounds.start);
if (!existingDates.has(dateString)) {
const data = await fetchFn({
start: dayBounds.start,
end: dayBounds.end,
});
await this.storeHistoricalDay(dayBounds.start, data);
missingData.push(data);
}
// Move to next day using timeManager to ensure proper business day boundaries
currentDate = new Date(dayBounds.end.getTime() + 1);
}
return missingData;
}
async getHistoricalDays(start, end) {
try {
if (!this.mongodb) return [];
const collection = this.mongodb.collection(this.options.collection);
const startDay = this.timeManager.getDayBounds(start);
const endDay = this.timeManager.getDayBounds(end);
const records = await collection
.find({
date: {
$gte: startDay.start,
$lt: endDay.start,
},
})
.sort({ date: 1 })
.toArray();
return records;
} catch (error) {
console.error('Error getting historical days:', error);
return [];
}
}
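// Merges per-day metric objects into one rollup; average_duration is rebuilt as
// a weighted average over answered calls instead of averaging the daily averages.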
combineMetrics(metricsArray) {
if (!metricsArray || metricsArray.length === 0) return null;
if (metricsArray.length === 1) return metricsArray[0];
const combined = {
total: 0,
by_direction: { inbound: 0, outbound: 0 },
by_status: { answered: 0, missed: 0 },
by_missed_reason: {},
by_hour: Array(24).fill(0),
by_users: {},
daily_data: [],
duration_distribution: [
{ range: '0-1m', count: 0 },
{ range: '1-5m', count: 0 },
{ range: '5-15m', count: 0 },
{ range: '15-30m', count: 0 },
{ range: '30m+', count: 0 }
],
average_duration: 0
};
let totalAnswered = 0;
let totalDuration = 0;
metricsArray.forEach(metrics => {
// Sum basic metrics
combined.total += metrics.total;
combined.by_direction.inbound += metrics.by_direction.inbound;
combined.by_direction.outbound += metrics.by_direction.outbound;
combined.by_status.answered += metrics.by_status.answered;
combined.by_status.missed += metrics.by_status.missed;
// Combine missed reasons
Object.entries(metrics.by_missed_reason).forEach(([reason, count]) => {
combined.by_missed_reason[reason] = (combined.by_missed_reason[reason] || 0) + count;
});
// Sum hourly data
metrics.by_hour.forEach((count, hour) => {
combined.by_hour[hour] += count;
});
// Combine user data
Object.entries(metrics.by_users).forEach(([userId, userData]) => {
if (!combined.by_users[userId]) {
combined.by_users[userId] = {
id: userData.id,
name: userData.name,
total: 0,
answered: 0,
missed: 0,
total_duration: 0,
average_duration: 0
};
}
combined.by_users[userId].total += userData.total;
combined.by_users[userId].answered += userData.answered;
combined.by_users[userId].missed += userData.missed;
combined.by_users[userId].total_duration += userData.total_duration || 0;
});
// Combine duration distribution
metrics.duration_distribution.forEach((dist, index) => {
combined.duration_distribution[index].count += dist.count;
});
// Accumulate for average duration calculation
if (metrics.average_duration && metrics.by_status.answered) {
totalDuration += metrics.average_duration * metrics.by_status.answered;
totalAnswered += metrics.by_status.answered;
}
// Merge daily data
if (metrics.daily_data) {
combined.daily_data.push(...metrics.daily_data);
}
});
// Calculate final average duration
if (totalAnswered > 0) {
combined.average_duration = Math.round(totalDuration / totalAnswered);
}
// Calculate user averages
Object.values(combined.by_users).forEach(user => {
if (user.answered > 0) {
user.average_duration = Math.round(user.total_duration / user.answered);
}
});
// Sort and deduplicate daily data
combined.daily_data = Array.from(
new Map(combined.daily_data.map(item => [item.date, item])).values()
).sort((a, b) => a.date.localeCompare(b.date));
return combined;
}
}

View File

@@ -0,0 +1,15 @@
import { MongoClient } from 'mongodb';
const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/dashboard';
const DB_NAME = process.env.MONGODB_DB || 'dashboard';
export async function connectMongoDB() {
try {
const client = await MongoClient.connect(MONGODB_URI);
console.log('Connected to MongoDB');
return client.db(DB_NAME);
} catch (error) {
console.error('MongoDB connection error:', error);
throw error;
}
}
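// Usage sketch (collection name is illustrative, not part of this module):
// const db = await connectMongoDB();
// const docs = await db.collection('example').find({}).limit(10).toArray();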

View File

@@ -0,0 +1,37 @@
import winston from 'winston';
import path from 'path';
import { fileURLToPath } from 'url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
export function createLogger(service) {
// Create logs directory relative to the project root (two levels up from utils)
const logsDir = path.join(__dirname, '../../logs');
return winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
defaultMeta: { service },
transports: [
// Write all logs to console
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}),
// Write all logs to service-specific files
new winston.transports.File({
filename: path.join(logsDir, `${service}-error.log`),
level: 'error'
}),
new winston.transports.File({
filename: path.join(logsDir, `${service}-combined.log`)
})
]
});
}
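// Usage sketch (service name and metadata are illustrative):
// const logger = createLogger('calls');
// logger.info('service started', { port: 3005 });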

View File

@@ -0,0 +1,23 @@
import { createClient } from 'redis';
const REDIS_URL = process.env.REDIS_URL || 'redis://localhost:6379';
export async function createRedisClient() {
try {
const client = createClient({
url: REDIS_URL
});
// Attach the error handler before connecting so connection failures are logged
client.on('error', (err) => {
console.error('Redis error:', err);
});
await client.connect();
console.log('Connected to Redis');
return client;
} catch (error) {
console.error('Redis connection error:', error);
throw error;
}
}

View File

@@ -0,0 +1,262 @@
class TimeManager {
static ALLOWED_RANGES = ['today', 'yesterday', 'last2days', 'last7days', 'last30days', 'last90days',
'previous7days', 'previous30days', 'previous90days'];
constructor(timezone = 'America/New_York', dayStartsAt = 1) {
this.timezone = timezone;
this.dayStartsAt = dayStartsAt;
}
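// Business-day bounds: a "day" starts at dayStartsAt local time (ET), expressed
// here as UTC hour dayStartsAt + 5 (fixed EST offset; DST is not accounted for).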
getDayBounds(date) {
try {
const now = new Date();
const targetDate = new Date(date);
// For today
if (
targetDate.getUTCFullYear() === now.getUTCFullYear() &&
targetDate.getUTCMonth() === now.getUTCMonth() &&
targetDate.getUTCDate() === now.getUTCDate()
) {
// If current time is before day start (1 AM ET / 6 AM UTC),
// use previous day's start until now
const todayStart = new Date(Date.UTC(
now.getUTCFullYear(),
now.getUTCMonth(),
now.getUTCDate(),
this.dayStartsAt + 5,
0,
0,
0
));
if (now < todayStart) {
const yesterdayStart = new Date(todayStart);
yesterdayStart.setUTCDate(yesterdayStart.getUTCDate() - 1);
return { start: yesterdayStart, end: now };
}
return { start: todayStart, end: now };
}
// For past days, use full 24-hour period
const normalizedDate = new Date(Date.UTC(
targetDate.getUTCFullYear(),
targetDate.getUTCMonth(),
targetDate.getUTCDate()
));
const dayStart = new Date(normalizedDate);
dayStart.setUTCHours(this.dayStartsAt + 5, 0, 0, 0);
const dayEnd = new Date(dayStart);
dayEnd.setUTCDate(dayEnd.getUTCDate() + 1);
return { start: dayStart, end: dayEnd };
} catch (error) {
console.error('Error in getDayBounds:', error);
throw new Error(`Failed to calculate day bounds: ${error.message}`);
}
}
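// Named ranges: last7/30/90days run from the start of (N-1) days ago through "now";
// previous7/30/90days cover the N full business days immediately before that window.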
getDateRange(period) {
try {
const now = new Date();
const todayBounds = this.getDayBounds(now);
const end = new Date();
switch (period) {
case 'today':
return {
start: todayBounds.start,
end
};
case 'yesterday': {
const yesterday = new Date(now);
yesterday.setDate(yesterday.getDate() - 1);
return this.getDayBounds(yesterday);
}
case 'last2days': {
const twoDaysAgo = new Date(now);
twoDaysAgo.setDate(twoDaysAgo.getDate() - 2);
return this.getDayBounds(twoDaysAgo);
}
case 'last7days': {
const start = new Date(now);
start.setDate(start.getDate() - 6);
return {
start: this.getDayBounds(start).start,
end
};
}
case 'previous7days': {
const end = new Date(now);
end.setDate(end.getDate() - 7);
const start = new Date(end);
start.setDate(start.getDate() - 6);
return {
start: this.getDayBounds(start).start,
end: this.getDayBounds(end).end
};
}
case 'last30days': {
const start = new Date(now);
start.setDate(start.getDate() - 29);
return {
start: this.getDayBounds(start).start,
end
};
}
case 'previous30days': {
const end = new Date(now);
end.setDate(end.getDate() - 30);
const start = new Date(end);
start.setDate(start.getDate() - 29);
return {
start: this.getDayBounds(start).start,
end: this.getDayBounds(end).end
};
}
case 'last90days': {
const start = new Date(now);
start.setDate(start.getDate() - 89);
return {
start: this.getDayBounds(start).start,
end
};
}
case 'previous90days': {
const end = new Date(now);
end.setDate(end.getDate() - 90);
const start = new Date(end);
start.setDate(start.getDate() - 89);
return {
start: this.getDayBounds(start).start,
end: this.getDayBounds(end).end
};
}
default:
throw new Error(`Unsupported time period: ${period}`);
}
} catch (error) {
console.error('Error in getDateRange:', error);
throw error;
}
}
getPreviousPeriod(period) {
try {
const now = new Date();
switch (period) {
case 'today':
return 'yesterday';
case 'yesterday': {
// Return bounds for 2 days ago
const twoDaysAgo = new Date(now);
twoDaysAgo.setDate(twoDaysAgo.getDate() - 2);
return this.getDayBounds(twoDaysAgo);
}
case 'last7days': {
// Return bounds for previous 7 days
const end = new Date(now);
end.setDate(end.getDate() - 7);
const start = new Date(end);
start.setDate(start.getDate() - 6); // 7-day window, matching previous7days in getDateRange
return {
start: this.getDayBounds(start).start,
end: this.getDayBounds(end).end
};
}
case 'last30days': {
const end = new Date(now);
end.setDate(end.getDate() - 30);
const start = new Date(end);
start.setDate(start.getDate() - 29); // 30-day window, matching previous30days in getDateRange
return {
start: this.getDayBounds(start).start,
end: this.getDayBounds(end).end
};
}
case 'last90days': {
const end = new Date(now);
end.setDate(end.getDate() - 90);
const start = new Date(end);
start.setDate(start.getDate() - 89); // 90-day window, matching previous90days in getDateRange
return {
start: this.getDayBounds(start).start,
end: this.getDayBounds(end).end
};
}
default:
throw new Error(`Unsupported time period: ${period}`);
}
} catch (error) {
console.error('Error in getPreviousPeriod:', error);
throw error;
}
}
getCurrentBusinessDayEnd() {
try {
const now = new Date();
const todayBounds = this.getDayBounds(now);
// If current time is before day start (1 AM ET / 6 AM UTC),
// then we're still in yesterday's business day
const todayStart = new Date(Date.UTC(
now.getUTCFullYear(),
now.getUTCMonth(),
now.getUTCDate(),
this.dayStartsAt + 5,
0,
0,
0
));
if (now < todayStart) {
const yesterdayBounds = this.getDayBounds(new Date(now.getTime() - 24 * 60 * 60 * 1000));
return yesterdayBounds.end;
}
// Return the earlier of current time or today's end
return now < todayBounds.end ? now : todayBounds.end;
} catch (error) {
console.error('Error in getCurrentBusinessDayEnd:', error);
return new Date();
}
}
isValidTimeRange(timeRange) {
return TimeManager.ALLOWED_RANGES.includes(timeRange);
}
isToday(date) {
const now = new Date();
const targetDate = new Date(date);
return (
targetDate.getUTCFullYear() === now.getUTCFullYear() &&
targetDate.getUTCMonth() === now.getUTCMonth() &&
targetDate.getUTCDate() === now.getUTCDate()
);
}
formatDate(date) {
try {
return date.toLocaleString('en-US', {
timeZone: this.timezone,
year: 'numeric',
month: '2-digit',
day: '2-digit',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
} catch (error) {
console.error('Error formatting date:', error);
return date.toISOString();
}
}
}
export const createTimeManager = (timezone, dayStartsAt) => new TimeManager(timezone, dayStartsAt);
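// Usage sketch (values match this module's defaults and allowed ranges):
// const tm = createTimeManager('America/New_York', 1);
// const { start, end } = tm.getDateRange('last7days');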

View File

@@ -0,0 +1,10 @@
# Server Configuration
NODE_ENV=development
PORT=3003
# Authentication
JWT_SECRET=your-secret-key-here
DASHBOARD_PASSWORD=your-dashboard-password-here
# Cookie Settings
COOKIE_DOMAIN=localhost # In production: .kent.pw

View File

@@ -0,0 +1,203 @@
// auth-server/index.js
const path = require('path');
require('dotenv').config({ path: path.join(__dirname, '.env') });
const express = require('express');
const cors = require('cors');
const cookieParser = require('cookie-parser');
const jwt = require('jsonwebtoken');
// Debug environment variables
console.log('Environment variables loaded from:', path.join(__dirname, '.env'));
console.log('Current directory:', __dirname);
console.log('Available env vars:', Object.keys(process.env));
const app = express();
const PORT = process.env.PORT || 3003;
const JWT_SECRET = process.env.JWT_SECRET;
const DASHBOARD_PASSWORD = process.env.DASHBOARD_PASSWORD;
// Validate required environment variables
if (!JWT_SECRET || !DASHBOARD_PASSWORD) {
console.error('Missing required environment variables:');
if (!JWT_SECRET) console.error('- JWT_SECRET');
if (!DASHBOARD_PASSWORD) console.error('- DASHBOARD_PASSWORD');
process.exit(1);
}
// Middleware
app.use(express.json());
app.use(cookieParser());
// Configure CORS
const corsOptions = {
origin: function(origin, callback) {
const allowedOrigins = [
'http://localhost:3000',
'https://tools.acherryontop.com'
];
console.log('CORS check for origin:', origin);
// Allow local network IPs (192.168.1.xxx)
if (origin && origin.match(/^http:\/\/192\.168\.1\.\d{1,3}(:\d+)?$/)) {
callback(null, true);
return;
}
// Check if origin is in allowed list
if (!origin || allowedOrigins.indexOf(origin) !== -1) {
callback(null, true);
} else {
callback(new Error('Not allowed by CORS'));
}
},
credentials: true,
methods: ['GET', 'POST', 'OPTIONS'],
allowedHeaders: ['Content-Type', 'Authorization', 'Cookie', 'Accept'],
exposedHeaders: ['Set-Cookie']
};
app.use(cors(corsOptions));
app.options('*', cors(corsOptions));
// Debug logging
app.use((req, res, next) => {
console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
console.log('Headers:', req.headers);
console.log('Cookies:', req.cookies);
next();
});
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'ok',
timestamp: new Date().toISOString()
});
});
// Auth endpoints
app.post('/login', (req, res) => {
console.log('Login attempt received');
console.log('Request body:', req.body);
console.log('Origin:', req.headers.origin);
const { password } = req.body;
if (!password) {
console.log('No password provided');
return res.status(400).json({
success: false,
message: 'Password is required'
});
}
console.log('Comparing passwords...');
console.log('Provided password length:', password.length);
console.log('Expected password length:', DASHBOARD_PASSWORD.length);
if (password === DASHBOARD_PASSWORD) {
console.log('Password matched');
const token = jwt.sign({ authorized: true }, JWT_SECRET, {
expiresIn: '24h'
});
// Determine if request is from local network
const isLocalNetwork = req.headers.origin?.includes('192.168.1.') || req.headers.origin?.includes('localhost');
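// LAN/localhost logins get a lax, non-secure cookie; production logins get a
// secure SameSite=None cookie scoped to .kent.pw so subdomains share the session.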
const cookieOptions = {
httpOnly: true,
secure: !isLocalNetwork, // Only use secure for non-local requests
sameSite: isLocalNetwork ? 'lax' : 'none',
path: '/',
maxAge: 24 * 60 * 60 * 1000 // 24 hours
};
// Only set domain for production
if (!isLocalNetwork) {
cookieOptions.domain = '.kent.pw';
}
console.log('Setting cookie with options:', cookieOptions);
res.cookie('token', token, cookieOptions);
console.log('Response headers:', res.getHeaders());
res.json({
success: true,
debug: {
origin: req.headers.origin,
cookieOptions
}
});
} else {
console.log('Password mismatch');
res.status(401).json({
success: false,
message: 'Invalid password'
});
}
});
// Modify the check endpoint to log more info
app.get('/check', (req, res) => {
console.log('Auth check received');
console.log('All cookies:', req.cookies);
console.log('Headers:', req.headers);
const token = req.cookies.token;
if (!token) {
console.log('No token found in cookies');
return res.status(401).json({
authenticated: false,
error: 'no_token'
});
}
try {
const decoded = jwt.verify(token, JWT_SECRET);
console.log('Token verified successfully:', decoded);
res.json({ authenticated: true });
} catch (err) {
console.log('Token verification failed:', err.message);
res.status(401).json({
authenticated: false,
error: 'invalid_token',
message: err.message
});
}
});
app.post('/logout', (req, res) => {
const isLocalNetwork = req.headers.origin?.includes('192.168.1.') || req.headers.origin?.includes('localhost');
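// Cookie attributes here must mirror the ones set at login, otherwise the browser will not clear the cookie.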
const cookieOptions = {
httpOnly: true,
secure: !isLocalNetwork,
sameSite: isLocalNetwork ? 'lax' : 'none',
path: '/',
domain: isLocalNetwork ? undefined : '.kent.pw'
};
console.log('Clearing cookie with options:', cookieOptions);
res.clearCookie('token', cookieOptions);
res.json({ success: true });
});
// Error handling middleware
app.use((err, req, res, next) => {
console.error('Server error:', err);
res.status(500).json({
success: false,
message: 'Internal server error',
error: err.message
});
});
// Start server
app.listen(PORT, () => {
console.log(`Auth server running on port ${PORT}`);
console.log('Environment:', process.env.NODE_ENV);
console.log('CORS origins:', corsOptions.origin);
console.log('JWT_SECRET length:', JWT_SECRET?.length);
console.log('DASHBOARD_PASSWORD length:', DASHBOARD_PASSWORD?.length);
});

File diff suppressed because it is too large

View File

@@ -0,0 +1,22 @@
{
"name": "auth-server",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": "",
"dependencies": {
"cookie-parser": "^1.4.7",
"cors": "^2.8.5",
"date-fns": "^4.1.0",
"date-fns-tz": "^3.2.0",
"dotenv": "^16.4.7",
"express": "^4.21.1",
"express-session": "^1.18.1",
"jsonwebtoken": "^9.0.2"
}
}

View File

@@ -0,0 +1 @@
/etc/nginx/sites-enabled/dashboard.conf

File diff suppressed because it is too large

View File

@@ -0,0 +1,21 @@
{
"name": "google-analytics-server",
"version": "1.0.0",
"description": "Google Analytics server for dashboard",
"main": "server.js",
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js"
},
"dependencies": {
"@google-analytics/data": "^4.0.0",
"cors": "^2.8.5",
"dotenv": "^16.3.1",
"express": "^4.18.2",
"redis": "^4.6.11",
"winston": "^3.11.0"
},
"devDependencies": {
"nodemon": "^3.0.2"
}
}

View File

@@ -0,0 +1,254 @@
const express = require('express');
const { BetaAnalyticsDataClient } = require('@google-analytics/data');
const router = express.Router();
const logger = require('../utils/logger');
// Initialize GA4 client
const analyticsClient = new BetaAnalyticsDataClient({
credentials: JSON.parse(process.env.GOOGLE_APPLICATION_CREDENTIALS_JSON)
});
const propertyId = process.env.GA_PROPERTY_ID;
// Cache durations
const CACHE_DURATIONS = {
REALTIME_BASIC: 60, // 1 minute
REALTIME_DETAILED: 300, // 5 minutes
BASIC_METRICS: 3600, // 1 hour
USER_BEHAVIOR: 3600 // 1 hour
};
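// Cache-aside pattern: each endpoint below checks Redis first and only hits GA4
// on a miss, storing the raw report response for the duration defined above.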
// Basic metrics endpoint
router.get('/metrics', async (req, res) => {
try {
const { startDate = '7daysAgo' } = req.query;
const cacheKey = `analytics:basic_metrics:${startDate}`;
// Check Redis cache
const cachedData = await req.redisClient.get(cacheKey);
if (cachedData) {
logger.info('Returning cached basic metrics data');
return res.json({ success: true, data: JSON.parse(cachedData) });
}
// Fetch from GA4
const [response] = await analyticsClient.runReport({
property: `properties/${propertyId}`,
dateRanges: [{ startDate, endDate: 'today' }],
dimensions: [{ name: 'date' }],
metrics: [
{ name: 'activeUsers' },
{ name: 'newUsers' },
{ name: 'averageSessionDuration' },
{ name: 'screenPageViews' },
{ name: 'bounceRate' },
{ name: 'conversions' }
],
returnPropertyQuota: true
});
// Cache the response
await req.redisClient.set(cacheKey, JSON.stringify(response), {
EX: CACHE_DURATIONS.BASIC_METRICS
});
res.json({ success: true, data: response });
} catch (error) {
logger.error('Error fetching basic metrics:', error);
res.status(500).json({ success: false, error: error.message });
}
});
// Realtime basic data endpoint
router.get('/realtime/basic', async (req, res) => {
try {
const cacheKey = 'analytics:realtime:basic';
// Check Redis cache
const cachedData = await req.redisClient.get(cacheKey);
if (cachedData) {
logger.info('Returning cached realtime basic data');
return res.json({ success: true, data: JSON.parse(cachedData) });
}
// Fetch active users
const [userResponse] = await analyticsClient.runRealtimeReport({
property: `properties/${propertyId}`,
metrics: [{ name: 'activeUsers' }],
returnPropertyQuota: true
});
// Fetch last 5 minutes
const [fiveMinResponse] = await analyticsClient.runRealtimeReport({
property: `properties/${propertyId}`,
metrics: [{ name: 'activeUsers' }],
minuteRanges: [{ startMinutesAgo: 5, endMinutesAgo: 0 }]
});
// Fetch time series data
const [timeSeriesResponse] = await analyticsClient.runRealtimeReport({
property: `properties/${propertyId}`,
dimensions: [{ name: 'minutesAgo' }],
metrics: [{ name: 'activeUsers' }]
});
const response = {
userResponse,
fiveMinResponse,
timeSeriesResponse,
quotaInfo: {
projectHourly: userResponse.propertyQuota.tokensPerProjectPerHour,
daily: userResponse.propertyQuota.tokensPerDay,
serverErrors: userResponse.propertyQuota.serverErrorsPerProjectPerHour,
thresholdedRequests: userResponse.propertyQuota.potentiallyThresholdedRequestsPerHour
}
};
// Cache the response
await req.redisClient.set(cacheKey, JSON.stringify(response), {
EX: CACHE_DURATIONS.REALTIME_BASIC
});
res.json({ success: true, data: response });
} catch (error) {
logger.error('Error fetching realtime basic data:', error);
res.status(500).json({ success: false, error: error.message });
}
});
// Realtime detailed data endpoint
router.get('/realtime/detailed', async (req, res) => {
try {
const cacheKey = 'analytics:realtime:detailed';
// Check Redis cache
const cachedData = await req.redisClient.get(cacheKey);
if (cachedData) {
logger.info('Returning cached realtime detailed data');
return res.json({ success: true, data: JSON.parse(cachedData) });
}
// Fetch current pages
const [pageResponse] = await analyticsClient.runRealtimeReport({
property: `properties/${propertyId}`,
dimensions: [{ name: 'unifiedScreenName' }],
metrics: [{ name: 'screenPageViews' }],
orderBy: [{ metric: { metricName: 'screenPageViews' }, desc: true }],
limit: 25
});
// Fetch events
const [eventResponse] = await analyticsClient.runRealtimeReport({
property: `properties/${propertyId}`,
dimensions: [{ name: 'eventName' }],
metrics: [{ name: 'eventCount' }],
orderBy: [{ metric: { metricName: 'eventCount' }, desc: true }],
limit: 25
});
// Fetch device categories
const [deviceResponse] = await analyticsClient.runRealtimeReport({
property: `properties/${propertyId}`,
dimensions: [{ name: 'deviceCategory' }],
metrics: [{ name: 'activeUsers' }],
orderBy: [{ metric: { metricName: 'activeUsers' }, desc: true }],
limit: 10,
returnPropertyQuota: true
});
const response = {
pageResponse,
eventResponse,
sourceResponse: deviceResponse
};
// Cache the response
await req.redisClient.set(cacheKey, JSON.stringify(response), {
EX: CACHE_DURATIONS.REALTIME_DETAILED
});
res.json({ success: true, data: response });
} catch (error) {
logger.error('Error fetching realtime detailed data:', error);
res.status(500).json({ success: false, error: error.message });
}
});
// User behavior endpoint
router.get('/user-behavior', async (req, res) => {
try {
const { timeRange = '30' } = req.query;
const cacheKey = `analytics:user_behavior:${timeRange}`;
// Check Redis cache
const cachedData = await req.redisClient.get(cacheKey);
if (cachedData) {
logger.info('Returning cached user behavior data');
return res.json({ success: true, data: JSON.parse(cachedData) });
}
// Fetch page data
const [pageResponse] = await analyticsClient.runReport({
property: `properties/${propertyId}`,
dateRanges: [{ startDate: `${timeRange}daysAgo`, endDate: 'today' }],
dimensions: [{ name: 'pagePath' }],
metrics: [
{ name: 'screenPageViews' },
{ name: 'averageSessionDuration' },
{ name: 'bounceRate' },
{ name: 'sessions' }
],
orderBy: [{
metric: { metricName: 'screenPageViews' },
desc: true
}],
limit: 25
});
// Fetch device data
const [deviceResponse] = await analyticsClient.runReport({
property: `properties/${propertyId}`,
dateRanges: [{ startDate: `${timeRange}daysAgo`, endDate: 'today' }],
dimensions: [{ name: 'deviceCategory' }],
metrics: [
{ name: 'screenPageViews' },
{ name: 'sessions' }
]
});
// Fetch source data
const [sourceResponse] = await analyticsClient.runReport({
property: `properties/${propertyId}`,
dateRanges: [{ startDate: `${timeRange}daysAgo`, endDate: 'today' }],
dimensions: [{ name: 'sessionSource' }],
metrics: [
{ name: 'sessions' },
{ name: 'conversions' }
],
orderBy: [{
metric: { metricName: 'sessions' },
desc: true
}],
limit: 25,
returnPropertyQuota: true
});
const response = {
pageResponse,
deviceResponse,
sourceResponse
};
// Cache the response
await req.redisClient.set(cacheKey, JSON.stringify(response), {
EX: CACHE_DURATIONS.USER_BEHAVIOR
});
res.json({ success: true, data: response });
} catch (error) {
logger.error('Error fetching user behavior data:', error);
res.status(500).json({ success: false, error: error.message });
}
});
module.exports = router;

View File

@@ -0,0 +1,91 @@
const express = require('express');
const router = express.Router();
const analyticsService = require('../services/analytics.service');
// Basic metrics endpoint
router.get('/metrics', async (req, res) => {
try {
const { startDate = '7daysAgo' } = req.query;
console.log(`Fetching metrics with startDate: ${startDate}`);
const data = await analyticsService.getBasicMetrics(startDate);
res.json({ success: true, data });
} catch (error) {
console.error('Metrics error:', {
startDate: req.query.startDate,
error: error.message,
stack: error.stack
});
res.status(500).json({
success: false,
error: 'Failed to fetch metrics',
details: error.message
});
}
});
// Realtime basic data endpoint
router.get('/realtime/basic', async (req, res) => {
try {
console.log('Fetching realtime basic data');
const data = await analyticsService.getRealTimeBasicData();
res.json({ success: true, data });
} catch (error) {
console.error('Realtime basic error:', {
error: error.message,
stack: error.stack
});
res.status(500).json({
success: false,
error: 'Failed to fetch realtime basic data',
details: error.message
});
}
});
// Realtime detailed data endpoint
router.get('/realtime/detailed', async (req, res) => {
try {
console.log('Fetching realtime detailed data');
const data = await analyticsService.getRealTimeDetailedData();
res.json({ success: true, data });
} catch (error) {
console.error('Realtime detailed error:', {
error: error.message,
stack: error.stack
});
res.status(500).json({
success: false,
error: 'Failed to fetch realtime detailed data',
details: error.message
});
}
});
// User behavior endpoint
router.get('/user-behavior', async (req, res) => {
try {
const { timeRange = '30' } = req.query;
console.log(`Fetching user behavior with timeRange: ${timeRange}`);
const data = await analyticsService.getUserBehavior(timeRange);
res.json({ success: true, data });
} catch (error) {
console.error('User behavior error:', {
timeRange: req.query.timeRange,
error: error.message,
stack: error.stack
});
res.status(500).json({
success: false,
error: 'Failed to fetch user behavior data',
details: error.message
});
}
});
module.exports = router;

View File

@@ -0,0 +1,65 @@
const express = require('express');
const cors = require('cors');
const { createClient } = require('redis');
const analyticsRoutes = require('./routes/analytics.routes');
const app = express();
const port = process.env.GOOGLE_ANALYTICS_PORT || 3007;
// Redis client setup
const redisClient = createClient({
url: process.env.REDIS_URL || 'redis://localhost:6379'
});
redisClient.on('error', (err) => console.error('Redis Client Error:', err));
redisClient.on('connect', () => console.log('Redis Client Connected'));
// Connect to Redis
(async () => {
try {
await redisClient.connect();
} catch (err) {
console.error('Redis connection error:', err);
}
})();
// Middleware
app.use(cors());
app.use(express.json());
// Make Redis client available in requests
app.use((req, res, next) => {
req.redisClient = redisClient;
next();
});
// Routes
app.use('/api/analytics', analyticsRoutes);
// Error handling middleware
app.use((err, req, res, next) => {
console.error('Server error:', err);
res.status(err.status || 500).json({
success: false,
message: err.message || 'Internal server error',
error: process.env.NODE_ENV === 'production' ? {} : err // only expose error details outside production
});
});
// Start server
app.listen(port, () => {
console.log(`Google Analytics server running on port ${port}`);
});
// Handle graceful shutdown
process.on('SIGTERM', async () => {
console.log('SIGTERM received. Shutting down gracefully...');
await redisClient.quit();
process.exit(0);
});
process.on('SIGINT', async () => {
console.log('SIGINT received. Shutting down gracefully...');
await redisClient.quit();
process.exit(0);
});

View File

@@ -0,0 +1,283 @@
const { BetaAnalyticsDataClient } = require('@google-analytics/data');
const { createClient } = require('redis');
class AnalyticsService {
constructor() {
// Initialize Redis client
this.redis = createClient({
url: process.env.REDIS_URL || 'redis://localhost:6379'
});
this.redis.on('error', err => console.error('Redis Client Error:', err));
this.redis.connect().catch(err => console.error('Redis connection error:', err));
try {
// Initialize GA4 client
const credentials = process.env.GOOGLE_APPLICATION_CREDENTIALS_JSON;
this.analyticsClient = new BetaAnalyticsDataClient({
credentials: typeof credentials === 'string' ? JSON.parse(credentials) : credentials
});
this.propertyId = process.env.GA_PROPERTY_ID;
} catch (error) {
console.error('Failed to initialize GA4 client:', error);
throw error;
}
}
// Cache durations
CACHE_DURATIONS = {
REALTIME_BASIC: 60, // 1 minute
REALTIME_DETAILED: 300, // 5 minutes
BASIC_METRICS: 3600, // 1 hour
USER_BEHAVIOR: 3600 // 1 hour
};
async getBasicMetrics(startDate = '7daysAgo') {
const cacheKey = `analytics:basic_metrics:${startDate}`;
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log('Analytics metrics found in Redis cache');
return JSON.parse(cachedData);
}
// Fetch from GA4
console.log('Fetching fresh metrics data from GA4');
const [response] = await this.analyticsClient.runReport({
property: `properties/${this.propertyId}`,
dateRanges: [{ startDate, endDate: 'today' }],
dimensions: [{ name: 'date' }],
metrics: [
{ name: 'activeUsers' },
{ name: 'newUsers' },
{ name: 'averageSessionDuration' },
{ name: 'screenPageViews' },
{ name: 'bounceRate' },
{ name: 'conversions' }
],
returnPropertyQuota: true
});
// Cache the response
await this.redis.set(cacheKey, JSON.stringify(response), {
EX: this.CACHE_DURATIONS.BASIC_METRICS
});
return response;
} catch (error) {
console.error('Error fetching analytics metrics:', {
error: error.message,
stack: error.stack
});
throw error;
}
}
async getRealTimeBasicData() {
const cacheKey = 'analytics:realtime:basic';
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log('Realtime basic data found in Redis cache');
return JSON.parse(cachedData);
}
console.log('Fetching fresh realtime data from GA4');
// Fetch active users
const [userResponse] = await this.analyticsClient.runRealtimeReport({
property: `properties/${this.propertyId}`,
metrics: [{ name: 'activeUsers' }],
returnPropertyQuota: true
});
// Fetch last 5 minutes
const [fiveMinResponse] = await this.analyticsClient.runRealtimeReport({
property: `properties/${this.propertyId}`,
metrics: [{ name: 'activeUsers' }],
minuteRanges: [{ startMinutesAgo: 5, endMinutesAgo: 0 }]
});
// Fetch time series data
const [timeSeriesResponse] = await this.analyticsClient.runRealtimeReport({
property: `properties/${this.propertyId}`,
dimensions: [{ name: 'minutesAgo' }],
metrics: [{ name: 'activeUsers' }]
});
const response = {
userResponse,
fiveMinResponse,
timeSeriesResponse,
quotaInfo: {
projectHourly: userResponse.propertyQuota.tokensPerProjectPerHour,
daily: userResponse.propertyQuota.tokensPerDay,
serverErrors: userResponse.propertyQuota.serverErrorsPerProjectPerHour,
thresholdedRequests: userResponse.propertyQuota.potentiallyThresholdedRequestsPerHour
}
};
// Cache the response
await this.redis.set(cacheKey, JSON.stringify(response), {
EX: this.CACHE_DURATIONS.REALTIME_BASIC
});
return response;
} catch (error) {
console.error('Error fetching realtime basic data:', {
error: error.message,
stack: error.stack
});
throw error;
}
}
async getRealTimeDetailedData() {
const cacheKey = 'analytics:realtime:detailed';
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log('Realtime detailed data found in Redis cache');
return JSON.parse(cachedData);
}
console.log('Fetching fresh realtime detailed data from GA4');
// Fetch current pages
const [pageResponse] = await this.analyticsClient.runRealtimeReport({
property: `properties/${this.propertyId}`,
dimensions: [{ name: 'unifiedScreenName' }],
metrics: [{ name: 'screenPageViews' }],
orderBy: [{ metric: { metricName: 'screenPageViews' }, desc: true }],
limit: 25
});
// Fetch events
const [eventResponse] = await this.analyticsClient.runRealtimeReport({
property: `properties/${this.propertyId}`,
dimensions: [{ name: 'eventName' }],
metrics: [{ name: 'eventCount' }],
orderBy: [{ metric: { metricName: 'eventCount' }, desc: true }],
limit: 25
});
// Fetch device categories
const [deviceResponse] = await this.analyticsClient.runRealtimeReport({
property: `properties/${this.propertyId}`,
dimensions: [{ name: 'deviceCategory' }],
metrics: [{ name: 'activeUsers' }],
orderBy: [{ metric: { metricName: 'activeUsers' }, desc: true }],
limit: 10,
returnPropertyQuota: true
});
const response = {
pageResponse,
eventResponse,
sourceResponse: deviceResponse
};
// Cache the response
await this.redis.set(cacheKey, JSON.stringify(response), {
EX: this.CACHE_DURATIONS.REALTIME_DETAILED
});
return response;
} catch (error) {
console.error('Error fetching realtime detailed data:', {
error: error.message,
stack: error.stack
});
throw error;
}
}
async getUserBehavior(timeRange = '30') {
const cacheKey = `analytics:user_behavior:${timeRange}`;
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log('User behavior data found in Redis cache');
return JSON.parse(cachedData);
}
console.log('Fetching fresh user behavior data from GA4');
// Fetch page data
const [pageResponse] = await this.analyticsClient.runReport({
property: `properties/${this.propertyId}`,
dateRanges: [{ startDate: `${timeRange}daysAgo`, endDate: 'today' }],
dimensions: [{ name: 'pagePath' }],
metrics: [
{ name: 'screenPageViews' },
{ name: 'averageSessionDuration' },
{ name: 'bounceRate' },
{ name: 'sessions' }
],
orderBy: [{
metric: { metricName: 'screenPageViews' },
desc: true
}],
limit: 25
});
// Fetch device data
const [deviceResponse] = await this.analyticsClient.runReport({
property: `properties/${this.propertyId}`,
dateRanges: [{ startDate: `${timeRange}daysAgo`, endDate: 'today' }],
dimensions: [{ name: 'deviceCategory' }],
metrics: [
{ name: 'screenPageViews' },
{ name: 'sessions' }
]
});
// Fetch source data
const [sourceResponse] = await this.analyticsClient.runReport({
property: `properties/${this.propertyId}`,
dateRanges: [{ startDate: `${timeRange}daysAgo`, endDate: 'today' }],
dimensions: [{ name: 'sessionSource' }],
metrics: [
{ name: 'sessions' },
{ name: 'conversions' }
],
orderBy: [{
metric: { metricName: 'sessions' },
desc: true
}],
limit: 25,
returnPropertyQuota: true
});
const response = {
pageResponse,
deviceResponse,
sourceResponse
};
// Cache the response
await this.redis.set(cacheKey, JSON.stringify(response), {
EX: this.CACHE_DURATIONS.USER_BEHAVIOR
});
return response;
} catch (error) {
console.error('Error fetching user behavior data:', {
error: error.message,
stack: error.stack
});
throw error;
}
}
}
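// Exported as a shared singleton so every route reuses one GA4 client and one Redis connection.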
module.exports = new AnalyticsService();

View File

@@ -0,0 +1,35 @@
const winston = require('winston');
const path = require('path');
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
transports: [
new winston.transports.File({
filename: path.join(__dirname, '../logs/pm2/error.log'),
level: 'error',
maxsize: 10485760, // 10MB
maxFiles: 5
}),
new winston.transports.File({
filename: path.join(__dirname, '../logs/pm2/combined.log'),
maxsize: 10485760, // 10MB
maxFiles: 5
})
]
});
// Add console transport in development
if (process.env.NODE_ENV !== 'production') {
logger.add(new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}));
}
module.exports = logger;

File diff suppressed because it is too large

View File

@@ -0,0 +1,19 @@
{
"name": "gorgias-server",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": "",
"dependencies": {
"axios": "^1.7.9",
"cors": "^2.8.5",
"dotenv": "^16.4.7",
"express": "^4.21.2",
"redis": "^4.7.0"
}
}

View File

@@ -0,0 +1,119 @@
const express = require('express');
const router = express.Router();
const gorgiasService = require('../services/gorgias.service');
// Get statistics
router.post('/stats/:name', async (req, res) => {
try {
const { name } = req.params;
const filters = req.body;
console.log(`Fetching ${name} statistics with filters:`, filters);
if (!name) {
return res.status(400).json({
error: 'Missing statistic name',
details: 'The name parameter is required'
});
}
const data = await gorgiasService.getStatistics(name, filters);
if (!data) {
return res.status(404).json({
error: 'No data found',
details: `No statistics found for ${name}`
});
}
res.json({ data });
} catch (error) {
console.error('Statistics error:', {
name: req.params.name,
filters: req.body,
error: error.message,
stack: error.stack,
response: error.response?.data
});
// Handle specific error cases
if (error.response?.status === 401) {
return res.status(401).json({
error: 'Authentication failed',
details: 'Invalid Gorgias API credentials'
});
}
if (error.response?.status === 404) {
return res.status(404).json({
error: 'Not found',
details: `Statistics type '${req.params.name}' not found`
});
}
if (error.response?.status === 400) {
return res.status(400).json({
error: 'Invalid request',
details: error.response?.data?.message || 'The request was invalid',
data: error.response?.data
});
}
res.status(500).json({
error: 'Failed to fetch statistics',
details: error.response?.data?.message || error.message,
data: error.response?.data
});
}
});
// Get tickets
router.get('/tickets', async (req, res) => {
try {
const data = await gorgiasService.getTickets(req.query);
res.json(data);
} catch (error) {
console.error('Tickets error:', {
params: req.query,
error: error.message,
response: error.response?.data
});
if (error.response?.status === 401) {
return res.status(401).json({
error: 'Authentication failed',
details: 'Invalid Gorgias API credentials'
});
}
if (error.response?.status === 400) {
return res.status(400).json({
error: 'Invalid request',
details: error.response?.data?.message || 'The request was invalid',
data: error.response?.data
});
}
res.status(500).json({
error: 'Failed to fetch tickets',
details: error.response?.data?.message || error.message,
data: error.response?.data
});
}
});
// Get customer satisfaction
router.get('/satisfaction', async (req, res) => {
try {
const data = await gorgiasService.getCustomerSatisfaction(req.query);
res.json(data);
} catch (error) {
console.error('Satisfaction error:', error);
res.status(500).json({
error: 'Failed to fetch customer satisfaction',
details: error.response?.data || error.message
});
}
});
module.exports = router;

View File

@@ -0,0 +1,31 @@
const express = require('express');
const cors = require('cors');
const path = require('path');
require('dotenv').config({
path: path.resolve(__dirname, '.env')
});
const app = express();
const port = process.env.PORT || 3006;
app.use(cors());
app.use(express.json());
// Import routes
const gorgiasRoutes = require('./routes/gorgias.routes');
// Use routes
app.use('/api/gorgias', gorgiasRoutes);
// Error handling middleware
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).json({ error: 'Something went wrong!' });
});
// Start server
app.listen(port, () => {
console.log(`Gorgias API server running on port ${port}`);
});
module.exports = app;

View File

@@ -0,0 +1,119 @@
const axios = require('axios');
const { createClient } = require('redis');
class GorgiasService {
constructor() {
this.redis = createClient({
url: process.env.REDIS_URL
});
this.redis.on('error', err => console.error('Redis Client Error:', err));
this.redis.connect().catch(err => console.error('Redis connection error:', err));
// Create base64 encoded auth string
const auth = Buffer.from(`${process.env.GORGIAS_API_USERNAME}:${process.env.GORGIAS_API_KEY}`).toString('base64');
this.apiClient = axios.create({
baseURL: `https://${process.env.GORGIAS_DOMAIN}.gorgias.com/api`,
headers: {
'Authorization': `Basic ${auth}`,
'Content-Type': 'application/json'
}
});
}
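// Responses are cached in Redis for 5 minutes, keyed by the statistic name (or
// endpoint) plus the serialized filter/param object.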
async getStatistics(name, filters = {}) {
const cacheKey = `gorgias:stats:${name}:${JSON.stringify(filters)}`;
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log(`Statistics ${name} found in Redis cache`);
return JSON.parse(cachedData);
}
console.log(`Fetching ${name} statistics with filters:`, filters);
// Expand start/end to full UTC days (00:00:00.000 through 23:59:59.999) when explicit datetimes are not provided
if (!filters.start_datetime || !filters.end_datetime) {
const start = new Date(filters.start_datetime || filters.start_date);
start.setUTCHours(0, 0, 0, 0);
const end = new Date(filters.end_datetime || filters.end_date);
end.setUTCHours(23, 59, 59, 999);
filters = {
...filters,
start_datetime: start.toISOString(),
end_datetime: end.toISOString()
};
}
// Fetch from API
const response = await this.apiClient.post(`/stats/${name}`, filters);
const data = response.data;
// Save to Redis with 5 minute expiry
await this.redis.set(cacheKey, JSON.stringify(data), {
EX: 300 // 5 minutes
});
return data;
} catch (error) {
console.error(`Error in getStatistics for ${name}:`, {
error: error.message,
filters,
response: error.response?.data
});
throw error;
}
}
async getTickets(params = {}) {
const cacheKey = `gorgias:tickets:${JSON.stringify(params)}`;
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log('Tickets found in Redis cache');
return JSON.parse(cachedData);
}
// Expand start_date/end_date to full UTC days before querying
const formattedParams = { ...params };
if (params.start_date) {
const start = new Date(params.start_date);
start.setUTCHours(0, 0, 0, 0);
formattedParams.start_datetime = start.toISOString();
delete formattedParams.start_date;
}
if (params.end_date) {
const end = new Date(params.end_date);
end.setUTCHours(23, 59, 59, 999);
formattedParams.end_datetime = end.toISOString();
delete formattedParams.end_date;
}
// Fetch from API
const response = await this.apiClient.get('/tickets', { params: formattedParams });
const data = response.data;
// Save to Redis with 5 minute expiry
await this.redis.set(cacheKey, JSON.stringify(data), {
EX: 300 // 5 minutes
});
return data;
} catch (error) {
console.error('Error fetching tickets:', {
error: error.message,
params,
response: error.response?.data
});
throw error;
}
}
}
module.exports = new GorgiasService();

File diff suppressed because it is too large

View File

@@ -0,0 +1,26 @@
{
"name": "klaviyo-server",
"version": "1.0.0",
"description": "Klaviyo API integration server",
"main": "server.js",
"type": "module",
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js"
},
"dependencies": {
"cors": "^2.8.5",
"dotenv": "^16.4.7",
"esm": "^3.2.25",
"express": "^4.18.2",
"express-rate-limit": "^7.5.0",
"ioredis": "^5.4.1",
"luxon": "^3.5.0",
"node-fetch": "^3.3.2",
"pg": "^8.18.0",
"recharts": "^2.15.0"
},
"devDependencies": {
"nodemon": "^3.0.2"
}
}

View File

@@ -0,0 +1,71 @@
import express from 'express';
import { CampaignsService } from '../services/campaigns.service.js';
import { TimeManager } from '../utils/time.utils.js';
export function createCampaignsRouter(apiKey, apiRevision) {
const router = express.Router();
const timeManager = new TimeManager();
const campaignsService = new CampaignsService(apiKey, apiRevision);
// Get campaigns with optional filtering
router.get('/', async (req, res) => {
try {
const params = {
pageSize: parseInt(req.query.pageSize) || 50,
sort: req.query.sort || '-send_time',
status: req.query.status,
startDate: req.query.startDate,
endDate: req.query.endDate,
pageCursor: req.query.pageCursor
};
console.log('[Campaigns Route] Fetching campaigns with params:', params);
const data = await campaignsService.getCampaigns(params);
console.log('[Campaigns Route] Success:', {
count: data.data?.length || 0
});
res.json(data);
} catch (error) {
console.error('[Campaigns Route] Error:', error);
res.status(500).json({
status: 'error',
message: error.message,
details: error.response?.data || null
});
}
});
// Get campaigns by time range
router.get('/:timeRange', async (req, res) => {
try {
const { timeRange } = req.params;
const { status } = req.query;
let result;
if (timeRange === 'custom') {
const { startDate, endDate } = req.query;
if (!startDate || !endDate) {
return res.status(400).json({ error: 'Custom range requires startDate and endDate' });
}
result = await campaignsService.getCampaigns({
startDate,
endDate,
status
});
} else {
result = await campaignsService.getCampaignsByTimeRange(
timeRange,
{ status }
);
}
res.json(result);
} catch (error) {
console.error("[Campaigns Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
return router;
}

View File

@@ -0,0 +1,480 @@
import express from 'express';
import { EventsService } from '../services/events.service.js';
import { TimeManager } from '../utils/time.utils.js';
import { RedisService } from '../services/redis.service.js';
// Klaviyo metric IDs used by this router (defined locally, mirroring the events service)
const METRIC_IDS = {
PLACED_ORDER: 'Y8cqcF',
SHIPPED_ORDER: 'VExpdL',
ACCOUNT_CREATED: 'TeeypV',
CANCELED_ORDER: 'YjVMNg',
NEW_BLOG_POST: 'YcxeDr',
PAYMENT_REFUNDED: 'R7XUYh'
};
export function createEventsRouter(apiKey, apiRevision) {
const router = express.Router();
const timeManager = new TimeManager();
const eventsService = new EventsService(apiKey, apiRevision);
const redisService = new RedisService();
// Get events with optional filtering
router.get('/', async (req, res) => {
try {
const params = {
pageSize: parseInt(req.query.pageSize) || 50,
sort: req.query.sort || '-datetime',
metricId: req.query.metricId,
startDate: req.query.startDate,
endDate: req.query.endDate,
pageCursor: req.query.pageCursor,
fields: {}
};
// Parse fields parameter if provided
if (req.query.fields) {
try {
params.fields = JSON.parse(req.query.fields);
} catch (e) {
console.warn('[Events Route] Invalid fields parameter:', e);
}
}
console.log('[Events Route] Fetching events with params:', params);
const data = await eventsService.getEvents(params);
console.log('[Events Route] Success:', {
count: data.data?.length || 0,
included: data.included?.length || 0
});
res.json(data);
} catch (error) {
console.error('[Events Route] Error:', error);
res.status(500).json({
status: 'error',
message: error.message,
details: error.response?.data || null
});
}
});
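// Most routes below resolve their time window the same way: an explicit
// startDate/endDate pair wins, otherwise the named timeRange is expanded via
// timeManager.getCustomRange / getDateRange.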
// Get events by time range
router.get('/by-time/:timeRange', async (req, res) => {
try {
const { timeRange } = req.params;
const { metricId, startDate, endDate } = req.query;
let result;
if (timeRange === 'custom') {
if (!startDate || !endDate) {
return res.status(400).json({ error: 'Custom range requires startDate and endDate' });
}
const range = timeManager.getCustomRange(startDate, endDate);
if (!range) {
return res.status(400).json({ error: 'Invalid date range' });
}
result = await eventsService.getEvents({
metricId,
startDate: range.start.toISO(),
endDate: range.end.toISO()
});
} else {
result = await eventsService.getEventsByTimeRange(
timeRange,
{ metricId }
);
}
res.json(result);
} catch (error) {
console.error("[Events Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
// Get comprehensive statistics for a time period
router.get('/stats', async (req, res) => {
try {
const { timeRange, startDate, endDate } = req.query;
console.log('[Events Route] Stats request:', {
timeRange,
startDate,
endDate
});
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else if (timeRange) {
range = timeManager.getDateRange(timeRange);
} else {
return res.status(400).json({ error: 'Must provide either timeRange or startDate and endDate' });
}
if (!range) {
return res.status(400).json({ error: 'Invalid time range' });
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO()
};
console.log('[Events Route] Calculating period stats with params:', params);
const stats = await eventsService.calculatePeriodStats(params);
console.log('[Events Route] Stats response:', {
timeRange: {
start: range.start.toISO(),
end: range.end.toISO()
},
shippedCount: stats?.shipping?.shippedCount,
totalOrders: stats?.orderCount
});
res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
stats
});
} catch (error) {
console.error("[Events Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
// Add new route for smart revenue projection
router.get('/projection', async (req, res) => {
try {
const { timeRange, startDate, endDate } = req.query;
console.log('[Events Route] Projection request:', {
timeRange,
startDate,
endDate
});
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else if (timeRange) {
range = timeManager.getDateRange(timeRange);
} else {
return res.status(400).json({ error: 'Must provide either timeRange or startDate and endDate' });
}
if (!range) {
return res.status(400).json({ error: 'Invalid time range' });
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO()
};
// Try to get from cache first with a short TTL
const cacheKey = redisService._getCacheKey('projection', params);
const cachedData = await redisService.get(cacheKey);
if (cachedData) {
console.log('[Events Route] Cache hit for projection');
return res.json(cachedData);
}
console.log('[Events Route] Calculating smart projection with params:', params);
const projection = await eventsService.calculateSmartProjection(params);
// Cache the results with a short TTL (5 minutes)
await redisService.set(cacheKey, projection, 300);
res.json(projection);
} catch (error) {
console.error("[Events Route] Error calculating projection:", error);
res.status(500).json({ error: error.message });
}
});
// Add new route for detailed stats
router.get('/stats/details', async (req, res) => {
try {
const { timeRange, startDate, endDate, metric, daily = false } = req.query;
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else if (timeRange) {
range = timeManager.getDateRange(timeRange);
} else {
return res.status(400).json({ error: 'Must provide either timeRange or startDate and endDate' });
}
if (!range) {
return res.status(400).json({ error: 'Invalid time range' });
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO(),
metric,
daily: daily === 'true' || daily === true
};
// Try to get from cache first
const cacheKey = redisService._getCacheKey('stats:details', params);
const cachedData = await redisService.get(cacheKey);
if (cachedData) {
console.log('[Events Route] Cache hit for detailed stats');
return res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
stats: cachedData
});
}
const stats = await eventsService.calculateDetailedStats(params);
// Cache the results
const ttl = redisService._getTTL(timeRange);
await redisService.set(cacheKey, stats, ttl);
res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
stats
});
} catch (error) {
console.error("[Events Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
// Get product statistics for a time period
router.get('/products', async (req, res) => {
try {
const { timeRange, startDate, endDate } = req.query;
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else if (timeRange) {
range = timeManager.getDateRange(timeRange);
} else {
return res.status(400).json({ error: 'Must provide either timeRange or startDate and endDate' });
}
if (!range) {
return res.status(400).json({ error: 'Invalid time range' });
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO()
};
// Try to get from cache first
const cachedData = await redisService.getEventData('products', params);
if (cachedData) {
console.log('[Events Route] Cache hit for products');
return res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
stats: {
products: cachedData
}
});
}
const stats = await eventsService.calculatePeriodStats(params);
res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
stats
});
} catch (error) {
console.error("[Events Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
// Get event feed (multiple event types sorted by time)
router.get('/feed', async (req, res) => {
try {
const { timeRange, startDate, endDate, metricIds } = req.query;
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else if (timeRange) {
range = timeManager.getDateRange(timeRange);
} else {
return res.status(400).json({ error: 'Must provide either timeRange or startDate and endDate' });
}
if (!range) {
return res.status(400).json({ error: 'Invalid time range' });
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO(),
metricIds: metricIds ? JSON.parse(metricIds) : null
};
const result = await eventsService.getMultiMetricEvents(params);
res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
...result
});
} catch (error) {
console.error("[Events Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
// Get aggregated events data
router.get('/aggregate', async (req, res) => {
try {
const { timeRange, startDate, endDate, interval = 'day', metricId, property } = req.query;
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else if (timeRange) {
range = timeManager.getDateRange(timeRange);
} else {
return res.status(400).json({ error: 'Must provide either timeRange or startDate and endDate' });
}
if (!range) {
return res.status(400).json({ error: 'Invalid time range' });
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO(),
metricId,
interval,
property
};
const result = await eventsService.getEvents(params);
const groupedData = timeManager.groupEventsByInterval(result.data, interval, property);
res.json({
timeRange: {
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
},
data: groupedData
});
} catch (error) {
console.error("[Events Route] Error:", error);
res.status(500).json({ error: error.message });
}
});
// Get date range for a given time period
router.get("/dateRange", async (req, res) => {
try {
const { timeRange, startDate, endDate } = req.query;
let range;
if (startDate && endDate) {
range = timeManager.getCustomRange(startDate, endDate);
} else {
range = timeManager.getDateRange(timeRange || 'today');
}
if (!range) {
return res.status(400).json({
error: "Invalid time range parameters"
});
}
res.json({
start: range.start.toISO(),
end: range.end.toISO(),
displayStart: timeManager.formatForDisplay(range.start),
displayEnd: timeManager.formatForDisplay(range.end)
});
} catch (error) {
console.error('Error getting date range:', error);
res.status(500).json({
error: "Failed to get date range"
});
}
});
// Clear cache for a specific time range
router.post("/clearCache", async (req, res) => {
try {
const { timeRange, startDate, endDate } = req.body;
await redisService.clearCache({ timeRange, startDate, endDate });
res.json({ message: "Cache cleared successfully" });
} catch (error) {
console.error('Error clearing cache:', error);
res.status(500).json({ error: "Failed to clear cache" });
}
});
// Add new batch metrics endpoint
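// Example request (hypothetical metric names; metrics is a JSON array, URL-encoded in practice):
//   GET /api/klaviyo/events/batch?timeRange=last7days&metrics=["total_revenue","order_count"]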
router.get('/batch', async (req, res) => {
try {
const { timeRange, startDate, endDate, metrics } = req.query;
// Parse metrics array from query
const metricsList = metrics ? JSON.parse(metrics) : [];
const params = timeRange === 'custom'
? { startDate, endDate, metrics: metricsList }
: { timeRange, metrics: metricsList };
const results = await eventsService.getBatchMetrics(params);
res.json(results);
} catch (error) {
console.error('[Events Route] Error in batch request:', error);
res.status(500).json({ error: error.message });
}
});
return router;
}

@@ -0,0 +1,17 @@
import express from 'express';
import { createEventsRouter } from './events.routes.js';
import { createMetricsRoutes } from './metrics.routes.js';
import { createCampaignsRouter } from './campaigns.routes.js';
import { createReportingRouter } from './reporting.routes.js';
export function createApiRouter(apiKey, apiRevision) {
const router = express.Router();
// Mount routers
router.use('/events', createEventsRouter(apiKey, apiRevision));
router.use('/metrics', createMetricsRoutes(apiKey, apiRevision));
router.use('/campaigns', createCampaignsRouter(apiKey, apiRevision));
router.use('/reporting', createReportingRouter(apiKey, apiRevision));
return router;
}

@@ -0,0 +1,29 @@
import express from 'express';
import { MetricsService } from '../services/metrics.service.js';
const router = express.Router();
export function createMetricsRoutes(apiKey, apiRevision) {
const metricsService = new MetricsService(apiKey, apiRevision);
// Get all metrics
router.get('/', async (req, res) => {
try {
console.log('[Metrics Route] Fetching metrics');
const data = await metricsService.getMetrics();
console.log('[Metrics Route] Success:', {
count: data.data?.length || 0
});
res.json(data);
} catch (error) {
console.error('[Metrics Route] Error:', error);
res.status(500).json({
status: 'error',
message: error.message,
details: error.response?.data || null
});
}
});
return router;
}

@@ -0,0 +1,29 @@
import express from 'express';
import { ReportingService } from '../services/reporting.service.js';
import { TimeManager } from '../utils/time.utils.js';
export function createReportingRouter(apiKey, apiRevision) {
const router = express.Router();
const reportingService = new ReportingService(apiKey, apiRevision);
const timeManager = new TimeManager();
// Get campaign reports by time range
router.get('/campaigns/:timeRange', async (req, res) => {
try {
const { timeRange } = req.params;
const { channel } = req.query;
const reports = await reportingService.getCampaignReports({
timeRange,
channel
});
res.json(reports);
} catch (error) {
console.error('[ReportingRoutes] Error fetching campaign reports:', error);
res.status(500).json({ error: error.message });
}
});
return router;
}

@@ -0,0 +1,30 @@
-- Stores individual product links found in Klaviyo campaign emails
CREATE TABLE IF NOT EXISTS klaviyo_campaign_products (
id SERIAL PRIMARY KEY,
campaign_id TEXT NOT NULL,
campaign_name TEXT,
sent_at TIMESTAMPTZ,
pid BIGINT NOT NULL,
product_url TEXT,
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(campaign_id, pid)
);
CREATE INDEX IF NOT EXISTS idx_kcp_campaign_id ON klaviyo_campaign_products(campaign_id);
CREATE INDEX IF NOT EXISTS idx_kcp_pid ON klaviyo_campaign_products(pid);
CREATE INDEX IF NOT EXISTS idx_kcp_sent_at ON klaviyo_campaign_products(sent_at);
-- Stores non-product shop links (categories, filters, etc.) found in campaigns
CREATE TABLE IF NOT EXISTS klaviyo_campaign_links (
id SERIAL PRIMARY KEY,
campaign_id TEXT NOT NULL,
campaign_name TEXT,
sent_at TIMESTAMPTZ,
link_url TEXT NOT NULL,
link_type TEXT, -- 'category', 'brand', 'filter', 'clearance', 'deals', 'other'
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(campaign_id, link_url)
);
CREATE INDEX IF NOT EXISTS idx_kcl_campaign_id ON klaviyo_campaign_links(campaign_id);
CREATE INDEX IF NOT EXISTS idx_kcl_sent_at ON klaviyo_campaign_links(sent_at);
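-- Illustrative query (not part of the schema; 123456 is a placeholder pid):
--   SELECT campaign_id, campaign_name, sent_at, product_url
--   FROM klaviyo_campaign_products
--   WHERE pid = 123456
--   ORDER BY sent_at DESC;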

@@ -0,0 +1,279 @@
/**
* Extract products featured in Klaviyo campaign emails and store in DB.
*
* - Fetches recent sent campaigns from Klaviyo API
* - Gets template HTML for each campaign message
* - Parses out product links (/shop/{id}) and other shop links
* - Inserts into klaviyo_campaign_products and klaviyo_campaign_links tables
*
* Usage: node scripts/poc-campaign-products.js [limit] [offset]
* limit: number of sent campaigns to process (default: 10)
* offset: number of sent campaigns to skip before processing (default: 0)
*
* Requires DB_* env vars (from inventory-server .env) and KLAVIYO_API_KEY.
*/
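// Example invocation (hypothetical arguments): `node scripts/poc-campaign-products.js 25 50`
// processes 25 sent campaigns after skipping the 50 most recent ones.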
import fetch from 'node-fetch';
import pg from 'pg';
import dotenv from 'dotenv';
import path from 'path';
import fs from 'fs';
import { fileURLToPath } from 'url';
const __dirname = path.dirname(fileURLToPath(import.meta.url));
// Load klaviyo .env for API key
dotenv.config({ path: path.resolve(__dirname, '../.env') });
// Also load the main inventory-server .env for DB credentials
const mainEnvPath = '/var/www/html/inventory/.env';
if (fs.existsSync(mainEnvPath)) {
dotenv.config({ path: mainEnvPath });
}
const API_KEY = process.env.KLAVIYO_API_KEY;
const REVISION = process.env.KLAVIYO_API_REVISION || '2026-01-15';
const BASE_URL = 'https://a.klaviyo.com/api';
if (!API_KEY) {
console.error('KLAVIYO_API_KEY not set in .env');
process.exit(1);
}
// ── Klaviyo API helpers ──────────────────────────────────────────────
const headers = {
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': `Klaviyo-API-Key ${API_KEY}`,
'revision': REVISION,
};
async function klaviyoGet(endpoint, params = {}) {
const url = new URL(`${BASE_URL}${endpoint}`);
for (const [k, v] of Object.entries(params)) {
url.searchParams.append(k, v);
}
return klaviyoFetch(url.toString());
}
async function klaviyoFetch(url) {
const res = await fetch(url, { headers });
if (!res.ok) {
const body = await res.text();
throw new Error(`Klaviyo ${res.status} on ${url}: ${body}`);
}
return res.json();
}
async function getRecentCampaigns(limit, offset = 0) {
const campaigns = [];
const messageMap = {};
let skipped = 0;
let data = await klaviyoGet('/campaigns', {
'filter': 'equals(messages.channel,"email")',
'sort': '-scheduled_at',
'include': 'campaign-messages',
});
while (true) {
for (const c of (data.data || [])) {
if (c.attributes?.status === 'Sent') {
if (skipped < offset) {
skipped++;
continue;
}
campaigns.push(c);
if (campaigns.length >= limit) break;
}
}
for (const inc of (data.included || [])) {
if (inc.type === 'campaign-message') {
messageMap[inc.id] = inc;
}
}
const nextUrl = data.links?.next;
if (campaigns.length >= limit || !nextUrl) break;
const progress = skipped < offset
? `Skipped ${skipped}/${offset}...`
: `Fetched ${campaigns.length}/${limit} sent campaigns, loading next page...`;
console.log(` ${progress}`);
await new Promise(r => setTimeout(r, 200));
data = await klaviyoFetch(nextUrl);
}
return { campaigns: campaigns.slice(0, limit), messageMap };
}
async function getTemplateHtml(messageId) {
const data = await klaviyoGet(`/campaign-messages/${messageId}/template`, {
'fields[template]': 'html,name',
});
return {
templateId: data.data?.id,
templateName: data.data?.attributes?.name,
html: data.data?.attributes?.html || '',
};
}
// ── HTML parsing ─────────────────────────────────────────────────────
function parseProductsFromHtml(html) {
const seen = new Set();
const products = [];
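// Illustrative match: href="https://www.acherryontop.com/shop/123456?utm_source=klaviyo"
// yields match[1] = "https://www.acherryontop.com/shop/123456" and match[2] = "123456".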
const linkRegex = /href="([^"]*acherryontop\.com\/shop\/(\d+))[^"]*"/gi;
let match;
while ((match = linkRegex.exec(html)) !== null) {
const productId = match[2];
if (!seen.has(productId)) {
seen.add(productId);
products.push({
siteProductId: productId,
url: match[1],
});
}
}
const categoryLinks = [];
const catRegex = /href="([^"]*acherryontop\.com\/shop\/[^"]+)"/gi;
while ((match = catRegex.exec(html)) !== null) {
const url = match[1];
if (/\/shop\/\d+$/.test(url)) continue;
if (!categoryLinks.includes(url)) categoryLinks.push(url);
}
return { products, categoryLinks };
}
function classifyLink(url) {
if (/\/shop\/(new|pre-order|backinstock)/.test(url)) return 'filter';
if (/\/shop\/company\//.test(url)) return 'brand';
if (/\/shop\/clearance/.test(url)) return 'clearance';
if (/\/shop\/daily_deals/.test(url)) return 'deals';
if (/\/shop\/category\//.test(url)) return 'category';
return 'other';
}
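// Illustrative classifications (slugs are placeholders):
//   /shop/company/some-brand -> 'brand'
//   /shop/category/stickers  -> 'category'
//   /shop/clearance          -> 'clearance'
//   /shop/new                -> 'filter'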
// ── Database ─────────────────────────────────────────────────────────
function createPool() {
return new pg.Pool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
port: process.env.DB_PORT || 5432,
ssl: process.env.DB_SSL === 'true' ? { rejectUnauthorized: false } : false,
});
}
async function insertProducts(pool, campaignId, campaignName, sentAt, products) {
if (products.length === 0) return 0;
let inserted = 0;
for (const p of products) {
try {
await pool.query(
`INSERT INTO klaviyo_campaign_products
(campaign_id, campaign_name, sent_at, pid, product_url)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (campaign_id, pid) DO NOTHING`,
[campaignId, campaignName, sentAt, parseInt(p.siteProductId), p.url]
);
inserted++;
} catch (err) {
console.error(` Error inserting product ${p.siteProductId}: ${err.message}`);
}
}
return inserted;
}
async function insertLinks(pool, campaignId, campaignName, sentAt, links) {
if (links.length === 0) return 0;
let inserted = 0;
for (const url of links) {
try {
await pool.query(
`INSERT INTO klaviyo_campaign_links
(campaign_id, campaign_name, sent_at, link_url, link_type)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (campaign_id, link_url) DO NOTHING`,
[campaignId, campaignName, sentAt, url, classifyLink(url)]
);
inserted++;
} catch (err) {
console.error(` Error inserting link: ${err.message}`);
}
}
return inserted;
}
// ── Main ─────────────────────────────────────────────────────────────
async function main() {
const limit = parseInt(process.argv[2]) || 10;
const offset = parseInt(process.argv[3]) || 0;
const pool = createPool();
try {
// Fetch campaigns
console.log(`Fetching up to ${limit} recent campaigns (offset: ${offset})...\n`);
const { campaigns, messageMap } = await getRecentCampaigns(limit, offset);
console.log(`Found ${campaigns.length} sent campaigns.\n`);
let totalProducts = 0;
let totalLinks = 0;
for (const campaign of campaigns) {
const name = campaign.attributes?.name || 'Unnamed';
const sentAt = campaign.attributes?.send_time;
console.log(`━━━ ${name} (${sentAt?.slice(0, 10) || 'no date'}) ━━━`);
const msgIds = (campaign.relationships?.['campaign-messages']?.data || [])
.map(r => r.id);
if (msgIds.length === 0) {
console.log(' No messages.\n');
continue;
}
for (const msgId of msgIds) {
const msg = messageMap[msgId];
const subject = msg?.attributes?.definition?.content?.subject;
if (subject) console.log(` Subject: ${subject}`);
try {
const template = await getTemplateHtml(msgId);
const { products, categoryLinks } = parseProductsFromHtml(template.html);
const pInserted = await insertProducts(pool, campaign.id, name, sentAt, products);
const lInserted = await insertLinks(pool, campaign.id, name, sentAt, categoryLinks);
console.log(` ${products.length} products (${pInserted} new), ${categoryLinks.length} links (${lInserted} new)`);
totalProducts += pInserted;
totalLinks += lInserted;
await new Promise(r => setTimeout(r, 200));
} catch (err) {
console.log(` Error: ${err.message}`);
}
}
console.log('');
}
console.log(`Done. Inserted ${totalProducts} product rows, ${totalLinks} link rows.`);
} finally {
await pool.end();
}
}
main().catch(err => {
console.error('Fatal error:', err);
process.exit(1);
});

@@ -0,0 +1,78 @@
import express from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import rateLimit from 'express-rate-limit';
import { createApiRouter } from './routes/index.js';
import path from 'path';
import { fileURLToPath } from 'url';
// Get directory name in ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Load environment variables
const envPath = path.resolve(__dirname, '.env');
console.log('[Server] Loading .env file from:', envPath);
dotenv.config({ path: envPath });
// Debug environment variables (without exposing sensitive data)
console.log('[Server] Environment variables loaded:', {
REDIS_HOST: process.env.REDIS_HOST || '(not set)',
REDIS_PORT: process.env.REDIS_PORT || '(not set)',
REDIS_USERNAME: process.env.REDIS_USERNAME || '(not set)',
REDIS_PASSWORD: process.env.REDIS_PASSWORD ? '(set)' : '(not set)',
NODE_ENV: process.env.NODE_ENV || '(not set)',
});
const app = express();
const port = process.env.KLAVIYO_PORT || 3004;
// Rate limiting for reporting endpoints
const reportingLimiter = rateLimit({
windowMs: 10 * 60 * 1000, // 10 minutes
max: 10, // limit each IP to 10 requests per windowMs
message: 'Too many requests to reporting endpoint, please try again later',
keyGenerator: (req) => {
// Use a combination of IP and endpoint for more granular control
return `${req.ip}-reporting`;
},
skip: (req) => {
// Only apply to campaign-values-reports endpoint
return !req.path.includes('campaign-values-reports');
}
});
// Middleware
app.use(cors());
app.use(express.json());
// Debug middleware to log all requests
app.use((req, res, next) => {
console.log(`[${new Date().toISOString()}] ${req.method} ${req.url}`);
next();
});
// Apply rate limiting to reporting endpoints
app.use('/api/klaviyo/reporting', reportingLimiter);
// Create and mount API routes
const apiRouter = createApiRouter(
process.env.KLAVIYO_API_KEY,
process.env.KLAVIYO_API_REVISION || '2024-02-15'
);
app.use('/api/klaviyo', apiRouter);
// Error handling middleware
app.use((err, req, res, next) => {
console.error('Unhandled error:', err);
res.status(500).json({
status: 'error',
message: 'Internal server error',
details: process.env.NODE_ENV === 'development' ? err.message : undefined
});
});
// Start server
app.listen(port, '0.0.0.0', () => {
console.log(`Klaviyo server listening at http://0.0.0.0:${port}`);
});

@@ -0,0 +1,206 @@
import fetch from 'node-fetch';
import { TimeManager } from '../utils/time.utils.js';
import { RedisService } from './redis.service.js';
export class CampaignsService {
constructor(apiKey, apiRevision) {
this.apiKey = apiKey;
this.apiRevision = apiRevision;
this.baseUrl = 'https://a.klaviyo.com/api';
this.timeManager = new TimeManager();
this.redisService = new RedisService();
}
async getCampaigns(params = {}) {
try {
// Add request debouncing
const requestKey = JSON.stringify(params);
if (this._pendingRequests && this._pendingRequests[requestKey]) {
return this._pendingRequests[requestKey];
}
// Try to get from cache first
const cacheKey = this.redisService._getCacheKey('campaigns', params);
let cachedData = null;
try {
cachedData = await this.redisService.get(`${cacheKey}:raw`);
if (cachedData) {
return cachedData;
}
} catch (cacheError) {
console.warn('[CampaignsService] Cache error:', cacheError);
}
this._pendingRequests = this._pendingRequests || {};
this._pendingRequests[requestKey] = (async () => {
let allCampaigns = [];
let nextCursor = params.pageCursor;
let pageCount = 0;
const filter = params.filter || this._buildFilter(params);
do {
const queryParams = new URLSearchParams();
if (filter) {
queryParams.append('filter', filter);
}
queryParams.append('sort', params.sort || '-send_time');
if (nextCursor) {
queryParams.append('page[cursor]', nextCursor);
}
const url = `${this.baseUrl}/campaigns?${queryParams.toString()}`;
try {
const response = await fetch(url, {
method: 'GET',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': `Klaviyo-API-Key ${this.apiKey}`,
'revision': this.apiRevision
}
});
if (!response.ok) {
const errorData = await response.json();
console.error('[CampaignsService] API Error:', errorData);
throw new Error(`Klaviyo API error: ${response.status} ${response.statusText}`);
}
const responseData = await response.json();
allCampaigns = allCampaigns.concat(responseData.data || []);
pageCount++;
nextCursor = responseData.links?.next ?
new URL(responseData.links.next).searchParams.get('page[cursor]') : null;
if (nextCursor) {
await new Promise(resolve => setTimeout(resolve, 50));
}
} catch (fetchError) {
console.error('[CampaignsService] Fetch error:', fetchError);
throw fetchError;
}
} while (nextCursor);
const transformedCampaigns = this._transformCampaigns(allCampaigns);
const result = {
data: transformedCampaigns,
meta: {
total_count: transformedCampaigns.length,
page_count: pageCount
}
};
try {
const ttl = this.redisService._getTTL(params.timeRange);
await this.redisService.set(`${cacheKey}:raw`, result, ttl);
} catch (cacheError) {
console.warn('[CampaignsService] Cache set error:', cacheError);
}
delete this._pendingRequests[requestKey];
return result;
})();
return await this._pendingRequests[requestKey];
} catch (error) {
console.error('[CampaignsService] Error fetching campaigns:', error);
throw error;
}
}
_buildFilter(params) {
const filters = [];
if (params.startDate && params.endDate) {
const startUtc = this.timeManager.formatForAPI(params.startDate);
const endUtc = this.timeManager.formatForAPI(params.endDate);
filters.push(`greater-or-equal(send_time,${startUtc})`);
filters.push(`less-than(send_time,${endUtc})`);
}
if (params.status) {
filters.push(`equals(status,"${params.status}")`);
}
if (params.customFilters) {
filters.push(...params.customFilters);
}
return filters.length > 0 ? (filters.length > 1 ? `and(${filters.join(',')})` : filters[0]) : null;
}
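// Illustrative output for a date-bounded request (placeholder dates):
//   and(greater-or-equal(send_time,2026-01-01T05:00:00.000Z),less-than(send_time,2026-02-01T05:00:00.000Z))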
async getCampaignsByTimeRange(timeRange, options = {}) {
const range = this.timeManager.getDateRange(timeRange);
if (!range) {
throw new Error('Invalid time range specified');
}
const params = {
timeRange,
startDate: range.start.toISO(),
endDate: range.end.toISO(),
...options
};
// Try to get from cache first
const cacheKey = this.redisService._getCacheKey('campaigns', params);
let cachedData = null;
try {
cachedData = await this.redisService.get(`${cacheKey}:raw`);
if (cachedData) {
return cachedData;
}
} catch (cacheError) {
console.warn('[CampaignsService] Cache error:', cacheError);
}
return this.getCampaigns(params);
}
_transformCampaigns(campaigns) {
if (!Array.isArray(campaigns)) {
console.warn('[CampaignsService] Campaigns is not an array:', campaigns);
return [];
}
return campaigns.map(campaign => {
try {
const stats = campaign.attributes?.campaign_message?.stats || {};
return {
id: campaign.id,
name: campaign.attributes?.name || "Unnamed Campaign",
subject: campaign.attributes?.campaign_message?.subject || "",
send_time: campaign.attributes?.send_time,
stats: {
delivery_rate: stats.delivery_rate || 0,
delivered: stats.delivered || 0,
recipients: stats.recipients || 0,
open_rate: stats.open_rate || 0,
opens_unique: stats.opens_unique || 0,
opens: stats.opens || 0,
clicks_unique: stats.clicks_unique || 0,
click_rate: stats.click_rate || 0,
click_to_open_rate: stats.click_to_open_rate || 0,
conversion_value: stats.conversion_value || 0,
conversion_uniques: stats.conversion_uniques || 0
}
};
} catch (error) {
console.error('[CampaignsService] Error transforming campaign:', error, campaign);
return {
id: campaign.id || 'unknown',
name: 'Error Processing Campaign',
stats: {}
};
}
});
}
}

File diff suppressed because it is too large
@@ -0,0 +1,38 @@
import fetch from 'node-fetch';
export class MetricsService {
constructor(apiKey, apiRevision) {
this.apiKey = apiKey;
this.apiRevision = apiRevision;
this.baseUrl = 'https://a.klaviyo.com/api';
}
async getMetrics() {
try {
const response = await fetch(`${this.baseUrl}/metrics/`, {
headers: {
'Authorization': `Klaviyo-API-Key ${this.apiKey}`,
'revision': this.apiRevision,
'Content-Type': 'application/json',
'Accept': 'application/json'
}
});
if (!response.ok) {
const errorData = await response.json();
console.error('[MetricsService] API Error:', errorData);
throw new Error(`Klaviyo API error: ${response.status} ${response.statusText}`);
}
const data = await response.json();
// Sort the results by name before returning
if (data.data) {
data.data.sort((a, b) => a.attributes.name.localeCompare(b.attributes.name));
}
return data;
} catch (error) {
console.error('[MetricsService] Error fetching metrics:', error);
throw error;
}
}
}

@@ -0,0 +1,262 @@
import Redis from 'ioredis';
import { TimeManager } from '../utils/time.utils.js';
import dotenv from 'dotenv';
import path from 'path';
import { fileURLToPath } from 'url';
// Get directory name in ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Load environment variables again (redundant but safe)
const envPath = path.resolve(__dirname, '../.env');
console.log('[RedisService] Loading .env file from:', envPath);
dotenv.config({ path: envPath });
export class RedisService {
constructor() {
this.timeManager = new TimeManager();
this.DEFAULT_TTL = 5 * 60; // 5 minutes default TTL
this.isConnected = false;
this._initializeRedis();
}
_initializeRedis() {
try {
// Debug: Print all environment variables we're looking for
console.log('[RedisService] Environment variables state:', {
REDIS_HOST: process.env.REDIS_HOST ? '(set)' : '(not set)',
REDIS_PORT: process.env.REDIS_PORT ? '(set)' : '(not set)',
REDIS_USERNAME: process.env.REDIS_USERNAME ? '(set)' : '(not set)',
REDIS_PASSWORD: process.env.REDIS_PASSWORD ? '(set)' : '(not set)',
});
// Log Redis configuration (without password)
const host = process.env.REDIS_HOST || 'localhost';
const port = parseInt(process.env.REDIS_PORT) || 6379;
const username = process.env.REDIS_USERNAME || 'default';
const password = process.env.REDIS_PASSWORD;
console.log('[RedisService] Initializing Redis with config:', {
host,
port,
username,
hasPassword: !!password
});
const config = {
host,
port,
username,
retryStrategy: (times) => {
const delay = Math.min(times * 50, 2000);
return delay;
},
maxRetriesPerRequest: 3,
enableReadyCheck: true,
connectTimeout: 10000,
showFriendlyErrorStack: true
};
// Only add password if it exists
if (password) {
console.log('[RedisService] Adding password to config');
config.password = password;
} else {
console.warn('[RedisService] No Redis password found in environment variables!');
}
this.client = new Redis(config);
// Handle connection events
this.client.on('connect', () => {
console.log('[RedisService] Connected to Redis');
this.isConnected = true;
});
this.client.on('ready', () => {
console.log('[RedisService] Redis is ready');
this.isConnected = true;
});
this.client.on('error', (err) => {
console.error('[RedisService] Redis error:', err);
this.isConnected = false;
// Log more details about the error
if (err.code === 'WRONGPASS') {
console.error('[RedisService] Authentication failed. Please check your Redis password.');
}
});
this.client.on('close', () => {
console.log('[RedisService] Redis connection closed');
this.isConnected = false;
});
this.client.on('reconnecting', (params) => {
console.log('[RedisService] Reconnecting to Redis:', params);
});
} catch (error) {
console.error('[RedisService] Error initializing Redis:', error);
this.isConnected = false;
}
}
async get(key) {
if (!this.isConnected) {
return null;
}
try {
const data = await this.client.get(key);
return data ? JSON.parse(data) : null;
} catch (error) {
console.error('[RedisService] Error getting data:', error);
return null;
}
}
async set(key, data, ttl = this.DEFAULT_TTL) {
if (!this.isConnected) {
return;
}
try {
await this.client.setex(key, ttl, JSON.stringify(data));
} catch (error) {
console.error('[RedisService] Error setting data:', error);
}
}
// Helper to generate cache keys
_getCacheKey(type, params = {}) {
const {
timeRange,
startDate,
endDate,
metricId,
metric,
daily,
cacheKey,
isPreviousPeriod,
customFilters
} = params;
let key = `klaviyo:${type}`;
// Handle "stats:details" for daily or metric-based keys
if (type === 'stats:details') {
// Add metric to key
key += `:${metric || 'all'}`;
// Add daily flag if present
if (daily) {
key += ':daily';
}
// Add custom filters hash if present
if (customFilters?.length) {
const filterHash = customFilters.join('').replace(/[^a-zA-Z0-9]/g, '');
key += `:${filterHash}`;
}
}
// If a specific cache key is provided, use it (highest priority)
if (cacheKey) {
key += `:${cacheKey}`;
}
// Otherwise, build a default cache key
else if (timeRange) {
key += `:${timeRange}`;
if (metricId) {
key += `:${metricId}`;
}
if (isPreviousPeriod) {
key += ':prev';
}
} else if (startDate && endDate) {
// For custom date ranges, include both dates in the key
key += `:custom:${startDate}:${endDate}`;
if (metricId) {
key += `:${metricId}`;
}
if (isPreviousPeriod) {
key += ':prev';
}
}
// Add order type to key if present
if (['pre_orders', 'local_pickup', 'on_hold'].includes(metric)) {
key += `:${metric}`;
}
return key;
}
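// Example keys this helper produces (metric id and metric name are placeholders):
//   klaviyo:events:last7days:XyZ123
//   klaviyo:stats:details:revenue:daily:thisMonth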
// Get TTL based on time range
_getTTL(timeRange) {
const TTL_MAP = {
'today': 2 * 60, // 2 minutes
'yesterday': 30 * 60, // 30 minutes
'thisWeek': 5 * 60, // 5 minutes
'lastWeek': 60 * 60, // 1 hour
'thisMonth': 10 * 60, // 10 minutes
'lastMonth': 2 * 60 * 60, // 2 hours
'last7days': 5 * 60, // 5 minutes
'last30days': 15 * 60, // 15 minutes
'custom': 15 * 60 // 15 minutes
};
return TTL_MAP[timeRange] || this.DEFAULT_TTL;
}
async getEventData(type, params) {
if (!this.isConnected) {
return null;
}
try {
const baseKey = this._getCacheKey('events', params);
const data = await this.get(`${baseKey}:${type}`);
return data;
} catch (error) {
console.error('[RedisService] Error getting event data:', error);
return null;
}
}
async cacheEventData(type, params, data) {
if (!this.isConnected) {
return;
}
try {
const ttl = this._getTTL(params.timeRange);
const baseKey = this._getCacheKey('events', params);
// Cache raw event data
await this.set(`${baseKey}:${type}`, data, ttl);
} catch (error) {
console.error('[RedisService] Error caching event data:', error);
}
}
async clearCache(params = {}) {
if (!this.isConnected) {
return;
}
try {
const pattern = this._getCacheKey('events', params) + '*';
const keys = await this.client.keys(pattern);
if (keys.length > 0) {
await this.client.del(...keys);
}
} catch (error) {
console.error('[RedisService] Error clearing cache:', error);
}
}
}

@@ -0,0 +1,254 @@
import fetch from 'node-fetch';
import { TimeManager } from '../utils/time.utils.js';
import { RedisService } from './redis.service.js';
const METRIC_IDS = {
PLACED_ORDER: 'Y8cqcF'
};
export class ReportingService {
constructor(apiKey, apiRevision) {
this.apiKey = apiKey;
this.apiRevision = apiRevision;
this.baseUrl = 'https://a.klaviyo.com/api';
this.timeManager = new TimeManager();
this.redisService = new RedisService();
this._pendingReportRequest = null;
}
async getCampaignReports(params = {}) {
try {
// Check if there's a pending request
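// Note: a single in-flight request is shared regardless of params, so concurrent
// calls with different params will receive the first caller's result.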
if (this._pendingReportRequest) {
console.log('[ReportingService] Using pending campaign report request');
return this._pendingReportRequest;
}
// Try to get from cache first
const cacheKey = this.redisService._getCacheKey('campaign_reports', params);
let cachedData = null;
try {
cachedData = await this.redisService.get(`${cacheKey}:raw`);
if (cachedData) {
console.log('[ReportingService] Using cached campaign report data');
return cachedData;
}
} catch (cacheError) {
console.warn('[ReportingService] Cache error:', cacheError);
}
// Create new request promise
this._pendingReportRequest = (async () => {
console.log('[ReportingService] Fetching fresh campaign report data');
const range = this.timeManager.getDateRange(params.timeRange || 'last30days');
// Determine which channels to fetch based on params
const channelsToFetch = params.channel === 'all' || !params.channel
? ['email', 'sms']
: [params.channel];
const allResults = [];
// Fetch each channel
for (const channel of channelsToFetch) {
const payload = {
data: {
type: "campaign-values-report",
attributes: {
timeframe: {
start: range.start.toISO(),
end: range.end.toISO()
},
statistics: [
"delivery_rate",
"delivered",
"recipients",
"open_rate",
"opens_unique",
"opens",
"click_rate",
"clicks_unique",
"click_to_open_rate",
"conversion_value",
"conversion_uniques"
],
conversion_metric_id: METRIC_IDS.PLACED_ORDER,
filter: `equals(send_channel,"${channel}")`
}
}
};
const response = await fetch(`${this.baseUrl}/campaign-values-reports`, {
method: 'POST',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': `Klaviyo-API-Key ${this.apiKey}`,
'revision': this.apiRevision
},
body: JSON.stringify(payload)
});
if (!response.ok) {
const errorData = await response.json();
console.error('[ReportingService] API Error:', errorData);
throw new Error(`Klaviyo API error: ${response.status} ${response.statusText}`);
}
const reportData = await response.json();
console.log(`[ReportingService] Raw ${channel} report data:`, JSON.stringify(reportData, null, 2));
// Get campaign IDs from the report
const campaignIds = reportData.data?.attributes?.results?.map(result =>
result.groupings?.campaign_id
).filter(Boolean) || [];
if (campaignIds.length > 0) {
// Get campaign details including send time and subject lines
const campaignDetails = await this.getCampaignDetails(campaignIds);
// Process results for this channel
const channelResults = reportData.data.attributes.results.map(result => {
const campaignId = result.groupings.campaign_id;
const details = campaignDetails.find(detail => detail.id === campaignId);
return {
id: campaignId,
// Details may be missing if the per-campaign lookup failed and was filtered out
name: details?.attributes?.name || 'Unknown Campaign',
subject: details?.attributes?.subject,
send_time: details?.attributes?.send_time,
channel: channel, // Use the channel we're currently processing
stats: {
delivery_rate: result.statistics.delivery_rate,
delivered: result.statistics.delivered,
recipients: result.statistics.recipients,
open_rate: result.statistics.open_rate,
opens_unique: result.statistics.opens_unique,
opens: result.statistics.opens,
click_rate: result.statistics.click_rate,
clicks_unique: result.statistics.clicks_unique,
click_to_open_rate: result.statistics.click_to_open_rate,
conversion_value: result.statistics.conversion_value,
conversion_uniques: result.statistics.conversion_uniques
}
};
});
allResults.push(...channelResults);
}
}
// Sort all results by date
const enrichedData = {
data: allResults.sort((a, b) => {
const dateA = new Date(a.send_time);
const dateB = new Date(b.send_time);
return dateB - dateA; // Sort by date descending
})
};
console.log('[ReportingService] Enriched data:', JSON.stringify(enrichedData, null, 2));
// Cache the enriched response for 10 minutes
try {
await this.redisService.set(`${cacheKey}:raw`, enrichedData, 600);
} catch (cacheError) {
console.warn('[ReportingService] Cache set error:', cacheError);
}
return enrichedData;
})();
const result = await this._pendingReportRequest;
this._pendingReportRequest = null;
return result;
} catch (error) {
console.error('[ReportingService] Error fetching campaign reports:', error);
this._pendingReportRequest = null;
throw error;
}
}
async getCampaignDetails(campaignIds = []) {
if (!Array.isArray(campaignIds) || campaignIds.length === 0) {
return [];
}
const fetchWithTimeout = async (campaignId, retries = 3) => {
for (let i = 0; i < retries; i++) {
try {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 10000); // 10 second timeout
const response = await fetch(
`${this.baseUrl}/campaigns/${campaignId}?include=campaign-messages`,
{
headers: {
'Accept': 'application/json',
'Authorization': `Klaviyo-API-Key ${this.apiKey}`,
'revision': this.apiRevision
},
signal: controller.signal
}
);
clearTimeout(timeoutId);
if (!response.ok) {
throw new Error(`Failed to fetch campaign ${campaignId}: ${response.status}`);
}
const data = await response.json();
if (!data.data) {
throw new Error(`Invalid response for campaign ${campaignId}`);
}
const message = data.included?.find(item => item.type === 'campaign-message');
console.log('[ReportingService] Campaign details for ID:', campaignId, {
send_channel: data.data.attributes.send_channel,
raw_attributes: data.data.attributes
});
return {
id: data.data.id,
type: data.data.type,
attributes: {
...data.data.attributes,
name: data.data.attributes.name,
send_time: data.data.attributes.send_time,
subject: message?.attributes?.content?.subject,
send_channel: data.data.attributes.send_channel || 'email'
}
};
} catch (error) {
if (i === retries - 1) throw error;
await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1))); // Exponential backoff
}
}
};
// Process in smaller chunks to avoid overwhelming the API
const chunkSize = 10;
const campaignDetails = [];
for (let i = 0; i < campaignIds.length; i += chunkSize) {
const chunk = campaignIds.slice(i, i + chunkSize);
const results = await Promise.all(
chunk.map(id => fetchWithTimeout(id).catch(error => {
console.error(`Failed to fetch campaign ${id}:`, error);
return null;
}))
);
campaignDetails.push(...results.filter(Boolean));
if (i + chunkSize < campaignIds.length) {
await new Promise(resolve => setTimeout(resolve, 1000)); // 1 second delay between chunks
}
}
return campaignDetails;
}
}

@@ -0,0 +1,448 @@
import { DateTime, Duration } from 'luxon';
export class TimeManager {
constructor(dayStartHour = 1) {
this.timezone = 'America/New_York';
this.dayStartHour = dayStartHour; // Hour (0-23) when the business day starts
this.weekStartDay = 7; // 7 = Sunday in Luxon
}
/**
* Get the start of the current business day
* If current time is before dayStartHour, return previous day at dayStartHour
*/
getDayStart(dt = this.getNow()) {
if (!dt.isValid) {
console.error("[TimeManager] Invalid datetime provided to getDayStart");
return this.getNow();
}
const dayStart = dt.set({ hour: this.dayStartHour, minute: 0, second: 0, millisecond: 0 });
return dt.hour < this.dayStartHour ? dayStart.minus({ days: 1 }) : dayStart;
}
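// Example with the default dayStartHour of 1: a timestamp at 00:30 ET belongs to the
// business day that started at 01:00 ET the previous calendar day; 01:30 ET starts a new day.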
/**
* Get the end of the current business day
* End is defined as dayStartHour - 1 minute on the next day
*/
getDayEnd(dt = this.getNow()) {
if (!dt.isValid) {
console.error("[TimeManager] Invalid datetime provided to getDayEnd");
return this.getNow();
}
const nextDay = this.getDayStart(dt).plus({ days: 1 });
return nextDay.minus({ minutes: 1 });
}
/**
* Get the start of the week containing the given date
* Aligns with custom day start time and starts on Sunday
*/
getWeekStart(dt = this.getNow()) {
if (!dt.isValid) {
console.error("[TimeManager] Invalid datetime provided to getWeekStart");
return this.getNow();
}
// Set to start of week (Sunday) and adjust hour
const weekStart = dt.set({ weekday: this.weekStartDay }).startOf('day');
// If the week start time would be after the given time, go back a week
if (weekStart > dt) {
return weekStart.minus({ weeks: 1 }).set({ hour: this.dayStartHour });
}
return weekStart.set({ hour: this.dayStartHour });
}
/**
* Convert any date input to a Luxon DateTime in Eastern time
*/
toDateTime(date) {
if (!date) return null;
if (date instanceof DateTime) {
return date.setZone(this.timezone);
}
// If it's an ISO string or Date object, parse it
const dt = DateTime.fromISO(date instanceof Date ? date.toISOString() : date);
if (!dt.isValid) {
console.error("[TimeManager] Invalid date input:", date);
return null;
}
return dt.setZone(this.timezone);
}
/**
* Format a date for API requests (UTC ISO string)
*/
formatForAPI(date) {
if (!date) return null;
// Parse the input date
const dt = this.toDateTime(date);
if (!dt || !dt.isValid) {
console.error("[TimeManager] Invalid date for API:", date);
return null;
}
// Convert to UTC for API request
const utc = dt.toUTC();
console.log("[TimeManager] API date conversion:", {
input: date,
eastern: dt.toISO(),
utc: utc.toISO(),
offset: dt.offset
});
return utc.toISO();
}
/**
* Format a date for display (in Eastern time)
*/
formatForDisplay(date) {
const dt = this.toDateTime(date);
if (!dt || !dt.isValid) return '';
return dt.toFormat('LLL d, yyyy h:mm a');
}
/**
* Validate if a date range is valid
*/
isValidDateRange(start, end) {
const startDt = this.toDateTime(start);
const endDt = this.toDateTime(end);
return startDt && endDt && endDt > startDt;
}
/**
* Get the current time in Eastern timezone
*/
getNow() {
return DateTime.now().setZone(this.timezone);
}
/**
* Get a date range for the last N hours
*/
getLastNHours(hours) {
const now = this.getNow();
return {
start: now.minus({ hours }),
end: now
};
}
/**
* Get a date range for the last N days
* Aligns with custom day start time
*/
getLastNDays(days) {
const now = this.getNow();
const dayStart = this.getDayStart(now);
return {
start: dayStart.minus({ days }),
end: this.getDayEnd(now)
};
}
/**
* Get a date range for a specific time period
* All ranges align with custom day start time
*/
getDateRange(period) {
const now = this.getNow();
// Normalize period to handle both 'last' and 'previous' prefixes
const normalizedPeriod = period.startsWith('previous') ? period.replace('previous', 'last') : period;
switch (normalizedPeriod) {
case 'custom': {
// Custom ranges are handled separately via getCustomRange
console.warn('[TimeManager] Custom ranges should use getCustomRange method');
return null;
}
case 'today': {
const dayStart = this.getDayStart(now);
return {
start: dayStart,
end: this.getDayEnd(now)
};
}
case 'yesterday': {
const yesterday = now.minus({ days: 1 });
return {
start: this.getDayStart(yesterday),
end: this.getDayEnd(yesterday)
};
}
case 'last7days': {
// For last 7 days, we want to include today and the previous 6 days
const dayStart = this.getDayStart(now);
const weekStart = dayStart.minus({ days: 6 });
return {
start: weekStart,
end: this.getDayEnd(now)
};
}
case 'last30days': {
// Include today and previous 29 days
const dayStart = this.getDayStart(now);
const monthStart = dayStart.minus({ days: 29 });
return {
start: monthStart,
end: this.getDayEnd(now)
};
}
case 'last90days': {
// Include today and previous 89 days
const dayStart = this.getDayStart(now);
const start = dayStart.minus({ days: 89 });
return {
start,
end: this.getDayEnd(now)
};
}
case 'thisWeek': {
// Get the start of the week (Sunday) with custom hour
const weekStart = this.getWeekStart(now);
return {
start: weekStart,
end: this.getDayEnd(now)
};
}
case 'lastWeek': {
const lastWeek = now.minus({ weeks: 1 });
const weekStart = this.getWeekStart(lastWeek);
const weekEnd = weekStart.plus({ days: 6 }); // 6 days after start = Saturday
return {
start: weekStart,
end: this.getDayEnd(weekEnd)
};
}
case 'thisMonth': {
const dayStart = this.getDayStart(now);
const monthStart = dayStart.startOf('month').set({ hour: this.dayStartHour });
return {
start: monthStart,
end: this.getDayEnd(now)
};
}
case 'lastMonth': {
const lastMonth = now.minus({ months: 1 });
const monthStart = lastMonth.startOf('month').set({ hour: this.dayStartHour });
const monthEnd = monthStart.plus({ months: 1 }).minus({ days: 1 });
return {
start: monthStart,
end: this.getDayEnd(monthEnd)
};
}
default:
console.warn(`[TimeManager] Unknown period: ${period}`);
return null;
}
}
/**
* Format a duration in milliseconds to a human-readable string
*/
formatDuration(ms) {
return DateTime.fromMillis(ms).toFormat("hh'h' mm'm' ss's'");
}
/**
* Get relative time string (e.g., "2 hours ago")
*/
getRelativeTime(date) {
const dt = this.toDateTime(date);
if (!dt) return '';
return dt.toRelative();
}
/**
* Get a custom date range using exact dates and times provided
* @param {string} startDate - ISO string or Date for range start
* @param {string} endDate - ISO string or Date for range end
* @returns {Object} Object with start and end DateTime objects
*/
getCustomRange(startDate, endDate) {
if (!startDate || !endDate) {
console.error("[TimeManager] Custom range requires both start and end dates");
return null;
}
const start = this.toDateTime(startDate);
const end = this.toDateTime(endDate);
if (!start || !end || !start.isValid || !end.isValid) {
console.error("[TimeManager] Invalid dates provided for custom range");
return null;
}
// Validate the range
if (end < start) {
console.error("[TimeManager] End date must be after start date");
return null;
}
return {
start,
end
};
}
/**
* Get the previous period's date range based on the current period
* @param {string} period - The current period
* @param {DateTime} now - The current datetime (optional)
* @returns {Object} Object with start and end DateTime objects
*/
getPreviousPeriod(period, now = this.getNow()) {
const normalizedPeriod = period.startsWith('previous') ? period.replace('previous', 'last') : period;
switch (normalizedPeriod) {
case 'today': {
const yesterday = now.minus({ days: 1 });
return {
start: this.getDayStart(yesterday),
end: this.getDayEnd(yesterday)
};
}
case 'yesterday': {
const twoDaysAgo = now.minus({ days: 2 });
return {
start: this.getDayStart(twoDaysAgo),
end: this.getDayEnd(twoDaysAgo)
};
}
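// For the rolling windows below (last7/30/90 days), the previous period ends 1 ms
// before the current window starts and counts back the same number of days from that point.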
case 'last7days': {
const dayStart = this.getDayStart(now);
const currentStart = dayStart.minus({ days: 6 });
const prevEnd = currentStart.minus({ milliseconds: 1 });
const prevStart = prevEnd.minus({ days: 6 });
return {
start: prevStart,
end: prevEnd
};
}
case 'last30days': {
const dayStart = this.getDayStart(now);
const currentStart = dayStart.minus({ days: 29 });
const prevEnd = currentStart.minus({ milliseconds: 1 });
const prevStart = prevEnd.minus({ days: 29 });
return {
start: prevStart,
end: prevEnd
};
}
case 'last90days': {
const dayStart = this.getDayStart(now);
const currentStart = dayStart.minus({ days: 89 });
const prevEnd = currentStart.minus({ milliseconds: 1 });
const prevStart = prevEnd.minus({ days: 89 });
return {
start: prevStart,
end: prevEnd
};
}
case 'thisWeek': {
const weekStart = this.getWeekStart(now);
const prevEnd = weekStart.minus({ milliseconds: 1 });
const prevStart = this.getWeekStart(prevEnd);
return {
start: prevStart,
end: prevEnd
};
}
case 'lastWeek': {
const lastWeekStart = this.getWeekStart(now.minus({ weeks: 1 }));
const prevEnd = lastWeekStart.minus({ milliseconds: 1 });
const prevStart = this.getWeekStart(prevEnd);
return {
start: prevStart,
end: prevEnd
};
}
case 'thisMonth': {
const monthStart = now.startOf('month').set({ hour: this.dayStartHour });
const prevEnd = monthStart.minus({ milliseconds: 1 });
const prevStart = prevEnd.startOf('month').set({ hour: this.dayStartHour });
return {
start: prevStart,
end: prevEnd
};
}
case 'lastMonth': {
const lastMonthStart = now.minus({ months: 1 }).startOf('month').set({ hour: this.dayStartHour });
const prevEnd = lastMonthStart.minus({ milliseconds: 1 });
const prevStart = prevEnd.startOf('month').set({ hour: this.dayStartHour });
return {
start: prevStart,
end: prevEnd
};
}
default:
console.warn(`[TimeManager] No previous period defined for: ${period}`);
return null;
}
}
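/**
* Group Klaviyo events into hour/day/week/month buckets.
* Each bucket carries a count and, when `property` is given, a summed value:
* '$value' sums the event's value attribute, any other name is read from the
* event's properties.
*/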
groupEventsByInterval(events, interval = 'day', property = null) {
if (!events?.length) return [];
const groupedData = new Map();
for (const event of events) {
const datetime = DateTime.fromISO(event.attributes.datetime);
let groupKey;
switch (interval) {
case 'hour':
groupKey = datetime.startOf('hour').toISO();
break;
case 'day':
groupKey = datetime.startOf('day').toISO();
break;
case 'week':
groupKey = datetime.startOf('week').toISO();
break;
case 'month':
groupKey = datetime.startOf('month').toISO();
break;
default:
groupKey = datetime.startOf('day').toISO();
}
const existingGroup = groupedData.get(groupKey) || {
datetime: groupKey,
count: 0,
value: 0
};
existingGroup.count++;
if (property) {
// Extract property value from event
const props = event.attributes?.event_properties || event.attributes?.properties || {};
let value = 0;
if (property === '$value') {
// Special case for $value - use event value
value = Number(event.attributes?.value || 0);
} else {
// Otherwise get from properties
value = Number(props[property] || 0);
}
existingGroup.value = (existingGroup.value || 0) + value;
}
groupedData.set(groupKey, existingGroup);
}
// Convert to array and sort by datetime
return Array.from(groupedData.values())
.sort((a, b) => DateTime.fromISO(a.datetime) - DateTime.fromISO(b.datetime));
}
}

@@ -0,0 +1,967 @@
{
"name": "meta-server",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "meta-server",
"version": "1.0.0",
"license": "ISC",
"dependencies": {
"axios": "^1.7.9",
"cors": "^2.8.5",
"dotenv": "^16.4.7",
"express": "^4.21.2"
}
},
"node_modules/accepts": {
"version": "1.3.8",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz",
"integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==",
"license": "MIT",
"dependencies": {
"mime-types": "~2.1.34",
"negotiator": "0.6.3"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/array-flatten": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==",
"license": "MIT"
},
"node_modules/asynckit": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
"integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==",
"license": "MIT"
},
"node_modules/axios": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.12.2.tgz",
"integrity": "sha512-vMJzPewAlRyOgxV2dU0Cuz2O8zzzx9VYtbJOaBgXFeLc4IV/Eg50n4LowmehOOR61S8ZMpc2K5Sa7g6A4jfkUw==",
"license": "MIT",
"dependencies": {
"follow-redirects": "^1.15.6",
"form-data": "^4.0.4",
"proxy-from-env": "^1.1.0"
}
},
"node_modules/body-parser": {
"version": "1.20.3",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz",
"integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==",
"license": "MIT",
"dependencies": {
"bytes": "3.1.2",
"content-type": "~1.0.5",
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"on-finished": "2.4.1",
"qs": "6.13.0",
"raw-body": "2.5.2",
"type-is": "~1.6.18",
"unpipe": "1.0.0"
},
"engines": {
"node": ">= 0.8",
"npm": "1.2.8000 || >= 1.4.16"
}
},
"node_modules/bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
"integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.1.tgz",
"integrity": "sha512-BhYE+WDaywFg2TBWYNXAE+8B1ATnThNBqXHP5nQu0jWJdVvY2hvkpyB3qOmtmDePiS5/BDQ8wASEWGMWRG148g==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/call-bound": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.3.tgz",
"integrity": "sha512-YTd+6wGlNlPxSuri7Y6X8tY2dmm12UMH66RpKMhiX6rsk5wXXnYgbUcOt8kiS31/AjfoTOvCsE+w8nZQLQnzHA==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"get-intrinsic": "^1.2.6"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/combined-stream": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
"integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==",
"license": "MIT",
"dependencies": {
"delayed-stream": "~1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/content-disposition": {
"version": "0.5.4",
"resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz",
"integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==",
"license": "MIT",
"dependencies": {
"safe-buffer": "5.2.1"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/content-type": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz",
"integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/cookie": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/cookie-signature": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==",
"license": "MIT"
},
"node_modules/cors": {
"version": "2.8.5",
"resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz",
"integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==",
"license": "MIT",
"dependencies": {
"object-assign": "^4",
"vary": "^1"
},
"engines": {
"node": ">= 0.10"
}
},
"node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/delayed-stream": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
"integrity": "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==",
"license": "MIT",
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/depd": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz",
"integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/destroy": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz",
"integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==",
"license": "MIT",
"engines": {
"node": ">= 0.8",
"npm": "1.2.8000 || >= 1.4.16"
}
},
"node_modules/dotenv": {
"version": "16.4.7",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.4.7.tgz",
"integrity": "sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==",
"license": "BSD-2-Clause",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://dotenvx.com"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/ee-first": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==",
"license": "MIT"
},
"node_modules/encodeurl": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz",
"integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-object-atoms": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.0.0.tgz",
"integrity": "sha512-MZ4iQ6JwHOBQjahnjwaC1ZtIBH+2ohjamzAO3oaHcXYup7qxjF2fixyH+Q71voWHeOkI2q/TnJao/KfXYIZWbw==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/escape-html": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz",
"integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==",
"license": "MIT"
},
"node_modules/etag": {
"version": "1.8.1",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz",
"integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/express": {
"version": "4.21.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz",
"integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==",
"license": "MIT",
"dependencies": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
"body-parser": "1.20.3",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
"cookie": "0.7.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
"encodeurl": "~2.0.0",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"finalhandler": "1.3.1",
"fresh": "0.5.2",
"http-errors": "2.0.0",
"merge-descriptors": "1.0.3",
"methods": "~1.1.2",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"path-to-regexp": "0.1.12",
"proxy-addr": "~2.0.7",
"qs": "6.13.0",
"range-parser": "~1.2.1",
"safe-buffer": "5.2.1",
"send": "0.19.0",
"serve-static": "1.16.2",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"type-is": "~1.6.18",
"utils-merge": "1.0.1",
"vary": "~1.1.2"
},
"engines": {
"node": ">= 0.10.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/express"
}
},
"node_modules/finalhandler": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz",
"integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==",
"license": "MIT",
"dependencies": {
"debug": "2.6.9",
"encodeurl": "~2.0.0",
"escape-html": "~1.0.3",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"statuses": "2.0.1",
"unpipe": "~1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/follow-redirects": {
"version": "1.15.9",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.9.tgz",
"integrity": "sha512-gew4GsXizNgdoRyqmyfMHyAmXsZDk6mHkSxZFCzW9gwlbtOW44CDtYavM+y+72qD/Vq2l550kMF52DT8fOLJqQ==",
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/RubenVerborgh"
}
],
"license": "MIT",
"engines": {
"node": ">=4.0"
},
"peerDependenciesMeta": {
"debug": {
"optional": true
}
}
},
"node_modules/form-data": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"license": "MIT",
"dependencies": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/forwarded": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
"integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/fresh": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz",
"integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
"integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==",
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-intrinsic": {
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.6.tgz",
"integrity": "sha512-qxsEs+9A+u85HhllWJJFicJfPDhRmjzoYdl64aMWW9yRIJmSyxdn8IEkuIM530/7T+lv0TIHd8L6Q/ra0tEoeA==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"dunder-proto": "^1.0.0",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.0.0",
"function-bind": "^1.1.2",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"license": "MIT",
"dependencies": {
"has-symbols": "^1.0.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"license": "MIT",
"dependencies": {
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/http-errors": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz",
"integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==",
"license": "MIT",
"dependencies": {
"depd": "2.0.0",
"inherits": "2.0.4",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"toidentifier": "1.0.1"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/iconv-lite": {
"version": "0.4.24",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz",
"integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==",
"license": "MIT",
"dependencies": {
"safer-buffer": ">= 2.1.2 < 3"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/inherits": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
"integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
"license": "ISC"
},
"node_modules/ipaddr.js": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz",
"integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==",
"license": "MIT",
"engines": {
"node": ">= 0.10"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/media-typer": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/merge-descriptors": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz",
"integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==",
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/methods": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz",
"integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mime": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz",
"integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==",
"license": "MIT",
"bin": {
"mime": "cli.js"
},
"engines": {
"node": ">=4"
}
},
"node_modules/mime-db": {
"version": "1.52.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
"integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mime-types": {
"version": "2.1.35",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
"integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
"license": "MIT",
"dependencies": {
"mime-db": "1.52.0"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"license": "MIT"
},
"node_modules/negotiator": {
"version": "0.6.3",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz",
"integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/object-inspect": {
"version": "1.13.3",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.3.tgz",
"integrity": "sha512-kDCGIbxkDSXE3euJZZXzc6to7fCrKHNI/hSRQnRuQ+BWjFNzZwiFF8fj/6o2t2G9/jTj8PSIYTfCLelLZEeRpA==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/on-finished": {
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz",
"integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==",
"license": "MIT",
"dependencies": {
"ee-first": "1.1.1"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/parseurl": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz",
"integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/path-to-regexp": {
"version": "0.1.12",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz",
"integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==",
"license": "MIT"
},
"node_modules/proxy-addr": {
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz",
"integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==",
"license": "MIT",
"dependencies": {
"forwarded": "0.2.0",
"ipaddr.js": "1.9.1"
},
"engines": {
"node": ">= 0.10"
}
},
"node_modules/proxy-from-env": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz",
"integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
"license": "MIT"
},
"node_modules/qs": {
"version": "6.13.0",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz",
"integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==",
"license": "BSD-3-Clause",
"dependencies": {
"side-channel": "^1.0.6"
},
"engines": {
"node": ">=0.6"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/range-parser": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz",
"integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/raw-body": {
"version": "2.5.2",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz",
"integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==",
"license": "MIT",
"dependencies": {
"bytes": "3.1.2",
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"unpipe": "1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
"integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
],
"license": "MIT"
},
"node_modules/safer-buffer": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==",
"license": "MIT"
},
"node_modules/send": {
"version": "0.19.0",
"resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz",
"integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==",
"license": "MIT",
"dependencies": {
"debug": "2.6.9",
"depd": "2.0.0",
"destroy": "1.2.0",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"fresh": "0.5.2",
"http-errors": "2.0.0",
"mime": "1.6.0",
"ms": "2.1.3",
"on-finished": "2.4.1",
"range-parser": "~1.2.1",
"statuses": "2.0.1"
},
"engines": {
"node": ">= 0.8.0"
}
},
"node_modules/send/node_modules/encodeurl": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
"integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/send/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"license": "MIT"
},
"node_modules/serve-static": {
"version": "1.16.2",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz",
"integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==",
"license": "MIT",
"dependencies": {
"encodeurl": "~2.0.0",
"escape-html": "~1.0.3",
"parseurl": "~1.3.3",
"send": "0.19.0"
},
"engines": {
"node": ">= 0.8.0"
}
},
"node_modules/setprototypeof": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz",
"integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==",
"license": "ISC"
},
"node_modules/side-channel": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz",
"integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"object-inspect": "^1.13.3",
"side-channel-list": "^1.0.0",
"side-channel-map": "^1.0.1",
"side-channel-weakmap": "^1.0.2"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/side-channel-list": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz",
"integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"object-inspect": "^1.13.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/side-channel-map": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz",
"integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==",
"license": "MIT",
"dependencies": {
"call-bound": "^1.0.2",
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.5",
"object-inspect": "^1.13.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/side-channel-weakmap": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz",
"integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==",
"license": "MIT",
"dependencies": {
"call-bound": "^1.0.2",
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.5",
"object-inspect": "^1.13.3",
"side-channel-map": "^1.0.1"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/statuses": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
"integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/toidentifier": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz",
"integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==",
"license": "MIT",
"engines": {
"node": ">=0.6"
}
},
"node_modules/type-is": {
"version": "1.6.18",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz",
"integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==",
"license": "MIT",
"dependencies": {
"media-typer": "0.3.0",
"mime-types": "~2.1.24"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
},
"node_modules/utils-merge": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==",
"license": "MIT",
"engines": {
"node": ">= 0.4.0"
}
},
"node_modules/vary": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz",
"integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==",
"license": "MIT",
"engines": {
"node": ">= 0.8"
}
}
}
}


@@ -0,0 +1,20 @@
{
"name": "meta-server",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node server.js",
"dev": "nodemon server.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": "",
"dependencies": {
"axios": "^1.7.9",
"cors": "^2.8.5",
"dotenv": "^16.4.7",
"express": "^4.21.2"
}
}


@@ -0,0 +1,91 @@
const express = require('express');
const router = express.Router();
const {
fetchCampaigns,
fetchAccountInsights,
updateCampaignBudget,
updateCampaignStatus,
} = require('../services/meta.service');
// Get all campaigns with insights
router.get('/campaigns', async (req, res) => {
try {
const { since, until } = req.query;
if (!since || !until) {
return res.status(400).json({ error: 'Date range is required (since, until)' });
}
const campaigns = await fetchCampaigns(since, until);
res.json(campaigns);
} catch (error) {
console.error('Campaign fetch error:', error);
res.status(500).json({
error: 'Failed to fetch campaigns',
details: error.response?.data?.error?.message || error.message,
});
}
});
// Get account insights
router.get('/account-insights', async (req, res) => {
try {
const { since, until } = req.query;
if (!since || !until) {
return res.status(400).json({ error: 'Date range is required (since, until)' });
}
const insights = await fetchAccountInsights(since, until);
res.json(insights);
} catch (error) {
console.error('Account insights fetch error:', error);
res.status(500).json({
error: 'Failed to fetch account insights',
details: error.response?.data?.error?.message || error.message,
});
}
});
// Update campaign budget
router.patch('/campaigns/:campaignId/budget', async (req, res) => {
try {
const { campaignId } = req.params;
const { budget } = req.body;
if (!budget) {
return res.status(400).json({ error: 'Budget is required' });
}
const result = await updateCampaignBudget(campaignId, budget);
res.json(result);
} catch (error) {
console.error('Budget update error:', error);
res.status(500).json({
error: 'Failed to update campaign budget',
details: error.response?.data?.error?.message || error.message,
});
}
});
// Update campaign status (pause/unpause)
router.post('/campaigns/:campaignId/:action', async (req, res) => {
try {
const { campaignId, action } = req.params;
if (!['pause', 'unpause'].includes(action)) {
return res.status(400).json({ error: 'Invalid action. Use "pause" or "unpause"' });
}
const result = await updateCampaignStatus(campaignId, action);
res.json(result);
} catch (error) {
console.error('Status update error:', error);
res.status(500).json({
error: 'Failed to update campaign status',
details: error.response?.data?.error?.message || error.message,
});
}
});
module.exports = router;
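// Illustrative usage (not part of the original file): server.js mounts this router at
// /api/meta (default port 3005), so campaign insights for a date range could be fetched
// with something like:
//   curl "http://localhost:3005/api/meta/campaigns?since=2026-01-01&until=2026-01-31"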


@@ -0,0 +1,31 @@
const express = require('express');
const cors = require('cors');
const path = require('path');
require('dotenv').config({
path: path.resolve(__dirname, '.env')
});
const app = express();
const port = process.env.PORT || 3005;
app.use(cors());
app.use(express.json());
// Import routes
const campaignRoutes = require('./routes/campaigns.routes');
// Use routes
app.use('/api/meta', campaignRoutes);
// Error handling middleware
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).json({ error: 'Something went wrong!' });
});
// Start server
app.listen(port, () => {
console.log(`Meta API server running on port ${port}`);
});
module.exports = app;


@@ -0,0 +1,99 @@
const { default: axios } = require('axios');
const META_API_VERSION = process.env.META_API_VERSION || 'v21.0';
const META_API_BASE_URL = `https://graph.facebook.com/${META_API_VERSION}`;
const META_ACCESS_TOKEN = process.env.META_ACCESS_TOKEN;
const AD_ACCOUNT_ID = process.env.META_AD_ACCOUNT_ID;
const metaApiRequest = async (endpoint, params = {}) => {
try {
const response = await axios.get(`${META_API_BASE_URL}/${endpoint}`, {
params: {
access_token: META_ACCESS_TOKEN,
time_zone: 'America/New_York',
...params,
},
});
return response.data;
} catch (error) {
console.error('Meta API Error:', {
message: error.message,
response: error.response?.data,
endpoint,
});
throw error;
}
};
const fetchCampaigns = async (since, until) => {
const campaigns = await metaApiRequest(`act_${AD_ACCOUNT_ID}/campaigns`, {
fields: [
'id',
'name',
'status',
'objective',
'daily_budget',
'lifetime_budget',
'adsets{daily_budget,lifetime_budget}',
`insights.time_range({'since':'${since}','until':'${until}'}).level(campaign){
spend,
impressions,
clicks,
ctr,
reach,
frequency,
cpm,
cpc,
actions,
action_values,
cost_per_action_type
}`,
].join(','),
limit: 100,
});
return campaigns.data.filter(c => c.insights?.data?.[0]?.spend > 0);
};
const fetchAccountInsights = async (since, until) => {
const accountInsights = await metaApiRequest(`act_${AD_ACCOUNT_ID}/insights`, {
fields: 'reach,spend,impressions,clicks,ctr,cpm,actions,action_values',
time_range: JSON.stringify({ since, until }),
});
return accountInsights.data[0] || null;
};
const updateCampaignBudget = async (campaignId, budget) => {
try {
const response = await axios.post(`${META_API_BASE_URL}/${campaignId}`, {
access_token: META_ACCESS_TOKEN,
daily_budget: budget * 100, // Convert to cents
});
return response.data;
} catch (error) {
console.error('Update campaign budget error:', error);
throw error;
}
};
const updateCampaignStatus = async (campaignId, action) => {
try {
const status = action === 'pause' ? 'PAUSED' : 'ACTIVE';
const response = await axios.post(`${META_API_BASE_URL}/${campaignId}`, {
access_token: META_ACCESS_TOKEN,
status,
});
return response.data;
} catch (error) {
console.error('Update campaign status error:', error);
throw error;
}
};
module.exports = {
fetchCampaigns,
fetchAccountInsights,
updateCampaignBudget,
updateCampaignStatus,
};


@@ -0,0 +1,24 @@
{
"name": "dashboard",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"dependencies": {
"dotenv": "^16.4.7"
}
},
"node_modules/dotenv": {
"version": "16.4.7",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.4.7.tgz",
"integrity": "sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==",
"license": "BSD-2-Clause",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://dotenvx.com"
}
}
}
}


@@ -0,0 +1,5 @@
{
"dependencies": {
"dotenv": "^16.4.7"
}
}


@@ -0,0 +1,13 @@
# Server Configuration
NODE_ENV=development
TYPEFORM_PORT=3008
# Redis Configuration
REDIS_URL=redis://localhost:6379
# Typeform API Configuration
TYPEFORM_ACCESS_TOKEN=your_typeform_access_token_here
# Optional: Form IDs (if you want to store them in env)
TYPEFORM_FORM_ID_1=your_first_form_id
TYPEFORM_FORM_ID_2=your_second_form_id

File diff suppressed because it is too large


@@ -0,0 +1,20 @@
{
"name": "typeform-server",
"version": "1.0.0",
"description": "Typeform API integration server",
"main": "server.js",
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js"
},
"dependencies": {
"axios": "^1.6.2",
"cors": "^2.8.5",
"dotenv": "^16.3.1",
"express": "^4.18.2",
"redis": "^4.6.11"
},
"devDependencies": {
"nodemon": "^3.0.2"
}
}


@@ -0,0 +1,121 @@
const express = require('express');
const router = express.Router();
const typeformService = require('../services/typeform.service');
// Get form responses
router.get('/forms/:formId/responses', async (req, res) => {
try {
const { formId } = req.params;
const filters = req.query;
console.log(`Fetching responses for form ${formId} with filters:`, filters);
if (!formId) {
return res.status(400).json({
error: 'Missing form ID',
details: 'The form ID parameter is required'
});
}
const data = await typeformService.getFormResponsesWithFilters(formId, filters);
if (!data) {
return res.status(404).json({
error: 'No data found',
details: `No responses found for form ${formId}`
});
}
res.json(data);
} catch (error) {
console.error('Form responses error:', {
formId: req.params.formId,
filters: req.query,
error: error.message,
stack: error.stack,
response: error.response?.data
});
// Handle specific error cases
if (error.response?.status === 401) {
return res.status(401).json({
error: 'Authentication failed',
details: 'Invalid Typeform API credentials'
});
}
if (error.response?.status === 404) {
return res.status(404).json({
error: 'Not found',
details: `Form '${req.params.formId}' not found`
});
}
if (error.response?.status === 400) {
return res.status(400).json({
error: 'Invalid request',
details: error.response?.data?.message || 'The request was invalid',
data: error.response?.data
});
}
res.status(500).json({
error: 'Failed to fetch form responses',
details: error.response?.data?.message || error.message,
data: error.response?.data
});
}
});
// Get form insights
router.get('/forms/:formId/insights', async (req, res) => {
try {
const { formId } = req.params;
if (!formId) {
return res.status(400).json({
error: 'Missing form ID',
details: 'The form ID parameter is required'
});
}
const data = await typeformService.getFormInsights(formId);
if (!data) {
return res.status(404).json({
error: 'No data found',
details: `No insights found for form ${formId}`
});
}
res.json(data);
} catch (error) {
console.error('Form insights error:', {
formId: req.params.formId,
error: error.message,
response: error.response?.data
});
if (error.response?.status === 401) {
return res.status(401).json({
error: 'Authentication failed',
details: 'Invalid Typeform API credentials'
});
}
if (error.response?.status === 404) {
return res.status(404).json({
error: 'Not found',
details: `Form '${req.params.formId}' not found`
});
}
res.status(500).json({
error: 'Failed to fetch form insights',
details: error.response?.data?.message || error.message,
data: error.response?.data
});
}
});
module.exports = router;
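// Illustrative usage (not part of the original file): server.js mounts this router at
// /api/typeform (default port 3008), so filtered responses for a form could be fetched
// with something like (<formId> is a placeholder):
//   curl "http://localhost:3008/api/typeform/forms/<formId>/responses?since=2026-01-01&until=2026-01-31&pageSize=50"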


@@ -0,0 +1,31 @@
const express = require('express');
const cors = require('cors');
const path = require('path');
require('dotenv').config({
path: path.resolve(__dirname, '.env')
});
const app = express();
const port = process.env.TYPEFORM_PORT || 3008;
app.use(cors());
app.use(express.json());
// Import routes
const typeformRoutes = require('./routes/typeform.routes');
// Use routes
app.use('/api/typeform', typeformRoutes);
// Error handling middleware
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).json({ error: 'Something went wrong!' });
});
// Start server
app.listen(port, () => {
console.log(`Typeform API server running on port ${port}`);
});
module.exports = app;


@@ -0,0 +1,142 @@
const axios = require('axios');
const { createClient } = require('redis');
class TypeformService {
constructor() {
this.redis = createClient({
url: process.env.REDIS_URL
});
this.redis.on('error', err => console.error('Redis Client Error:', err));
this.redis.connect().catch(err => console.error('Redis connection error:', err));
const token = process.env.TYPEFORM_ACCESS_TOKEN;
console.log('Initializing Typeform client with token:', token ? `${token.slice(0, 10)}...` : 'missing');
this.apiClient = axios.create({
baseURL: 'https://api.typeform.com',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// Test the token
this.testConnection();
}
async testConnection() {
try {
const response = await this.apiClient.get('/forms');
console.log('Typeform connection test successful:', {
status: response.status,
headers: response.headers,
});
} catch (error) {
console.error('Typeform connection test failed:', {
error: error.message,
response: error.response?.data,
status: error.response?.status,
});
}
}
async getFormResponses(formId, params = {}) {
const cacheKey = `typeform:responses:${formId}:${JSON.stringify(params)}`;
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log(`Form responses for ${formId} found in Redis cache`);
return JSON.parse(cachedData);
}
// Fetch from API
const response = await this.apiClient.get(`/forms/${formId}/responses`, { params });
const data = response.data;
// Save to Redis with 5 minute expiry
await this.redis.set(cacheKey, JSON.stringify(data), {
EX: 300 // 5 minutes
});
return data;
} catch (error) {
console.error(`Error fetching form responses for ${formId}:`, {
error: error.message,
params,
response: error.response?.data
});
throw error;
}
}
async getFormInsights(formId) {
const cacheKey = `typeform:insights:${formId}`;
try {
// Try Redis first
const cachedData = await this.redis.get(cacheKey);
if (cachedData) {
console.log(`Form insights for ${formId} found in Redis cache`);
return JSON.parse(cachedData);
}
// Log the request details
console.log(`Fetching insights for form ${formId}...`, {
url: `/insights/${formId}/summary`,
headers: this.apiClient.defaults.headers
});
// Fetch from API
const response = await this.apiClient.get(`/insights/${formId}/summary`);
console.log('Typeform insights response:', {
status: response.status,
headers: response.headers,
data: response.data
});
const data = response.data;
// Save to Redis with 5 minute expiry
await this.redis.set(cacheKey, JSON.stringify(data), {
EX: 300 // 5 minutes
});
return data;
} catch (error) {
console.error(`Error fetching form insights for ${formId}:`, {
error: error.message,
response: error.response?.data,
status: error.response?.status,
headers: error.response?.headers,
requestUrl: `/insights/${formId}/summary`,
requestHeaders: this.apiClient.defaults.headers
});
throw error;
}
}
async getFormResponsesWithFilters(formId, { since, until, pageSize = 25, ...otherParams } = {}) {
try {
const params = {
page_size: pageSize,
...otherParams
};
if (since) {
params.since = new Date(since).toISOString();
}
if (until) {
params.until = new Date(until).toISOString();
}
return await this.getFormResponses(formId, params);
} catch (error) {
console.error('Error in getFormResponsesWithFilters:', error);
throw error;
}
}
}
module.exports = new TypeformService();


@@ -0,0 +1,196 @@
-- Create function for updating timestamps if it doesn't exist
CREATE OR REPLACE FUNCTION update_updated_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Create function for updating updated_at timestamps
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Drop tables in reverse order of dependency
DROP TABLE IF EXISTS public.settings_product CASCADE;
DROP TABLE IF EXISTS public.settings_vendor CASCADE;
DROP TABLE IF EXISTS public.settings_global CASCADE;
-- Table Definition: settings_global
CREATE TABLE public.settings_global (
setting_key VARCHAR PRIMARY KEY,
setting_value VARCHAR NOT NULL,
description TEXT,
updated_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Table Definition: settings_vendor
CREATE TABLE public.settings_vendor (
vendor VARCHAR PRIMARY KEY, -- Matches products.vendor
default_lead_time_days INT,
default_days_of_stock INT,
updated_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Index for faster lookups if needed (PK usually sufficient)
-- CREATE INDEX idx_settings_vendor_vendor ON public.settings_vendor(vendor);
-- Table Definition: settings_product
CREATE TABLE public.settings_product (
pid INT8 PRIMARY KEY,
lead_time_days INT, -- Overrides vendor/global
days_of_stock INT, -- Overrides vendor/global
safety_stock INT DEFAULT 0, -- Minimum desired stock level
forecast_method VARCHAR DEFAULT 'standard', -- e.g., 'standard', 'seasonal'
exclude_from_forecast BOOLEAN DEFAULT FALSE,
updated_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT fk_settings_product_pid FOREIGN KEY (pid) REFERENCES public.products(pid) ON DELETE CASCADE ON UPDATE CASCADE
);
-- Description: Inserts or updates standard default global settings.
-- Safe to rerun; will update existing keys with these default values.
-- Dependencies: `settings_global` table must exist.
-- Frequency: Run once initially, or rerun if you want to reset global defaults.
INSERT INTO public.settings_global (setting_key, setting_value, description) VALUES
('abc_revenue_threshold_a', '0.80', 'Revenue percentage for Class A (cumulative)'),
('abc_revenue_threshold_b', '0.95', 'Revenue percentage for Class B (cumulative)'),
('abc_calculation_basis', 'revenue_30d', 'Metric for ABC calc (revenue_30d, sales_30d, lifetime_revenue)'),
('abc_calculation_period', '30', 'Days period for ABC calculation if not lifetime'),
('default_forecast_method', 'standard', 'Default forecast method (standard, seasonal)'),
('default_lead_time_days', '14', 'Global default lead time in days'),
('default_days_of_stock', '30', 'Global default days of stock coverage target'),
-- Set default safety stock to 0 units. Can be overridden per product.
-- If you wanted safety stock in days, you'd store 'days' here and calculate units later.
('default_safety_stock_units', '0', 'Global default safety stock in units')
ON CONFLICT (setting_key) DO UPDATE SET
setting_value = EXCLUDED.setting_value,
description = EXCLUDED.description,
updated_at = CURRENT_TIMESTAMP; -- Update timestamp if default value changes
-- Description: Creates placeholder rows in `settings_vendor` for each unique vendor
-- found in the `products` table. Does NOT set specific overrides.
-- Safe to rerun; will NOT overwrite existing vendor settings.
-- Dependencies: `settings_vendor` table must exist, `products` table populated.
-- Frequency: Run once after initial product load, or periodically if new vendors are added.
INSERT INTO public.settings_vendor (
vendor,
default_lead_time_days,
default_days_of_stock
-- updated_at will use its default CURRENT_TIMESTAMP on insert
)
SELECT
DISTINCT p.vendor,
-- Explicitly cast NULL to INTEGER to resolve type mismatch
CAST(NULL AS INTEGER),
CAST(NULL AS INTEGER)
FROM
public.products p
WHERE
p.vendor IS NOT NULL
AND p.vendor <> '' -- Exclude blank vendors if necessary
ON CONFLICT (vendor) DO NOTHING; -- IMPORTANT: Do not overwrite existing vendor settings
SELECT COUNT(*) FROM public.settings_vendor; -- Verify rows were inserted
-- Description: Creates placeholder rows in `settings_product` for each unique product
-- found in the `products` table. Sets basic defaults but no specific overrides.
-- Safe to rerun; will NOT overwrite existing product settings.
-- Dependencies: `settings_product` table must exist, `products` table populated.
-- Frequency: Run once after initial product load, or periodically if new products are added.
INSERT INTO public.settings_product (
pid,
lead_time_days, -- NULL = Inherit from Vendor/Global
days_of_stock, -- NULL = Inherit from Vendor/Global
safety_stock, -- Default to 0 units initially
forecast_method, -- NULL = Inherit from Global ('standard')
exclude_from_forecast -- Default to FALSE
-- updated_at will use its default CURRENT_TIMESTAMP on insert
)
SELECT
p.pid,
CAST(NULL AS INTEGER), -- Explicitly cast NULL to INTEGER
CAST(NULL AS INTEGER), -- Explicitly cast NULL to INTEGER
COALESCE((SELECT setting_value::int FROM settings_global WHERE setting_key = 'default_safety_stock_units'), 0), -- Use global default safety stock units
CAST(NULL AS VARCHAR), -- Cast NULL to VARCHAR for forecast_method (already varchar, but explicit)
FALSE -- Default: Include in forecast
FROM
public.products p
ON CONFLICT (pid) DO NOTHING; -- IMPORTANT: Do not overwrite existing product-specific settings
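-- Illustrative sketch (not part of the original script): the NULL placeholders above mean
-- "inherit", so the intended resolution order is product override -> vendor default ->
-- global default. An effective lead time per product could be derived roughly like this:
SELECT
    p.pid,
    COALESCE(
        sp.lead_time_days,
        sv.default_lead_time_days,
        (SELECT setting_value::int FROM public.settings_global WHERE setting_key = 'default_lead_time_days')
    ) AS effective_lead_time_days
FROM public.products p
LEFT JOIN public.settings_product sp ON sp.pid = p.pid
LEFT JOIN public.settings_vendor sv ON sv.vendor = p.vendor
LIMIT 10; -- small sample for inspection only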
-- History and status tables
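-- NOTE: the calculation_status enum type used by the status columns below is assumed to be
-- created elsewhere in the schema; this script does not define it.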
CREATE TABLE IF NOT EXISTS calculate_history (
id BIGSERIAL PRIMARY KEY,
start_time TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
end_time TIMESTAMP WITH TIME ZONE NULL,
duration_seconds INTEGER,
duration_minutes DECIMAL(10,2) GENERATED ALWAYS AS (duration_seconds::decimal / 60.0) STORED,
total_products INTEGER DEFAULT 0,
total_orders INTEGER DEFAULT 0,
total_purchase_orders INTEGER DEFAULT 0,
processed_products INTEGER DEFAULT 0,
processed_orders INTEGER DEFAULT 0,
processed_purchase_orders INTEGER DEFAULT 0,
status calculation_status DEFAULT 'running',
error_message TEXT,
additional_info JSONB
);
CREATE TABLE IF NOT EXISTS calculate_status (
module_name text PRIMARY KEY,
last_calculation_timestamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS sync_status (
table_name TEXT PRIMARY KEY,
last_sync_timestamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
last_sync_id BIGINT
);
CREATE TABLE IF NOT EXISTS import_history (
id BIGSERIAL PRIMARY KEY,
table_name VARCHAR(50) NOT NULL,
start_time TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
end_time TIMESTAMP WITH TIME ZONE NULL,
duration_seconds INTEGER,
duration_minutes DECIMAL(10,2) GENERATED ALWAYS AS (duration_seconds::decimal / 60.0) STORED,
records_added INTEGER DEFAULT 0,
records_updated INTEGER DEFAULT 0,
records_deleted INTEGER DEFAULT 0,
records_skipped INTEGER DEFAULT 0,
total_processed INTEGER DEFAULT 0,
is_incremental BOOLEAN DEFAULT FALSE,
status calculation_status DEFAULT 'running',
error_message TEXT,
additional_info JSONB
);
-- Create all indexes after tables are fully created
CREATE INDEX IF NOT EXISTS idx_last_calc ON calculate_status(last_calculation_timestamp);
CREATE INDEX IF NOT EXISTS idx_last_sync ON sync_status(last_sync_timestamp);
CREATE INDEX IF NOT EXISTS idx_table_time ON import_history(table_name, start_time);
CREATE INDEX IF NOT EXISTS idx_import_history_status ON import_history(status);
CREATE INDEX IF NOT EXISTS idx_calculate_history_status ON calculate_history(status);
-- Add comments for documentation
COMMENT ON TABLE import_history IS 'Tracks history of data import operations with detailed statistics';
COMMENT ON COLUMN import_history.records_deleted IS 'Number of records deleted during this import';
COMMENT ON COLUMN import_history.records_skipped IS 'Number of records skipped (e.g., unchanged, invalid)';
COMMENT ON COLUMN import_history.total_processed IS 'Total number of records examined/processed, including skipped';
COMMENT ON TABLE calculate_history IS 'Tracks history of metrics calculation runs with performance data';
COMMENT ON COLUMN calculate_history.duration_seconds IS 'Total duration of the calculation in seconds';
COMMENT ON COLUMN calculate_history.additional_info IS 'JSON object containing step timings, row counts, and other detailed metrics';


@@ -0,0 +1,17 @@
-- Daily Deals schema for local PostgreSQL
-- Synced from production MySQL product_daily_deals + product_current_prices
CREATE TABLE IF NOT EXISTS product_daily_deals (
deal_id serial PRIMARY KEY,
deal_date date NOT NULL,
pid bigint NOT NULL,
price_id bigint NOT NULL,
-- Denormalized from product_current_prices so we don't need to sync that whole table
deal_price numeric(10,3),
created_at timestamptz DEFAULT NOW(),
CONSTRAINT fk_daily_deals_pid FOREIGN KEY (pid) REFERENCES products(pid) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_daily_deals_date ON product_daily_deals(deal_date);
CREATE INDEX IF NOT EXISTS idx_daily_deals_pid ON product_daily_deals(pid);
CREATE UNIQUE INDEX IF NOT EXISTS idx_daily_deals_unique ON product_daily_deals(deal_date, pid);

Some files were not shown because too many files have changed in this diff