5 Commits

SHA1 Message Date
00249f7c33 Clean up routes 2025-04-08 21:26:00 -04:00
f271f3aae4 Get frontend dashboard/analytics mostly loading data again 2025-04-08 00:02:43 -04:00
43f76e4ac0 Fix specific import calculations 2025-04-07 22:07:21 -04:00
92ff80fba2 Import and calculate tweaks and fixes 2025-04-06 17:12:36 -04:00
a4c1a19d2e Try to synchronize time zones across import 2025-04-05 16:20:43 -04:00
29 changed files with 2590 additions and 2816 deletions

docs/routes-cleanup.md (new file, 271 lines)

@@ -0,0 +1,271 @@
**Analysis of Potential Issues**
1. **Obsolete Functionality:**
* **`config.js` Legacy Endpoints:** The endpoints `GET /config/`, `PUT /config/stock-thresholds/:id`, `PUT /config/lead-time-thresholds/:id`, `PUT /config/sales-velocity/:id`, `PUT /config/abc-classification/:id`, `PUT /config/safety-stock/:id`, and `PUT /config/turnover/:id` appear **highly likely to be obsolete**. They reference older, single-row config tables (`stock_thresholds`, etc.) while newer endpoints (`/config/global`, `/config/products`, `/config/vendors`) manage settings in more structured tables (`settings_global`, `settings_product`, `settings_vendor`). Unless specifically required for backward compatibility, these legacy endpoints should be removed to avoid confusion and potential data conflicts.
* **`analytics.js` Forecast Endpoint (`GET /analytics/forecast`):** This endpoint uses **MySQL syntax** (`DATEDIFF`, `DATE_FORMAT`, `JSON_OBJECT`, `?` placeholders) but seems intended to run within the analytics module which otherwise uses PostgreSQL (`req.app.locals.pool`, `date_trunc`, `::text`, `$1` placeholders). This endpoint is likely **obsolete or misplaced** and will not function correctly against the PostgreSQL database.
* **`csv.js` Redundant Actions:**
* `POST /csv/update` seems redundant with `POST /csv/full-update`. The latter uses the `runScript` helper and dedicated state (`activeFullUpdate`), appearing more robust. `/csv/update` might be older or incomplete.
* `POST /csv/reset` seems redundant with `POST /csv/full-reset`. Similar reasoning applies; `/csv/full-reset` appears preferred.
* **`products.js` Import Endpoint (`POST /products/import`):** This is **dangerous duplication**. The `/csv` module handles imports (`/csv/import`, `/csv/import-from-prod`) with locking (`activeImport`) to prevent concurrent operations. This endpoint lacks such locking and could corrupt data if run simultaneously with other CSV/reset operations. It should likely be removed.
* **`products.js` Metrics Endpoint (`GET /products/:id/metrics`):** This is redundant. The `/metrics/:pid` endpoint provides the same, possibly more comprehensive, data directly from the `product_metrics` table. Clients should use `/metrics/:pid` instead.
2. **Overlap or Inappropriate Duplication of Effort:**
* **AI Prompt Getters:** `GET /ai-prompts/type/general` and `GET /ai-prompts/type/system` could potentially be handled by adding a query parameter filter to `GET /ai-prompts/` (e.g., `GET /ai-prompts?prompt_type=general`). However, dedicated endpoints for single, specific items can sometimes be simpler. This is more of a design choice than a major issue.
* **Vendor Performance/Metrics:** There are multiple ways to get vendor performance data:
* `GET /analytics/vendors` (uses `vendor_metrics`)
* `GET /dashboard/vendor/performance` (uses `purchase_orders`)
* `GET /purchase-orders/vendor-metrics` (uses `purchase_orders`)
* `GET /vendors-aggregate/` (uses `vendor_metrics`, augmented with `purchase_orders`)
This suggests significant overlap. The `/vendors-aggregate` endpoint seems the most comprehensive, combining pre-aggregated data with some real-time info. The others, especially `/dashboard/vendor/performance` and `/purchase-orders/vendor-metrics` which calculate directly from `purchase_orders`, might be redundant or less performant.
* **Product Listing:**
* `GET /products/` lists products joining `products`, `product_metrics`, and `categories`.
* `GET /metrics/` lists products primarily from `product_metrics`.
They offer similar filtering/sorting. If `product_metrics` contains all necessary display fields, `GET /products/` might be partly redundant for simple listing views, although it does provide aggregated category names. Evaluate if both full list endpoints are necessary.
* **Image Uploads/Management:** Image handling is split:
* `products-import.js`: Uploads temporary images for product import to `/uploads/products/`, schedules deletion.
* `reusable-images.js`: Uploads persistent images to `/uploads/reusable/`, stores metadata in DB.
* `products-import.js` has `/check-file` and `/list-uploads` that can see *both* directories, while `reusable-images.js` has a `/check-file` that only sees its own. This separation could be confusing. Clarify the purpose and lifecycle of images in each directory.
* **Background Task Management (`csv.js`):** The use of `activeImport` for multiple unrelated tasks (import, reset, metrics calc) prevents concurrency, which might be too restrictive. The cancellation logic (`/cancel`) only targets `full-update`/`full-reset`, not tasks locked by `activeImport`. This needs unification.
* **Analytics/Dashboard Base Table Queries:** Several endpoints in `analytics.js` (`/pricing`, `/categories`) and `dashboard.js` (`/best-sellers`, `/sales/metrics`, `/trending/products`, `/key-metrics`, `/inventory-health`, `/sales-overview`) query base tables (`orders`, `products`, `purchase_orders`) directly, while many others leverage pre-aggregated `_metrics` tables. This inconsistency can lead to performance differences and suggests potential for optimization by using aggregates where possible.
3. **Obvious Mistakes / Data Issues:**
* **AI Prompt Fetching:** `GET /ai-prompts/company/:companyId`, `/type/general`, `/type/system` return `result.rows[0]`. This assumes uniqueness. If the underlying DB constraints (`unique_company_prompt`, etc.) fail or aren't present, this could silently hide data if multiple rows match. The use of unique constraint handling in POST/PUT suggests this is likely intended and safe *if* DB constraints are solid.
* **Mixed Databases & SSH Tunnels:** The heavy reliance in `ai_validation.js` and `products-import.js` on connecting to a production MySQL DB via SSH tunnel while also using a local PostgreSQL DB adds significant architectural complexity.
* **Inefficiency:** In `ai_validation.js` (`generateDebugResponse`), an SSH tunnel and MySQL connection (`promptTunnel`, `promptConnection`) are established but seem unused when fetching prompts (which correctly come from the PG pool `res.app.locals.pool`). This is wasted effort.
* **Improvement:** The `getDbConnection` function in `products-import.js` implements caching/pooling for the SSH/MySQL connection; this is much better and should ideally be used consistently wherever the production DB is accessed (e.g., in `ai_validation.js`).
* **`products.js` Brand Filtering:** `GET /products/brands` filters brands based on having associated purchase orders with a cost >= 500. This seems arbitrary for a general list of brands and might return incomplete results depending on the use case.
* **Type Handling:** Ensure `parseValue` handles all required types and edge cases correctly, especially for filtering complex queries in `*-aggregate` and `metrics` routes. Explicit type casting in SQL (`::numeric`, `::text`, etc.) is generally good practice in PostgreSQL.
* **Dummy Data:** Several `dashboard.js` endpoints return hardcoded dummy data on errors or when no data is found. While this prevents UI crashes, it can mask real issues. Ensure logging is robust when fallbacks are used.
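If any of the MySQL-flavored queries flagged above (notably `GET /analytics/forecast`) were ported to PostgreSQL rather than removed, the `?` placeholders would need to become numbered `$1`-style parameters. A minimal sketch of that mechanical step, using a hypothetical helper that is not part of the codebase:

```javascript
// Hypothetical helper: rewrite MySQL-style `?` placeholders as PostgreSQL's
// numbered `$1, $2, ...`. Naive sketch: it does not skip `?` characters
// that appear inside string literals or comments.
function toPgPlaceholders(sql) {
  let n = 0;
  return sql.replace(/\?/g, () => `$${++n}`);
}

// The date functions need hand translation too, e.g. MySQL's
// DATE_FORMAT(d, '%Y-%m') roughly maps to to_char(d, 'YYYY-MM') and
// DATEDIFF(a, b) to (a::date - b::date) in PostgreSQL.
console.log(toPgPlaceholders('SELECT pid FROM orders WHERE date >= ? AND date < ?'));
// SELECT pid FROM orders WHERE date >= $1 AND date < $2
```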

**Summary of Endpoints**
Here's a summary of the available endpoints, grouped by their likely file/module:
**1. AI Prompts (`ai_prompts.js`)**
* `GET /`: Get all AI prompts.
* `GET /:id`: Get a specific AI prompt by its ID.
* `GET /company/:companyId`: Get the AI prompt for a specific company (expects one). **(Deprecated)**
* `GET /type/general`: Get the general AI prompt (expects one). **(Deprecated)**
* `GET /type/system`: Get the system AI prompt (expects one). **(Deprecated)**
* `GET /by-type`: Get AI prompt by type (general, system, company_specific) with optional company parameter. **(New Consolidated Endpoint)**
* `POST /`: Create a new AI prompt.
* `PUT /:id`: Update an existing AI prompt.
* `DELETE /:id`: Delete an AI prompt.
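The consolidated `/by-type` endpoint could dispatch on a `type` query parameter. A rough sketch of how the handler might select its query (table and column names here are assumptions based on the constraints described elsewhere in this document, not the actual `ai_prompts.js` code):

```javascript
// Build the query for GET /ai-prompts/by-type?type=...&company=...
// Returns { text, values } in the shape pg's pool.query(text, values) expects.
function buildPromptQuery(type, companyId) {
  if (type === 'company_specific') {
    if (!companyId) {
      throw new Error('company parameter required for company_specific prompts');
    }
    return {
      text: 'SELECT * FROM ai_prompts WHERE prompt_type = $1 AND company_id = $2',
      values: ['company_specific', companyId],
    };
  }
  if (type === 'general' || type === 'system') {
    // General/system prompts are singletons (enforced by partial unique indexes).
    return {
      text: 'SELECT * FROM ai_prompts WHERE prompt_type = $1',
      values: [type],
    };
  }
  throw new Error(`unknown prompt type: ${type}`);
}
```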
**2. AI Validation (`ai_validation.js`)**
* `POST /debug`: Generate and view the structure of prompts and taxonomy data (for debugging, doesn't call OpenAI). Connects to Prod MySQL (taxonomy) and Local PG (prompts, performance).
* `POST /validate`: Validate product data using OpenAI. Connects to Prod MySQL (taxonomy) and Local PG (prompts, performance).
* `GET /test-taxonomy`: Test endpoint to query sample taxonomy data from Prod MySQL.
**3. Analytics (`analytics.js`)**
* `GET /stats`: Get overall business statistics from metrics tables.
* `GET /profit`: Get profit analysis data (by category, over time, top products) from metrics tables.
* `GET /vendors`: Get vendor performance analysis from `vendor_metrics`.
* `GET /stock`: Get stock analysis data (turnover, levels, critical items) from metrics tables.
* `GET /pricing`: Get pricing analysis (price points, elasticity, recommendations) - **uses `orders` table**.
* `GET /categories`: Get category performance analysis (revenue, profit, growth, distribution, trends) - **uses `orders` and `products` tables**.
* `GET /forecast`: (**Likely Obsolete/Broken**) Attempts to get forecast data using MySQL syntax.
**4. Brands Aggregate (`brands-aggregate.js`)**
* `GET /filter-options`: Get distinct brand names and statuses for UI filters (from `brand_metrics`).
* `GET /stats`: Get overall statistics related to brands (from `brand_metrics`).
* `GET /`: List brands with aggregated metrics, supporting filtering, sorting, pagination (from `brand_metrics`).
**5. Categories Aggregate (`categories-aggregate.js`)**
* `GET /filter-options`: Get distinct category types, statuses, and counts for UI filters (from `category_metrics` & `categories`).
* `GET /stats`: Get overall statistics related to categories (from `category_metrics` & `categories`).
* `GET /`: List categories with aggregated metrics, supporting filtering, sorting (incl. hierarchy), pagination (from `category_metrics` & `categories`).
**6. Configuration (`config.js`)**
* **(New)** `GET /global`: Get all global settings.
* **(New)** `PUT /global`: Update global settings.
* **(New)** `GET /products`: List product-specific settings with pagination/search.
* **(New)** `PUT /products/:pid`: Update/Create product-specific settings.
* **(New)** `POST /products/:pid/reset`: Reset product settings to defaults.
* **(New)** `GET /vendors`: List vendor-specific settings with pagination/search.
* **(New)** `PUT /vendors/:vendor`: Update/Create vendor-specific settings.
* **(New)** `POST /vendors/:vendor/reset`: Reset vendor settings to defaults.
* **(Legacy/Obsolete)** `GET /`: Get all config from old single-row tables.
* **(Legacy/Obsolete)** `PUT /stock-thresholds/:id`: Update old stock thresholds.
* **(Legacy/Obsolete)** `PUT /lead-time-thresholds/:id`: Update old lead time thresholds.
* **(Legacy/Obsolete)** `PUT /sales-velocity/:id`: Update old sales velocity config.
* **(Legacy/Obsolete)** `PUT /abc-classification/:id`: Update old ABC config.
* **(Legacy/Obsolete)** `PUT /safety-stock/:id`: Update old safety stock config.
* **(Legacy/Obsolete)** `PUT /turnover/:id`: Update old turnover config.
**7. CSV Operations & Background Tasks (`csv.js`)**
* `GET /:type/progress`: SSE endpoint for full update/reset progress.
* `GET /test`: Simple test endpoint.
* `GET /status`: Check status of the generic background task lock (`activeImport`).
* `GET /calculate-metrics/status`: Check status of metrics calculation.
* `GET /history/import`: Get recent import history.
* `GET /history/calculate`: Get recent metrics calculation history.
* `GET /status/modules`: Get last calculation time per module.
* `GET /status/tables`: Get last sync time per table.
* `GET /status/table-counts`: Get record counts for key tables.
* `POST /update`: (**Potentially Obsolete**) Trigger `update-csv.js` script.
* `POST /import`: Trigger `import-csv.js` script.
* `POST /cancel`: Cancel `/full-update` or `/full-reset` task.
* `POST /reset`: (**Potentially Obsolete**) Trigger `reset-db.js` script.
* `POST /reset-metrics`: Trigger `reset-metrics.js` script.
* `POST /calculate-metrics`: Trigger `calculate-metrics.js` script.
* `POST /import-from-prod`: Trigger `import-from-prod.js` script.
* `POST /full-update`: Trigger `full-update.js` script (preferred update).
* `POST /full-reset`: Trigger `full-reset.js` script (preferred reset).
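As noted in the analysis above, the single `activeImport` flag serializes unrelated tasks and `/cancel` cannot reach the tasks it guards. One possible shape for a unified replacement (a sketch, not the current `csv.js` implementation) is a per-task-type registry that also holds a cancel hook for each running task:

```javascript
// Sketch of a per-task-type lock registry. Each task type (import, reset,
// calculate-metrics, ...) gets its own slot, so unrelated tasks can run
// concurrently while duplicates of the same type are rejected, and a /cancel
// handler can address any running task by name.
class TaskRegistry {
  constructor() {
    this.tasks = new Map(); // type -> { startedAt, cancel }
  }
  start(type, cancel) {
    if (this.tasks.has(type)) {
      throw new Error(`task already running: ${type}`);
    }
    this.tasks.set(type, { startedAt: Date.now(), cancel });
  }
  finish(type) {
    this.tasks.delete(type);
  }
  cancel(type) {
    const task = this.tasks.get(type);
    if (!task) return false;
    task.cancel(); // e.g. child.kill('SIGTERM') for a spawned script
    this.tasks.delete(type);
    return true;
  }
  status() {
    return [...this.tasks.keys()];
  }
}
```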
**8. Dashboard (`dashboard.js`)**
* `GET /stock/metrics`: Get dashboard stock summary metrics & brand breakdown.
* `GET /purchase/metrics`: Get dashboard purchase order summary metrics & vendor breakdown.
* `GET /replenishment/metrics`: Get dashboard replenishment summary & top variants.
* `GET /forecast/metrics`: Get dashboard forecast summary, daily, and category breakdown.
* `GET /overstock/metrics`: Get dashboard overstock summary & category breakdown.
* `GET /overstock/products`: Get list of top overstocked products.
* `GET /best-sellers`: Get dashboard best-selling products, brands, categories - **uses `orders`, `products`**.
* `GET /sales/metrics`: Get dashboard sales summary for a period - **uses `orders`**.
* `GET /low-stock/products`: Get list of top low stock/critical products.
* `GET /trending/products`: Get list of trending products - **uses `orders`, `products`**.
* `GET /vendor/performance`: Get dashboard vendor performance details - **uses `purchase_orders`**.
* `GET /key-metrics`: Get dashboard summary KPIs - **uses multiple base tables**.
* `GET /inventory-health`: Get dashboard inventory health overview - **uses `products`, `product_metrics`**.
* `GET /replenish/products`: Get list of products needing replenishment (overlaps `/low-stock/products`).
* `GET /sales-overview`: Get daily sales totals for chart - **uses `orders`**.
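Several of these endpoints return hardcoded dummy data when queries fail or come back empty. If the fallbacks are kept, they should at least be impossible to miss in the logs; a small wrapper in that spirit (hypothetical, not taken from `dashboard.js`):

```javascript
// Wrap a data-fetching function so that failures still return a fallback
// payload, but never silently: every fallback use is logged with the
// endpoint name, and the response flags that it is fallback data.
async function withFallback(name, fetchFn, fallback, log = console.error) {
  try {
    const data = await fetchFn();
    if (data == null) {
      log(`[dashboard] ${name}: no data, serving fallback`);
      return { data: fallback, fallback: true };
    }
    return { data, fallback: false };
  } catch (err) {
    log(`[dashboard] ${name}: query failed (${err.message}), serving fallback`);
    return { data: fallback, fallback: true };
  }
}
```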
**9. Product Import Utilities (`products-import.js`)**
* `POST /upload-image`: Upload temporary product image, schedule deletion.
* `DELETE /delete-image`: Delete temporary product image.
* `GET /field-options`: Get dropdown options for product fields from Prod MySQL (cached).
* `GET /product-lines/:companyId`: Get product lines for a company from Prod MySQL (cached).
* `GET /sublines/:lineId`: Get sublines for a line from Prod MySQL (cached).
* `GET /check-file/:filename`: Check existence/permissions of uploaded file (temp or reusable).
* `GET /list-uploads`: List files in upload directories.
* `GET /search-products`: Search products in Prod MySQL DB.
* `GET /check-upc-and-generate-sku`: Check UPC existence and generate SKU suggestion based on Prod MySQL data.
* `GET /product-categories/:pid`: Get assigned categories for a product from Prod MySQL.
**10. Product Metrics (`product-metrics.js`)**
* `GET /filter-options`: Get distinct filter values (vendor, brand, abcClass) from `product_metrics`.
* `GET /`: List detailed product metrics with filtering, sorting, pagination (primary data access).
* `GET /:pid`: Get full metrics record for a single product.
**11. Orders (`orders.js`)**
* `GET /`: List orders with summary info, filtering, sorting, pagination, and stats.
* `GET /:orderNumber`: Get details for a single order, including items.
**12. Products (`products.js`)**
* `GET /brands`: Get distinct brands (filtered by PO value).
* `GET /`: List products with core data + metrics, filtering, sorting, pagination.
* `GET /trending`: Get trending products based on `product_metrics`.
* `GET /:id`: Get details for a single product (core data + metrics).
* `POST /import`: (**Likely Obsolete/Dangerous**) Import products from CSV.
* `PUT /:id`: Update core product data.
* `GET /:id/metrics`: (**Redundant**) Get metrics for a single product.
* `GET /:id/time-series`: Get sales/PO history for a single product.
**13. Purchase Orders (`purchase-orders.js`)**
* `GET /`: List purchase orders with summary info, filtering, sorting, pagination, and summary stats.
* `GET /vendor-metrics`: Calculate vendor performance metrics from `purchase_orders`.
* `GET /cost-analysis`: Calculate cost analysis by category from `purchase_orders`.
* `GET /receiving-status`: Get summary counts based on PO receiving status.
* `GET /order-vs-received`: List product ordered vs. received quantities.
**14. Reusable Images (`reusable-images.js`)**
* `GET /`: List all reusable images.
* `GET /by-company/:companyId`: List global and company-specific images.
* `GET /global`: List only global images.
* `GET /:id`: Get a single reusable image record.
* `POST /upload`: Upload a new reusable image and create DB record.
* `PUT /:id`: Update reusable image metadata (name, global, company).
* `DELETE /:id`: Delete reusable image record and file.
* `GET /check-file/:filename`: Check existence/permissions of a reusable image file.
**15. Templates (`templates.js`)**
* `GET /`: List all product data templates.
* `GET /:company/:productType`: Get a specific template.
* `POST /`: Create a new template.
* `PUT /:id`: Update an existing template.
* `DELETE /:id`: Delete a template.
**16. Vendors Aggregate (`vendors-aggregate.js`)**
* `GET /filter-options`: Get distinct vendor names and statuses for UI filters (from `vendor_metrics`).
* `GET /stats`: Get overall statistics related to vendors (from `vendor_metrics` & `purchase_orders`).
* `GET /`: List vendors with aggregated metrics, supporting filtering, sorting, pagination (from `vendor_metrics` & `purchase_orders`).

**Recommendations:**
1. **Address Obsolete Endpoints:** Prioritize removing or confirming the necessity of the endpoints marked as obsolete/redundant (legacy config, `/analytics/forecast`, `/csv/update`, `/csv/reset`, `/products/import`, `/products/:id/metrics`).
2. **Consolidate Overlapping Functionality:** Review the multiple vendor performance and product listing endpoints. Decide on the primary method (e.g., using aggregate tables via `/vendors-aggregate` and `/metrics`) and refactor or remove the others. Clarify the image upload strategies.
3. **Standardize Data Access:** Decide whether `dashboard` and `analytics` endpoints should primarily use aggregate tables (like `/metrics`, `/brands-aggregate`, etc.) or if direct access to base tables is sometimes necessary. Aim for consistency and document the reasoning. Optimize queries hitting base tables if they must remain.
4. **Improve Background Task Management:** Refactor `csv.js` to use a unified locking mechanism (maybe separate locks per task type?) and a consistent cancellation strategy for all spawned/managed processes. Clarify the purpose of `update` vs `full-update` and `reset` vs `full-reset`.
5. **Optimize DB Connections:** Ensure the `getDbConnection` pooling/caching helper from `products-import.js` is used *consistently* across all modules interacting with the production MySQL database (especially `ai_validation.js`). Remove unnecessary tunnel creations.
6. **Review Data Integrity:** Double-check the assumptions made (e.g., uniqueness of AI prompts) and ensure database constraints enforce them. Review the `GET /products/brands` filtering logic.
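The caching behavior that recommendation 5 refers to has a simple general shape: hold one expensive connection (SSH tunnel plus MySQL client) and reuse it until it has sat idle too long, then rebuild it. A sketch with an injected factory and clock (not the actual `getDbConnection` implementation):

```javascript
// Cache a single expensive connection and reuse it until it has been idle
// longer than `idleMs`, then rebuild it. `connect` and `now` are injected
// so the sketch is testable without a real tunnel.
function makeConnectionCache(connect, { idleMs = 60_000, now = Date.now } = {}) {
  let cached = null; // { conn, lastUsed }
  return async function getConnection() {
    if (cached && now() - cached.lastUsed < idleMs) {
      cached.lastUsed = now();
      return cached.conn;
    }
    // Stale or missing: close the old connection if it supports close().
    if (cached && cached.conn.close) await cached.conn.close();
    cached = { conn: await connect(), lastUsed: now() };
    return cached.conn;
  };
}
```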
## Changes Made
1. **Removed Obsolete Legacy Endpoints in `config.js`**:
- Removed `GET /config/` endpoint
- Removed `PUT /config/stock-thresholds/:id` endpoint
- Removed `PUT /config/lead-time-thresholds/:id` endpoint
- Removed `PUT /config/sales-velocity/:id` endpoint
- Removed `PUT /config/abc-classification/:id` endpoint
- Removed `PUT /config/safety-stock/:id` endpoint
- Removed `PUT /config/turnover/:id` endpoint
These endpoints were obsolete as they referenced older, single-row config tables that have been replaced by newer endpoints using the structured tables `settings_global`, `settings_product`, and `settings_vendor`.
2. **Removed MySQL Syntax `/forecast` Endpoint in `analytics.js`**:
- Removed `GET /analytics/forecast` endpoint that was using MySQL-specific syntax incompatible with the PostgreSQL database used elsewhere in the application.
3. **Renamed and Removed Redundant Endpoints**:
- Renamed `csv.js` to `data-management.js` while maintaining the same `/csv/*` endpoint paths for consistency
- Removed deprecated `/csv/update` endpoint (now fully replaced by `/csv/full-update`)
- Removed deprecated `/csv/reset` endpoint (now fully replaced by `/csv/full-reset`)
- Removed deprecated `/products/import` endpoint (now handled by `/csv/import`)
- Removed deprecated `/products/:id/metrics` endpoint (now handled by `/metrics/:pid`)
4. **Fixed Data Integrity Issues**:
- Improved `GET /products/brands` endpoint by removing the arbitrary filtering logic that was only showing brands with purchase orders that had a total cost of at least $500
- The updated endpoint now returns all distinct brands from visible products, providing more complete data
5. **Optimized Database Connections**:
- Created a new `dbConnection.js` utility file that encapsulates the optimized database connection management logic
- Improved the `ai-validation.js` file to use this shared connection management, eliminating unnecessary repeated tunnel creation
- Added proper connection pooling with timeout-based connection reuse, reducing the overhead of repeatedly creating SSH tunnels
- Added query result caching for frequently accessed data to improve performance
These changes improve maintainability by removing duplicate code, enhance consistency by standardizing on the newer endpoint patterns, and optimize performance by reducing redundant database connections.
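The query result caching mentioned in item 5 can be as small as a TTL map keyed by query text plus serialized parameters. A sketch (hypothetical; the real `dbConnection.js` may differ):

```javascript
// Minimal TTL cache for query results, keyed by a caller-provided string
// (e.g. SQL text plus JSON-serialized params). `now` is injectable for tests.
function makeQueryCache(ttlMs = 5 * 60_000, now = Date.now) {
  const entries = new Map();
  return {
    get(key) {
      const e = entries.get(key);
      if (!e || now() - e.at > ttlMs) {
        entries.delete(key); // expired or absent
        return undefined;
      }
      return e.value;
    },
    set(key, value) {
      entries.set(key, { value, at: now() });
    },
  };
}
```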
## Additional Improvements
1. **Further Database Connection Optimizations**:
- Extended the use of the optimized database connection utility to additional endpoints in `ai-validation.js`
- Updated the `/validate` endpoint and `/test-taxonomy` endpoint to use `getDbConnection`
- Ensured consistent connection management across all routes that access the production database
2. **AI Prompts Data Integrity Verification**:
- Confirmed proper uniqueness constraints are in place in the database schema for AI prompts
- The schema includes:
- `unique_company_prompt` constraint ensuring only one prompt per company
- `idx_unique_general_prompt` index ensuring only one general prompt in the system
- `idx_unique_system_prompt` index ensuring only one system prompt in the system
- Endpoint handlers properly handle uniqueness constraint violations with appropriate 409 Conflict responses
- Validation ensures company-specific prompts have company IDs, while general/system prompts do not
3. **AI Prompts Endpoint Consolidation**:
- Added a new consolidated `/by-type` endpoint that handles all types of prompts (general, system, company_specific)
- Marked the existing separate endpoints as deprecated with console warnings
- Maintained backward compatibility while providing a cleaner API moving forward
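For reference, the uniqueness guarantees listed in item 2 can be expressed in PostgreSQL with partial unique indexes, roughly as follows (column names `prompt_type` and `company_id` are assumptions and may differ from the actual schema):

```sql
-- At most one general prompt and one system prompt in the whole table.
CREATE UNIQUE INDEX idx_unique_general_prompt ON ai_prompts (prompt_type)
  WHERE prompt_type = 'general';
CREATE UNIQUE INDEX idx_unique_system_prompt ON ai_prompts (prompt_type)
  WHERE prompt_type = 'system';
-- At most one prompt per company (NULL company_id rows are unconstrained).
ALTER TABLE ai_prompts
  ADD CONSTRAINT unique_company_prompt UNIQUE (company_id);
```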
## Completed Items
✅ Removed obsolete legacy endpoints in `config.js`
✅ Removed MySQL syntax `/forecast` endpoint in `analytics.js`
✅ Fixed `GET /products/brands` endpoint filtering logic
✅ Created reusable database connection utility (`dbConnection.js`)
✅ Optimized database connections in `ai-validation.js`
✅ Verified data integrity in AI prompts handling
✅ Consolidated AI prompts endpoints with a unified `/by-type` endpoint
## Remaining Items
- Consider adding additional error handling and logging for database connections
- Perform load testing on the optimized database connections to ensure they handle high traffic properly


@@ -150,7 +150,7 @@ CREATE TABLE IF NOT EXISTS calculate_history (
 );
 CREATE TABLE IF NOT EXISTS calculate_status (
-    module_name module_name PRIMARY KEY,
+    module_name text PRIMARY KEY,
     last_calculation_timestamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
 );


@@ -280,7 +280,7 @@ CREATE TABLE public.vendor_metrics (
     lifetime_sales INT NOT NULL DEFAULT 0, lifetime_revenue NUMERIC(18, 4) NOT NULL DEFAULT 0.00,
     -- Calculated KPIs (Based on 30d aggregates)
-    avg_margin_30d NUMERIC(7, 3) -- (profit / revenue) * 100
+    avg_margin_30d NUMERIC(14, 4) -- (profit / revenue) * 100
     -- Add more KPIs if needed (e.g., avg product value, sell-through rate for vendor)
 );
 CREATE INDEX idx_vendor_metrics_active_count ON public.vendor_metrics(active_product_count);


@@ -213,55 +213,55 @@ SET session_replication_role = 'origin'; -- Re-enable foreign key checks
 -- Create views for common calculations
 -- product_sales_trends view moved to metrics-schema.sql
 -- Historical data tables imported from production --
-CREATE TABLE imported_product_current_prices (
-    price_id BIGSERIAL PRIMARY KEY,
-    pid BIGINT NOT NULL,
-    qty_buy SMALLINT NOT NULL,
-    is_min_qty_buy BOOLEAN NOT NULL,
-    price_each NUMERIC(10,3) NOT NULL,
-    qty_limit SMALLINT NOT NULL,
-    no_promo BOOLEAN NOT NULL,
-    checkout_offer BOOLEAN NOT NULL,
-    active BOOLEAN NOT NULL,
-    date_active TIMESTAMP WITH TIME ZONE,
-    date_deactive TIMESTAMP WITH TIME ZONE,
-    updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
-);
-CREATE INDEX idx_imported_product_current_prices_pid ON imported_product_current_prices(pid, active, qty_buy);
-CREATE INDEX idx_imported_product_current_prices_checkout ON imported_product_current_prices(checkout_offer, active);
-CREATE INDEX idx_imported_product_current_prices_deactive ON imported_product_current_prices(date_deactive, active);
-CREATE INDEX idx_imported_product_current_prices_active ON imported_product_current_prices(date_active, active);
-CREATE TABLE imported_daily_inventory (
-    date DATE NOT NULL,
-    pid BIGINT NOT NULL,
-    amountsold SMALLINT NOT NULL DEFAULT 0,
-    times_sold SMALLINT NOT NULL DEFAULT 0,
-    qtyreceived SMALLINT NOT NULL DEFAULT 0,
-    price NUMERIC(7,2) NOT NULL DEFAULT 0,
-    costeach NUMERIC(7,2) NOT NULL DEFAULT 0,
-    stamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
-    updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
-    PRIMARY KEY (date, pid)
-);
-CREATE INDEX idx_imported_daily_inventory_pid ON imported_daily_inventory(pid);
-CREATE TABLE imported_product_stat_history (
-    pid BIGINT NOT NULL,
-    date DATE NOT NULL,
-    score NUMERIC(10,2) NOT NULL,
-    score2 NUMERIC(10,2) NOT NULL,
-    qty_in_baskets SMALLINT NOT NULL,
-    qty_sold SMALLINT NOT NULL,
-    notifies_set SMALLINT NOT NULL,
-    visibility_score NUMERIC(10,2) NOT NULL,
-    health_score VARCHAR(5) NOT NULL,
-    sold_view_score NUMERIC(6,3) NOT NULL,
-    updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
-    PRIMARY KEY (pid, date)
-);
-CREATE INDEX idx_imported_product_stat_history_date ON imported_product_stat_history(date);
+-- CREATE TABLE imported_product_current_prices (
+--     price_id BIGSERIAL PRIMARY KEY,
+--     pid BIGINT NOT NULL,
+--     qty_buy SMALLINT NOT NULL,
+--     is_min_qty_buy BOOLEAN NOT NULL,
+--     price_each NUMERIC(10,3) NOT NULL,
+--     qty_limit SMALLINT NOT NULL,
+--     no_promo BOOLEAN NOT NULL,
+--     checkout_offer BOOLEAN NOT NULL,
+--     active BOOLEAN NOT NULL,
+--     date_active TIMESTAMP WITH TIME ZONE,
+--     date_deactive TIMESTAMP WITH TIME ZONE,
+--     updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
+-- );
+-- CREATE INDEX idx_imported_product_current_prices_pid ON imported_product_current_prices(pid, active, qty_buy);
+-- CREATE INDEX idx_imported_product_current_prices_checkout ON imported_product_current_prices(checkout_offer, active);
+-- CREATE INDEX idx_imported_product_current_prices_deactive ON imported_product_current_prices(date_deactive, active);
+-- CREATE INDEX idx_imported_product_current_prices_active ON imported_product_current_prices(date_active, active);
+-- CREATE TABLE imported_daily_inventory (
+--     date DATE NOT NULL,
+--     pid BIGINT NOT NULL,
+--     amountsold SMALLINT NOT NULL DEFAULT 0,
+--     times_sold SMALLINT NOT NULL DEFAULT 0,
+--     qtyreceived SMALLINT NOT NULL DEFAULT 0,
+--     price NUMERIC(7,2) NOT NULL DEFAULT 0,
+--     costeach NUMERIC(7,2) NOT NULL DEFAULT 0,
+--     stamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
+--     updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
+--     PRIMARY KEY (date, pid)
+-- );
+-- CREATE INDEX idx_imported_daily_inventory_pid ON imported_daily_inventory(pid);
+-- CREATE TABLE imported_product_stat_history (
+--     pid BIGINT NOT NULL,
+--     date DATE NOT NULL,
+--     score NUMERIC(10,2) NOT NULL,
+--     score2 NUMERIC(10,2) NOT NULL,
+--     qty_in_baskets SMALLINT NOT NULL,
+--     qty_sold SMALLINT NOT NULL,
+--     notifies_set SMALLINT NOT NULL,
+--     visibility_score NUMERIC(10,2) NOT NULL,
+--     health_score VARCHAR(5) NOT NULL,
+--     sold_view_score NUMERIC(6,3) NOT NULL,
+--     updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
+--     PRIMARY KEY (pid, date)
+-- );
+-- CREATE INDEX idx_imported_product_stat_history_date ON imported_product_stat_history(date);


@@ -1,7 +1,7 @@
 const path = require('path');
 const fs = require('fs');
-const progress = require('../utils/progress'); // Assuming progress utils are here
-const { getConnection, closePool } = require('../utils/db'); // Assuming db utils are here
+const progress = require('../scripts/metrics-new/utils/progress'); // Assuming progress utils are here
+const { getConnection, closePool } = require('../scripts/metrics-new/utils/db'); // Assuming db utils are here
 const os = require('os'); // For detecting number of CPU cores
 // --- Configuration ---


@@ -38,7 +38,7 @@ const sshConfig = {
     password: process.env.PROD_DB_PASSWORD,
     database: process.env.PROD_DB_NAME,
     port: process.env.PROD_DB_PORT || 3306,
-    timezone: 'Z',
+    timezone: '-05:00', // Production DB always stores times in EST (UTC-5) regardless of DST
 },
 localDbConfig: {
     // PostgreSQL config for local


@@ -26,10 +26,7 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
let cumulativeProcessedOrders = 0; let cumulativeProcessedOrders = 0;
try { try {
// Begin transaction // Get last sync info - NOT in a transaction anymore
await localConnection.beginTransaction();
// Get last sync info
const [syncInfo] = await localConnection.query( const [syncInfo] = await localConnection.query(
"SELECT last_sync_timestamp FROM sync_status WHERE table_name = 'orders'" "SELECT last_sync_timestamp FROM sync_status WHERE table_name = 'orders'"
); );
@@ -43,8 +40,8 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
FROM order_items oi FROM order_items oi
JOIN _order o ON oi.order_id = o.order_id JOIN _order o ON oi.order_id = o.order_id
WHERE o.order_status >= 15 WHERE o.order_status >= 15
AND o.date_placed_onlydate >= DATE_SUB(CURRENT_DATE, INTERVAL ${incrementalUpdate ? '1' : '5'} YEAR) AND o.date_placed >= DATE_SUB(CURRENT_DATE, INTERVAL ${incrementalUpdate ? '1' : '5'} YEAR)
AND o.date_placed_onlydate IS NOT NULL AND o.date_placed IS NOT NULL
${incrementalUpdate ? ` ${incrementalUpdate ? `
AND ( AND (
o.stamp > ? o.stamp > ?
@@ -82,8 +79,8 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
        FROM order_items oi
        JOIN _order o ON oi.order_id = o.order_id
        WHERE o.order_status >= 15
-       AND o.date_placed_onlydate >= DATE_SUB(CURRENT_DATE, INTERVAL ${incrementalUpdate ? '1' : '5'} YEAR)
-       AND o.date_placed_onlydate IS NOT NULL
+       AND o.date_placed >= DATE_SUB(CURRENT_DATE, INTERVAL ${incrementalUpdate ? '1' : '5'} YEAR)
+       AND o.date_placed IS NOT NULL
        ${incrementalUpdate ? `
          AND (
            o.stamp > ?
@@ -107,91 +104,131 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
    console.log('Orders: Found', orderItems.length, 'order items to process');

    // Create tables in PostgreSQL for data processing
-   await localConnection.query(`
-     DROP TABLE IF EXISTS temp_order_items;
-     DROP TABLE IF EXISTS temp_order_meta;
-     DROP TABLE IF EXISTS temp_order_discounts;
-     DROP TABLE IF EXISTS temp_order_taxes;
-     DROP TABLE IF EXISTS temp_order_costs;
-
-     CREATE TEMP TABLE temp_order_items (
-       order_id INTEGER NOT NULL,
-       pid INTEGER NOT NULL,
-       sku TEXT NOT NULL,
-       price NUMERIC(14, 4) NOT NULL,
-       quantity INTEGER NOT NULL,
-       base_discount NUMERIC(14, 4) DEFAULT 0,
-       PRIMARY KEY (order_id, pid)
-     );
-
-     CREATE TEMP TABLE temp_order_meta (
-       order_id INTEGER NOT NULL,
-       date TIMESTAMP WITH TIME ZONE NOT NULL,
-       customer TEXT NOT NULL,
-       customer_name TEXT NOT NULL,
-       status TEXT,
-       canceled BOOLEAN,
-       summary_discount NUMERIC(14, 4) DEFAULT 0.0000,
-       summary_subtotal NUMERIC(14, 4) DEFAULT 0.0000,
-       PRIMARY KEY (order_id)
-     );
-
-     CREATE TEMP TABLE temp_order_discounts (
-       order_id INTEGER NOT NULL,
-       pid INTEGER NOT NULL,
-       discount NUMERIC(14, 4) NOT NULL,
-       PRIMARY KEY (order_id, pid)
-     );
-
-     CREATE TEMP TABLE temp_order_taxes (
-       order_id INTEGER NOT NULL,
-       pid INTEGER NOT NULL,
-       tax NUMERIC(14, 4) NOT NULL,
-       PRIMARY KEY (order_id, pid)
-     );
-
-     CREATE TEMP TABLE temp_order_costs (
-       order_id INTEGER NOT NULL,
-       pid INTEGER NOT NULL,
-       costeach NUMERIC(14, 4) DEFAULT 0.0000,
-       PRIMARY KEY (order_id, pid)
-     );
-
-     CREATE INDEX idx_temp_order_items_pid ON temp_order_items(pid);
-     CREATE INDEX idx_temp_order_meta_order_id ON temp_order_meta(order_id);
-   `);
-
-   // Insert order items in batches
-   for (let i = 0; i < orderItems.length; i += 5000) {
-     const batch = orderItems.slice(i, Math.min(i + 5000, orderItems.length));
-     const placeholders = batch.map((_, idx) =>
-       `($${idx * 6 + 1}, $${idx * 6 + 2}, $${idx * 6 + 3}, $${idx * 6 + 4}, $${idx * 6 + 5}, $${idx * 6 + 6})`
-     ).join(",");
-     const values = batch.flatMap(item => [
-       item.order_id, item.prod_pid, item.SKU, item.price, item.quantity, item.base_discount
-     ]);
-     await localConnection.query(`
-       INSERT INTO temp_order_items (order_id, pid, sku, price, quantity, base_discount)
-       VALUES ${placeholders}
-       ON CONFLICT (order_id, pid) DO UPDATE SET
-         sku = EXCLUDED.sku,
-         price = EXCLUDED.price,
-         quantity = EXCLUDED.quantity,
-         base_discount = EXCLUDED.base_discount
-     `, values);
-     processedCount = i + batch.length;
-     outputProgress({
-       status: "running",
-       operation: "Orders import",
-       message: `Loading order items: ${processedCount} of ${totalOrderItems}`,
-       current: processedCount,
-       total: totalOrderItems,
-       elapsed: formatElapsedTime((Date.now() - startTime) / 1000),
-       remaining: estimateRemaining(startTime, processedCount, totalOrderItems),
-       rate: calculateRate(startTime, processedCount)
-     });
+   // Start a transaction just for creating the temp tables
+   await localConnection.beginTransaction();
+   try {
+     await localConnection.query(`
+       DROP TABLE IF EXISTS temp_order_items;
+       DROP TABLE IF EXISTS temp_order_meta;
+       DROP TABLE IF EXISTS temp_order_discounts;
+       DROP TABLE IF EXISTS temp_order_taxes;
+       DROP TABLE IF EXISTS temp_order_costs;
+       DROP TABLE IF EXISTS temp_main_discounts;
+       DROP TABLE IF EXISTS temp_item_discounts;
+
+       CREATE TEMP TABLE temp_order_items (
+         order_id INTEGER NOT NULL,
+         pid INTEGER NOT NULL,
+         sku TEXT NOT NULL,
+         price NUMERIC(14, 4) NOT NULL,
+         quantity INTEGER NOT NULL,
+         base_discount NUMERIC(14, 4) DEFAULT 0,
+         PRIMARY KEY (order_id, pid)
+       );
+
+       CREATE TEMP TABLE temp_order_meta (
+         order_id INTEGER NOT NULL,
+         date TIMESTAMP WITH TIME ZONE NOT NULL,
+         customer TEXT NOT NULL,
+         customer_name TEXT NOT NULL,
+         status TEXT,
+         canceled BOOLEAN,
+         summary_discount NUMERIC(14, 4) DEFAULT 0.0000,
+         summary_subtotal NUMERIC(14, 4) DEFAULT 0.0000,
+         summary_discount_subtotal NUMERIC(14, 4) DEFAULT 0.0000,
+         PRIMARY KEY (order_id)
+       );
+
+       CREATE TEMP TABLE temp_order_discounts (
+         order_id INTEGER NOT NULL,
+         pid INTEGER NOT NULL,
+         discount NUMERIC(14, 4) NOT NULL,
+         PRIMARY KEY (order_id, pid)
+       );
+
+       CREATE TEMP TABLE temp_main_discounts (
+         order_id INTEGER NOT NULL,
+         discount_id INTEGER NOT NULL,
+         discount_amount_subtotal NUMERIC(14, 4) DEFAULT 0.0000,
+         PRIMARY KEY (order_id, discount_id)
+       );
+
+       CREATE TEMP TABLE temp_item_discounts (
+         order_id INTEGER NOT NULL,
+         pid INTEGER NOT NULL,
+         discount_id INTEGER NOT NULL,
+         amount NUMERIC(14, 4) NOT NULL,
+         PRIMARY KEY (order_id, pid, discount_id)
+       );
+
+       CREATE TEMP TABLE temp_order_taxes (
+         order_id INTEGER NOT NULL,
+         pid INTEGER NOT NULL,
+         tax NUMERIC(14, 4) NOT NULL,
+         PRIMARY KEY (order_id, pid)
+       );
+
+       CREATE TEMP TABLE temp_order_costs (
+         order_id INTEGER NOT NULL,
+         pid INTEGER NOT NULL,
+         costeach NUMERIC(14, 4) DEFAULT 0.0000,
+         PRIMARY KEY (order_id, pid)
+       );
+
+       CREATE INDEX idx_temp_order_items_pid ON temp_order_items(pid);
+       CREATE INDEX idx_temp_order_meta_order_id ON temp_order_meta(order_id);
+       CREATE INDEX idx_temp_order_discounts_order_pid ON temp_order_discounts(order_id, pid);
+       CREATE INDEX idx_temp_order_taxes_order_pid ON temp_order_taxes(order_id, pid);
+       CREATE INDEX idx_temp_order_costs_order_pid ON temp_order_costs(order_id, pid);
+       CREATE INDEX idx_temp_main_discounts_discount_id ON temp_main_discounts(discount_id);
+       CREATE INDEX idx_temp_item_discounts_order_pid ON temp_item_discounts(order_id, pid);
+       CREATE INDEX idx_temp_item_discounts_discount_id ON temp_item_discounts(discount_id);
+     `);
+     await localConnection.commit();
+   } catch (error) {
+     await localConnection.rollback();
+     throw error;
+   }
+
+   // Insert order items in batches - each batch gets its own transaction
+   for (let i = 0; i < orderItems.length; i += 5000) {
+     await localConnection.beginTransaction();
+     try {
+       const batch = orderItems.slice(i, Math.min(i + 5000, orderItems.length));
+       const placeholders = batch.map((_, idx) =>
+         `($${idx * 6 + 1}, $${idx * 6 + 2}, $${idx * 6 + 3}, $${idx * 6 + 4}, $${idx * 6 + 5}, $${idx * 6 + 6})`
+       ).join(",");
+       const values = batch.flatMap(item => [
+         item.order_id, item.prod_pid, item.SKU, item.price, item.quantity, item.base_discount
+       ]);
+       await localConnection.query(`
+         INSERT INTO temp_order_items (order_id, pid, sku, price, quantity, base_discount)
+         VALUES ${placeholders}
+         ON CONFLICT (order_id, pid) DO UPDATE SET
+           sku = EXCLUDED.sku,
+           price = EXCLUDED.price,
+           quantity = EXCLUDED.quantity,
+           base_discount = EXCLUDED.base_discount
+       `, values);
+       await localConnection.commit();
+       processedCount = i + batch.length;
+       outputProgress({
+         status: "running",
+         operation: "Orders import",
+         message: `Loading order items: ${processedCount} of ${totalOrderItems}`,
+         current: processedCount,
+         total: totalOrderItems,
+         elapsed: formatElapsedTime((Date.now() - startTime) / 1000),
+         remaining: estimateRemaining(startTime, processedCount, totalOrderItems),
+         rate: calculateRate(startTime, processedCount)
+       });
+     } catch (error) {
+       await localConnection.rollback();
+       throw error;
+     }
    }
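The `$${idx * 6 + 1} …` expressions in the batch insert above build numbered PostgreSQL placeholders for a multi-row `INSERT`. A standalone sketch of that pattern (the function name is hypothetical):

```javascript
// Builds "($1, $2, $3),($4, $5, $6),..." for a multi-row INSERT:
// one parenthesized group per row, with numbering continued across rows
// so the flattened values array lines up positionally.
function pgPlaceholders(rowCount, cols) {
  return Array.from({ length: rowCount }, (_, idx) =>
    `(${Array.from({ length: cols }, (_, c) => `$${idx * cols + c + 1}`).join(', ')})`
  ).join(',');
}

console.log(pgPlaceholders(2, 3)); // → ($1, $2, $3),($4, $5, $6)
```

The values passed alongside must be flattened in the same row-major order (as the `flatMap` calls in the diff do), or columns silently shift.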
    // Get unique order IDs
@@ -218,86 +255,162 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
      const [orders] = await prodConnection.query(`
        SELECT
          o.order_id,
-         o.date_placed_onlydate as date,
+         o.date_placed as date,
          o.order_cid as customer,
          CONCAT(COALESCE(u.firstname, ''), ' ', COALESCE(u.lastname, '')) as customer_name,
          o.order_status as status,
          CASE WHEN o.date_cancelled != '0000-00-00 00:00:00' THEN 1 ELSE 0 END as canceled,
          o.summary_discount,
-         o.summary_subtotal
+         o.summary_subtotal,
+         o.summary_discount_subtotal
        FROM _order o
        LEFT JOIN users u ON o.order_cid = u.cid
        WHERE o.order_id IN (?)
      `, [batchIds]);

      // Process in sub-batches for PostgreSQL
-     for (let j = 0; j < orders.length; j += PG_BATCH_SIZE) {
-       const subBatch = orders.slice(j, j + PG_BATCH_SIZE);
-       if (subBatch.length === 0) continue;
+     await localConnection.beginTransaction();
+     try {
+       for (let j = 0; j < orders.length; j += PG_BATCH_SIZE) {
+         const subBatch = orders.slice(j, j + PG_BATCH_SIZE);
+         if (subBatch.length === 0) continue;

          const placeholders = subBatch.map((_, idx) =>
-           `($${idx * 8 + 1}, $${idx * 8 + 2}, $${idx * 8 + 3}, $${idx * 8 + 4}, $${idx * 8 + 5}, $${idx * 8 + 6}, $${idx * 8 + 7}, $${idx * 8 + 8})`
+           `($${idx * 9 + 1}, $${idx * 9 + 2}, $${idx * 9 + 3}, $${idx * 9 + 4}, $${idx * 9 + 5}, $${idx * 9 + 6}, $${idx * 9 + 7}, $${idx * 9 + 8}, $${idx * 9 + 9})`
          ).join(",");
          const values = subBatch.flatMap(order => [
            order.order_id,
            new Date(order.date), // Convert to TIMESTAMP WITH TIME ZONE
            order.customer,
            toTitleCase(order.customer_name) || '',
            order.status.toString(), // Convert status to TEXT
            order.canceled,
            order.summary_discount || 0,
-           order.summary_subtotal || 0
-         ]);
+           order.summary_subtotal || 0,
+           order.summary_discount_subtotal || 0
+         ]);

          await localConnection.query(`
            INSERT INTO temp_order_meta (
              order_id, date, customer, customer_name, status, canceled,
-             summary_discount, summary_subtotal
+             summary_discount, summary_subtotal, summary_discount_subtotal
            )
            VALUES ${placeholders}
            ON CONFLICT (order_id) DO UPDATE SET
              date = EXCLUDED.date,
              customer = EXCLUDED.customer,
              customer_name = EXCLUDED.customer_name,
              status = EXCLUDED.status,
              canceled = EXCLUDED.canceled,
              summary_discount = EXCLUDED.summary_discount,
-             summary_subtotal = EXCLUDED.summary_subtotal
-         `, values);
+             summary_subtotal = EXCLUDED.summary_subtotal,
+             summary_discount_subtotal = EXCLUDED.summary_discount_subtotal
+         `, values);
+       }
+       await localConnection.commit();
+     } catch (error) {
+       await localConnection.rollback();
+       throw error;
      }
    };
    const processDiscountsBatch = async (batchIds) => {
+     // First, load main discount records
+     const [mainDiscounts] = await prodConnection.query(`
+       SELECT order_id, discount_id, discount_amount_subtotal
+       FROM order_discounts
+       WHERE order_id IN (?)
+     `, [batchIds]);
+
+     if (mainDiscounts.length > 0) {
+       await localConnection.beginTransaction();
+       try {
+         for (let j = 0; j < mainDiscounts.length; j += PG_BATCH_SIZE) {
+           const subBatch = mainDiscounts.slice(j, j + PG_BATCH_SIZE);
+           if (subBatch.length === 0) continue;
+
+           const placeholders = subBatch.map((_, idx) =>
+             `($${idx * 3 + 1}, $${idx * 3 + 2}, $${idx * 3 + 3})`
+           ).join(",");
+           const values = subBatch.flatMap(d => [
+             d.order_id,
+             d.discount_id,
+             d.discount_amount_subtotal || 0
+           ]);
+
+           await localConnection.query(`
+             INSERT INTO temp_main_discounts (order_id, discount_id, discount_amount_subtotal)
+             VALUES ${placeholders}
+             ON CONFLICT (order_id, discount_id) DO UPDATE SET
+               discount_amount_subtotal = EXCLUDED.discount_amount_subtotal
+           `, values);
+         }
+         await localConnection.commit();
+       } catch (error) {
+         await localConnection.rollback();
+         throw error;
+       }
+     }
+
+     // Then, load item discount records
      const [discounts] = await prodConnection.query(`
-       SELECT order_id, pid, SUM(amount) as discount
+       SELECT order_id, pid, discount_id, amount
        FROM order_discount_items
        WHERE order_id IN (?)
-       GROUP BY order_id, pid
      `, [batchIds]);

      if (discounts.length === 0) return;

-     for (let j = 0; j < discounts.length; j += PG_BATCH_SIZE) {
-       const subBatch = discounts.slice(j, j + PG_BATCH_SIZE);
-       if (subBatch.length === 0) continue;
-
-       const placeholders = subBatch.map((_, idx) =>
-         `($${idx * 3 + 1}, $${idx * 3 + 2}, $${idx * 3 + 3})`
-       ).join(",");
-       const values = subBatch.flatMap(d => [
-         d.order_id,
-         d.pid,
-         d.discount || 0
-       ]);
-
-       await localConnection.query(`
-         INSERT INTO temp_order_discounts (order_id, pid, discount)
-         VALUES ${placeholders}
-         ON CONFLICT (order_id, pid) DO UPDATE SET
-           discount = EXCLUDED.discount
-       `, values);
+     // Process in memory to handle potential duplicates
+     const discountMap = new Map();
+     for (const d of discounts) {
+       const key = `${d.order_id}-${d.pid}-${d.discount_id}`;
+       discountMap.set(key, d);
+     }
+     const uniqueDiscounts = Array.from(discountMap.values());
+
+     await localConnection.beginTransaction();
+     try {
+       for (let j = 0; j < uniqueDiscounts.length; j += PG_BATCH_SIZE) {
+         const subBatch = uniqueDiscounts.slice(j, j + PG_BATCH_SIZE);
+         if (subBatch.length === 0) continue;
+
+         const placeholders = subBatch.map((_, idx) =>
+           `($${idx * 4 + 1}, $${idx * 4 + 2}, $${idx * 4 + 3}, $${idx * 4 + 4})`
+         ).join(",");
+         const values = subBatch.flatMap(d => [
+           d.order_id,
+           d.pid,
+           d.discount_id,
+           d.amount || 0
+         ]);
+
+         await localConnection.query(`
+           INSERT INTO temp_item_discounts (order_id, pid, discount_id, amount)
+           VALUES ${placeholders}
+           ON CONFLICT (order_id, pid, discount_id) DO UPDATE SET
+             amount = EXCLUDED.amount
+         `, values);
+       }
+
+       // Create aggregated view with a simpler, safer query that avoids duplicates
+       await localConnection.query(`
+         TRUNCATE temp_order_discounts;
+         INSERT INTO temp_order_discounts (order_id, pid, discount)
+         SELECT order_id, pid, SUM(amount) as discount
+         FROM temp_item_discounts
+         GROUP BY order_id, pid
+       `);
+       await localConnection.commit();
+     } catch (error) {
+       await localConnection.rollback();
+       throw error;
      }
    };
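The in-memory dedupe above exists because a multi-row `INSERT ... ON CONFLICT` fails in PostgreSQL if the same conflict key appears twice within one statement. A standalone sketch of the last-row-wins Map pattern (function and field names mirror the diff, but the helper itself is illustrative):

```javascript
// Collapse rows to one per (order_id, pid, discount_id) composite key;
// a later row with the same key overwrites an earlier one.
function dedupeDiscounts(rows) {
  const discountMap = new Map();
  for (const d of rows) {
    discountMap.set(`${d.order_id}-${d.pid}-${d.discount_id}`, d);
  }
  return Array.from(discountMap.values());
}

const rows = [
  { order_id: 1, pid: 10, discount_id: 5, amount: 2 },
  { order_id: 1, pid: 10, discount_id: 5, amount: 3 }, // duplicate key, wins
  { order_id: 1, pid: 11, discount_id: 5, amount: 1 },
];
console.log(dedupeDiscounts(rows).length); // → 2
```

One caveat of string-joined keys: they assume no component contains the separator, which holds here since all three parts are integers.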
@@ -318,26 +431,33 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
      if (taxes.length === 0) return;

-     for (let j = 0; j < taxes.length; j += PG_BATCH_SIZE) {
-       const subBatch = taxes.slice(j, j + PG_BATCH_SIZE);
-       if (subBatch.length === 0) continue;
+     await localConnection.beginTransaction();
+     try {
+       for (let j = 0; j < taxes.length; j += PG_BATCH_SIZE) {
+         const subBatch = taxes.slice(j, j + PG_BATCH_SIZE);
+         if (subBatch.length === 0) continue;

          const placeholders = subBatch.map((_, idx) =>
            `($${idx * 3 + 1}, $${idx * 3 + 2}, $${idx * 3 + 3})`
          ).join(",");
          const values = subBatch.flatMap(t => [
            t.order_id,
            t.pid,
            t.tax || 0
          ]);

          await localConnection.query(`
            INSERT INTO temp_order_taxes (order_id, pid, tax)
            VALUES ${placeholders}
            ON CONFLICT (order_id, pid) DO UPDATE SET
              tax = EXCLUDED.tax
          `, values);
+       }
+       await localConnection.commit();
+     } catch (error) {
+       await localConnection.rollback();
+       throw error;
      }
    };
@@ -363,39 +483,45 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
      if (costs.length === 0) return;

-     for (let j = 0; j < costs.length; j += PG_BATCH_SIZE) {
-       const subBatch = costs.slice(j, j + PG_BATCH_SIZE);
-       if (subBatch.length === 0) continue;
+     await localConnection.beginTransaction();
+     try {
+       for (let j = 0; j < costs.length; j += PG_BATCH_SIZE) {
+         const subBatch = costs.slice(j, j + PG_BATCH_SIZE);
+         if (subBatch.length === 0) continue;

          const placeholders = subBatch.map((_, idx) =>
            `($${idx * 3 + 1}, $${idx * 3 + 2}, $${idx * 3 + 3})`
          ).join(",");
          const values = subBatch.flatMap(c => [
            c.order_id,
            c.pid,
            c.costeach || 0
          ]);

          await localConnection.query(`
            INSERT INTO temp_order_costs (order_id, pid, costeach)
            VALUES ${placeholders}
            ON CONFLICT (order_id, pid) DO UPDATE SET
              costeach = EXCLUDED.costeach
          `, values);
+       }
+       await localConnection.commit();
+     } catch (error) {
+       await localConnection.rollback();
+       throw error;
      }
    };
-   // Process all data types in parallel for each batch
+   // Process all data types SEQUENTIALLY for each batch - not in parallel
    for (let i = 0; i < orderIds.length; i += METADATA_BATCH_SIZE) {
      const batchIds = orderIds.slice(i, i + METADATA_BATCH_SIZE);
-     await Promise.all([
-       processMetadataBatch(batchIds),
-       processDiscountsBatch(batchIds),
-       processTaxesBatch(batchIds),
-       processCostsBatch(batchIds)
-     ]);
+     // Run these sequentially instead of in parallel to avoid transaction conflicts
+     await processMetadataBatch(batchIds);
+     await processDiscountsBatch(batchIds);
+     await processTaxesBatch(batchIds);
+     await processCostsBatch(batchIds);

      processedCount = i + batchIds.length;
      outputProgress({
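The switch away from `Promise.all` follows from the batch processors sharing one PostgreSQL connection: each now opens its own transaction, and concurrent tasks would interleave `BEGIN`/`COMMIT` on the same session. A minimal sketch of the sequential pattern (the helper is illustrative, not from the diff):

```javascript
// Run async steps one at a time on a shared resource. Under Promise.all,
// every task would start (and issue BEGIN on the same session) before
// any of them reaches COMMIT.
async function runSequentially(tasks) {
  const results = [];
  for (const task of tasks) {
    results.push(await task()); // next task starts only after this one settles
  }
  return results;
}

// Usage sketch with stand-in tasks:
runSequentially([async () => 'meta', async () => 'discounts'])
  .then(r => console.log(r)); // → [ 'meta', 'discounts' ]
```

The trade-off is throughput: four sequential round-trip-heavy steps per batch are slower than four parallel ones, but correctness on a single session wins here.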
@@ -422,175 +548,201 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
    const existingPids = new Set(existingProducts.rows.map(p => p.pid));

    // Process in smaller batches
-   for (let i = 0; i < orderIds.length; i += 1000) {
-     const batchIds = orderIds.slice(i, i + 1000);
+   for (let i = 0; i < orderIds.length; i += 2000) { // Increased from 1000 to 2000
+     const batchIds = orderIds.slice(i, i + 2000);
      // Get combined data for this batch in sub-batches
-     const PG_BATCH_SIZE = 100; // Process 100 records at a time
+     const PG_BATCH_SIZE = 200; // Increased from 100 to 200
      for (let j = 0; j < batchIds.length; j += PG_BATCH_SIZE) {
        const subBatchIds = batchIds.slice(j, j + PG_BATCH_SIZE);

-       const [orders] = await localConnection.query(`
-         WITH order_totals AS (
-           SELECT
-             oi.order_id,
-             oi.pid,
-             SUM(COALESCE(od.discount, 0)) as promo_discount,
-             COALESCE(ot.tax, 0) as total_tax,
-             COALESCE(oc.costeach, oi.price * 0.5) as costeach
-           FROM temp_order_items oi
-           LEFT JOIN temp_order_discounts od ON oi.order_id = od.order_id AND oi.pid = od.pid
-           LEFT JOIN temp_order_taxes ot ON oi.order_id = ot.order_id AND oi.pid = ot.pid
-           LEFT JOIN temp_order_costs oc ON oi.order_id = oc.order_id AND oi.pid = oc.pid
-           GROUP BY oi.order_id, oi.pid, ot.tax, oc.costeach
-         )
-         SELECT
-           oi.order_id as order_number,
-           oi.pid::bigint as pid,
-           oi.sku,
-           om.date,
-           oi.price,
-           oi.quantity,
-           (oi.base_discount +
-             COALESCE(ot.promo_discount, 0) +
-             CASE
-               WHEN om.summary_discount > 0 AND om.summary_subtotal > 0 THEN
-                 ROUND((om.summary_discount * (oi.price * oi.quantity)) / NULLIF(om.summary_subtotal, 0), 2)
-               ELSE 0
-             END)::NUMERIC(14, 4) as discount,
-           COALESCE(ot.total_tax, 0)::NUMERIC(14, 4) as tax,
-           false as tax_included,
-           0 as shipping,
-           om.customer,
-           om.customer_name,
-           om.status,
-           om.canceled,
-           COALESCE(ot.costeach, oi.price * 0.5)::NUMERIC(14, 4) as costeach
-         FROM (
-           SELECT DISTINCT ON (order_id, pid)
-             order_id, pid, sku, price, quantity, base_discount
-           FROM temp_order_items
-           WHERE order_id = ANY($1)
-           ORDER BY order_id, pid
-         ) oi
-         JOIN temp_order_meta om ON oi.order_id = om.order_id
-         LEFT JOIN order_totals ot ON oi.order_id = ot.order_id AND oi.pid = ot.pid
-         ORDER BY oi.order_id, oi.pid
-       `, [subBatchIds]);
-
-       // Filter orders and track missing products
-       const validOrders = [];
-       const processedOrderItems = new Set();
-       const processedOrders = new Set();
-
-       for (const order of orders.rows) {
-         if (!existingPids.has(order.pid)) {
-           missingProducts.add(order.pid);
-           skippedOrders.add(order.order_number);
-           continue;
-         }
-         validOrders.push(order);
-         processedOrderItems.add(`${order.order_number}-${order.pid}`);
-         processedOrders.add(order.order_number);
-       }
-
-       // Process valid orders in smaller sub-batches
-       const FINAL_BATCH_SIZE = 50;
-       for (let k = 0; k < validOrders.length; k += FINAL_BATCH_SIZE) {
-         const subBatch = validOrders.slice(k, k + FINAL_BATCH_SIZE);
-         const placeholders = subBatch.map((_, idx) => {
-           const base = idx * 15; // 15 columns including costeach
-           return `($${base + 1}, $${base + 2}, $${base + 3}, $${base + 4}, $${base + 5}, $${base + 6}, $${base + 7}, $${base + 8}, $${base + 9}, $${base + 10}, $${base + 11}, $${base + 12}, $${base + 13}, $${base + 14}, $${base + 15})`;
-         }).join(',');
-         const batchValues = subBatch.flatMap(o => [
-           o.order_number,
-           o.pid,
-           o.sku || 'NO-SKU',
-           o.date, // This is now a TIMESTAMP WITH TIME ZONE
-           o.price,
-           o.quantity,
-           o.discount,
-           o.tax,
-           o.tax_included,
-           o.shipping,
-           o.customer,
-           o.customer_name,
-           o.status.toString(), // Convert status to TEXT
-           o.canceled,
-           o.costeach
-         ]);
-
-         const [result] = await localConnection.query(`
-           WITH inserted_orders AS (
-             INSERT INTO orders (
-               order_number, pid, sku, date, price, quantity, discount,
-               tax, tax_included, shipping, customer, customer_name,
-               status, canceled, costeach
-             )
-             VALUES ${placeholders}
-             ON CONFLICT (order_number, pid) DO UPDATE SET
-               sku = EXCLUDED.sku,
-               date = EXCLUDED.date,
-               price = EXCLUDED.price,
-               quantity = EXCLUDED.quantity,
-               discount = EXCLUDED.discount,
-               tax = EXCLUDED.tax,
-               tax_included = EXCLUDED.tax_included,
-               shipping = EXCLUDED.shipping,
-               customer = EXCLUDED.customer,
-               customer_name = EXCLUDED.customer_name,
-               status = EXCLUDED.status,
-               canceled = EXCLUDED.canceled,
-               costeach = EXCLUDED.costeach
-             RETURNING xmax = 0 as inserted
-           )
-           SELECT
-             COUNT(*) FILTER (WHERE inserted) as inserted,
-             COUNT(*) FILTER (WHERE NOT inserted) as updated
-           FROM inserted_orders
-         `, batchValues);
-
-         const { inserted, updated } = result.rows[0];
-         recordsAdded += parseInt(inserted) || 0;
-         recordsUpdated += parseInt(updated) || 0;
-         importedCount += subBatch.length;
-       }
-
-       cumulativeProcessedOrders += processedOrders.size;
-       outputProgress({
-         status: "running",
-         operation: "Orders import",
-         message: `Importing orders: ${cumulativeProcessedOrders} of ${totalUniqueOrders}`,
-         current: cumulativeProcessedOrders,
-         total: totalUniqueOrders,
-         elapsed: formatElapsedTime((Date.now() - startTime) / 1000),
-         remaining: estimateRemaining(startTime, cumulativeProcessedOrders, totalUniqueOrders),
-         rate: calculateRate(startTime, cumulativeProcessedOrders)
-       });
+       // Start a transaction for this sub-batch
+       await localConnection.beginTransaction();
+       try {
+         const [orders] = await localConnection.query(`
+           WITH order_totals AS (
+             SELECT
+               oi.order_id,
+               oi.pid,
+               -- Instead of using ARRAY_AGG which can cause duplicate issues, use SUM with a CASE
+               SUM(CASE
+                 WHEN COALESCE(md.discount_amount_subtotal, 0) > 0 THEN id.amount
+                 ELSE 0
+               END) as promo_discount_sum,
+               COALESCE(ot.tax, 0) as total_tax,
+               COALESCE(oc.costeach, oi.price * 0.5) as costeach
+             FROM temp_order_items oi
+             LEFT JOIN temp_item_discounts id ON oi.order_id = id.order_id AND oi.pid = id.pid
+             LEFT JOIN temp_main_discounts md ON id.order_id = md.order_id AND id.discount_id = md.discount_id
+             LEFT JOIN temp_order_taxes ot ON oi.order_id = ot.order_id AND oi.pid = ot.pid
+             LEFT JOIN temp_order_costs oc ON oi.order_id = oc.order_id AND oi.pid = oc.pid
+             WHERE oi.order_id = ANY($1)
+             GROUP BY oi.order_id, oi.pid, ot.tax, oc.costeach
+           )
+           SELECT
+             oi.order_id as order_number,
+             oi.pid::bigint as pid,
+             oi.sku,
+             om.date,
+             oi.price,
+             oi.quantity,
+             (
+               -- Part 1: Sale Savings for the Line
+               (oi.base_discount * oi.quantity)
+               +
+               -- Part 2: Prorated Points Discount (if applicable)
+               CASE
+                 WHEN om.summary_discount_subtotal > 0 AND om.summary_subtotal > 0 THEN
+                   COALESCE(ROUND((om.summary_discount_subtotal * (oi.price * oi.quantity)) / NULLIF(om.summary_subtotal, 0), 4), 0)
+                 ELSE 0
+               END
+               +
+               -- Part 3: Specific Item-Level Discount (only if parent discount affected subtotal)
+               COALESCE(ot.promo_discount_sum, 0)
+             )::NUMERIC(14, 4) as discount,
+             COALESCE(ot.total_tax, 0)::NUMERIC(14, 4) as tax,
+             false as tax_included,
+             0 as shipping,
+             om.customer,
+             om.customer_name,
+             om.status,
+             om.canceled,
+             COALESCE(ot.costeach, oi.price * 0.5)::NUMERIC(14, 4) as costeach
+           FROM temp_order_items oi
+           JOIN temp_order_meta om ON oi.order_id = om.order_id
+           LEFT JOIN order_totals ot ON oi.order_id = ot.order_id AND oi.pid = ot.pid
+           WHERE oi.order_id = ANY($1)
+           ORDER BY oi.order_id, oi.pid
+         `, [subBatchIds]);

+         // Filter orders and track missing products
+         const validOrders = [];
+         const processedOrderItems = new Set();
+         const processedOrders = new Set();
+
+         for (const order of orders.rows) {
+           if (!existingPids.has(order.pid)) {
+             missingProducts.add(order.pid);
+             skippedOrders.add(order.order_number);
+             continue;
+           }
+           validOrders.push(order);
+           processedOrderItems.add(`${order.order_number}-${order.pid}`);
+           processedOrders.add(order.order_number);
+         }
+
+         // Process valid orders in smaller sub-batches
+         const FINAL_BATCH_SIZE = 100; // Increased from 50 to 100
+         for (let k = 0; k < validOrders.length; k += FINAL_BATCH_SIZE) {
+           const subBatch = validOrders.slice(k, k + FINAL_BATCH_SIZE);
+           const placeholders = subBatch.map((_, idx) => {
+             const base = idx * 15; // 15 columns including costeach
+             return `($${base + 1}, $${base + 2}, $${base + 3}, $${base + 4}, $${base + 5}, $${base + 6}, $${base + 7}, $${base + 8}, $${base + 9}, $${base + 10}, $${base + 11}, $${base + 12}, $${base + 13}, $${base + 14}, $${base + 15})`;
+           }).join(',');
+           const batchValues = subBatch.flatMap(o => [
+             o.order_number,
+             o.pid,
+             o.sku || 'NO-SKU',
+             o.date, // This is now a TIMESTAMP WITH TIME ZONE
+             o.price,
+             o.quantity,
+             o.discount,
+             o.tax,
+             o.tax_included,
+             o.shipping,
+             o.customer,
+             o.customer_name,
+             o.status.toString(), // Convert status to TEXT
+             o.canceled,
+             o.costeach
+           ]);
+
+           const [result] = await localConnection.query(`
+             WITH inserted_orders AS (
+               INSERT INTO orders (
+                 order_number, pid, sku, date, price, quantity, discount,
+                 tax, tax_included, shipping, customer, customer_name,
+                 status, canceled, costeach
+               )
+               VALUES ${placeholders}
+               ON CONFLICT (order_number, pid) DO UPDATE SET
+                 sku = EXCLUDED.sku,
+                 date = EXCLUDED.date,
+                 price = EXCLUDED.price,
+                 quantity = EXCLUDED.quantity,
+                 discount = EXCLUDED.discount,
+                 tax = EXCLUDED.tax,
+                 tax_included = EXCLUDED.tax_included,
+                 shipping = EXCLUDED.shipping,
+                 customer = EXCLUDED.customer,
+                 customer_name = EXCLUDED.customer_name,
+                 status = EXCLUDED.status,
+                 canceled = EXCLUDED.canceled,
+                 costeach = EXCLUDED.costeach
+               RETURNING xmax = 0 as inserted
+             )
+             SELECT
+               COUNT(*) FILTER (WHERE inserted) as inserted,
+               COUNT(*) FILTER (WHERE NOT inserted) as updated
+             FROM inserted_orders
+           `, batchValues);
+
+           const { inserted, updated } = result.rows[0];
+           recordsAdded += parseInt(inserted) || 0;
+           recordsUpdated += parseInt(updated) || 0;
+           importedCount += subBatch.length;
+         }
+         await localConnection.commit();
+
+         cumulativeProcessedOrders += processedOrders.size;
+         outputProgress({
+           status: "running",
+           operation: "Orders import",
+           message: `Importing orders: ${cumulativeProcessedOrders} of ${totalUniqueOrders}`,
+           current: cumulativeProcessedOrders,
+           total: totalUniqueOrders,
+           elapsed: formatElapsedTime((Date.now() - startTime) / 1000),
+           remaining: estimateRemaining(startTime, cumulativeProcessedOrders, totalUniqueOrders),
+           rate: calculateRate(startTime, cumulativeProcessedOrders)
+         });
+       } catch (error) {
+         await localConnection.rollback();
+         throw error;
+       }
      }
    }
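The three-part discount in the final `SELECT` (sale savings, prorated order-level discount, item-level promo sum) can be mirrored in plain JavaScript to make the arithmetic easy to check. This sketch assumes the same field meanings as the SQL; the function itself is illustrative, not code from the diff:

```javascript
// Mirror of the line-discount formula: Part 1 + Part 2 + Part 3.
// Rounding to 4 decimals matches the SQL ROUND(..., 4) for positive inputs.
function lineDiscount({ baseDiscount, price, quantity,
                        summaryDiscountSubtotal, summarySubtotal, promoSum }) {
  const saleSavings = baseDiscount * quantity;           // Part 1
  const lineTotal = price * quantity;
  const prorated = summaryDiscountSubtotal > 0 && summarySubtotal > 0
    ? Math.round((summaryDiscountSubtotal * lineTotal / summarySubtotal) * 10000) / 10000
    : 0;                                                 // Part 2
  return saleSavings + prorated + (promoSum || 0);       // Part 3
}

// A $100 line in a $200 order with a $10 order-level discount gets $5 of it.
console.log(lineDiscount({
  baseDiscount: 0, price: 50, quantity: 2,
  summaryDiscountSubtotal: 10, summarySubtotal: 200, promoSum: 0,
})); // → 5
```

The `NULLIF(om.summary_subtotal, 0)` guard in the SQL corresponds to the `summarySubtotal > 0` check here: both prevent a divide-by-zero when an order has an empty subtotal.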
-   // Update sync status
-   await localConnection.query(`
-     INSERT INTO sync_status (table_name, last_sync_timestamp)
-     VALUES ('orders', NOW())
-     ON CONFLICT (table_name) DO UPDATE SET
-       last_sync_timestamp = NOW()
-   `);
-
-   // Cleanup temporary tables
-   await localConnection.query(`
-     DROP TABLE IF EXISTS temp_order_items;
-     DROP TABLE IF EXISTS temp_order_meta;
-     DROP TABLE IF EXISTS temp_order_discounts;
-     DROP TABLE IF EXISTS temp_order_taxes;
-     DROP TABLE IF EXISTS temp_order_costs;
-   `);
-
-   // Commit transaction
-   await localConnection.commit();
+   // Start a transaction for updating sync status and dropping temp tables
+   await localConnection.beginTransaction();
+   try {
+     // Update sync status
+     await localConnection.query(`
+       INSERT INTO sync_status (table_name, last_sync_timestamp)
+       VALUES ('orders', NOW())
+       ON CONFLICT (table_name) DO UPDATE SET
+         last_sync_timestamp = NOW()
+     `);
+
+     // Cleanup temporary tables
+     await localConnection.query(`
+       DROP TABLE IF EXISTS temp_order_items;
+       DROP TABLE IF EXISTS temp_order_meta;
+       DROP TABLE IF EXISTS temp_order_discounts;
+       DROP TABLE IF EXISTS temp_order_taxes;
+       DROP TABLE IF EXISTS temp_order_costs;
+       DROP TABLE IF EXISTS temp_main_discounts;
+       DROP TABLE IF EXISTS temp_item_discounts;
+     `);
+
+     // Commit final transaction
+     await localConnection.commit();
+   } catch (error) {
+     await localConnection.rollback();
+     throw error;
+   }

    return {
      status: "complete",
@@ -604,16 +756,8 @@ async function importOrders(prodConnection, localConnection, incrementalUpdate =
    };
  } catch (error) {
    console.error("Error during orders import:", error);
-
-   // Rollback transaction
-   try {
-     await localConnection.rollback();
-   } catch (rollbackError) {
-     console.error("Error during rollback:", rollbackError);
-   }
    throw error;
  }
}

module.exports = importOrders;

View File

@@ -8,29 +8,7 @@ dotenv.config({ path: path.join(__dirname, "../../.env") });
// Utility functions
const imageUrlBase = process.env.PRODUCT_IMAGE_URL_BASE || 'https://sbing.com/i/products/0000/';

-// Modified to accept a db connection for querying product_images
-const getImageUrls = async (pid, prodConnection, iid = null) => {
-  // If iid isn't provided, try to get it from product_images
-  if (iid === null && prodConnection) {
-    try {
-      // Query for images with order=255 (default/primary images)
-      const [primaryImages] = await prodConnection.query(
-        'SELECT iid FROM product_images WHERE pid = ? AND `order` = 255 LIMIT 1',
-        [pid]
-      );
-      // Use the found iid or default to 1
-      iid = primaryImages.length > 0 ? primaryImages[0].iid : 1;
-    } catch (error) {
-      console.error(`Error fetching primary image for pid ${pid}:`, error);
-      iid = 1; // Fallback to default
-    }
-  } else {
-    // Use default if connection not provided
-    iid = iid || 1;
-  }
+const getImageUrls = (pid, iid = 1) => {
  const paddedPid = pid.toString().padStart(6, '0');
  // Use padded PID only for the first 3 digits
  const prefix = paddedPid.slice(0, 3);
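The padding scheme above can be sketched in isolation; only the zero-padding and three-digit prefix are shown in the diff, so everything past the prefix (file name, extension) is left out here rather than guessed:

```javascript
// Derive the padded pid and directory prefix used by the image URL scheme:
// pad the pid to six digits, then take the first three as the prefix.
function imagePrefix(pid) {
  const paddedPid = pid.toString().padStart(6, '0');
  return { paddedPid, prefix: paddedPid.slice(0, 3) };
}

console.log(imagePrefix(12345)); // → { paddedPid: '012345', prefix: '012' }
```

Note that pids longer than six digits are left unpadded by `padStart`, so the prefix simply becomes their first three digits.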
@@ -120,6 +98,7 @@ async function setupTemporaryTables(connection) {
@@ -120,6 +98,7 @@ async function setupTemporaryTables(connection) {
      baskets INTEGER,
      notifies INTEGER,
      date_last_sold TIMESTAMP WITH TIME ZONE,
+     primary_iid INTEGER,
      image TEXT,
      image_175 TEXT,
      image_full TEXT,
@@ -215,8 +194,12 @@ async function importMissingProducts(prodConnection, localConnection, missingPid
         p.country_of_origin,
         (SELECT COUNT(*) FROM mybasket mb WHERE mb.item = p.pid AND mb.qty > 0) AS baskets,
         (SELECT COUNT(*) FROM product_notify pn WHERE pn.pid = p.pid) AS notifies,
-        (SELECT COALESCE(SUM(oi.qty_ordered), 0) FROM order_items oi WHERE oi.prod_pid = p.pid) AS total_sold,
+        (SELECT COALESCE(SUM(oi.qty_ordered), 0)
+         FROM order_items oi
+         JOIN _order o ON oi.order_id = o.order_id
+         WHERE oi.prod_pid = p.pid AND o.order_status >= 20) AS total_sold,
         pls.date_sold as date_last_sold,
+        (SELECT iid FROM product_images WHERE pid = p.pid AND \`order\` = 255 LIMIT 1) AS primary_iid,
         GROUP_CONCAT(DISTINCT CASE
             WHEN pc.cat_id IS NOT NULL
             AND pc.type IN (10, 20, 11, 21, 12, 13)
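The tightened `total_sold` subquery now joins through `_order` so only items from orders at or past a completed status (`order_status >= 20`) are counted. The same rule as a small JavaScript sketch (field names are illustrative):

```javascript
// Sketch of the tightened total_sold rule: count only items whose parent
// order has reached the assumed "completed" status threshold (>= 20).
const totalSold = (orderItems, orders) => {
  const statusById = new Map(orders.map(o => [o.orderId, o.status]));
  return orderItems
    .filter(i => (statusById.get(i.orderId) ?? 0) >= 20)
    .reduce((sum, i) => sum + i.qtyOrdered, 0);
};
```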
@@ -255,15 +238,13 @@ async function importMissingProducts(prodConnection, localConnection, missingPid
     const batch = prodData.slice(i, i + BATCH_SIZE);
     const placeholders = batch.map((_, idx) => {
-      const base = idx * 47; // 47 columns
-      return `(${Array.from({ length: 47 }, (_, i) => `$${base + i + 1}`).join(', ')})`;
+      const base = idx * 48; // 48 columns
+      return `(${Array.from({ length: 48 }, (_, i) => `$${base + i + 1}`).join(', ')})`;
     }).join(',');

-    // Process image URLs for the batch
-    const processedValues = [];
-    for (const row of batch) {
-      const imageUrls = await getImageUrls(row.pid, prodConnection);
-      processedValues.push([
+    const values = batch.flatMap(row => {
+      const imageUrls = getImageUrls(row.pid, row.primary_iid || 1);
+      return [
         row.pid,
         row.title,
         row.description,
@@ -306,15 +287,14 @@ async function importMissingProducts(prodConnection, localConnection, missingPid
         row.baskets,
         row.notifies,
         validateDate(row.date_last_sold),
+        row.primary_iid,
         imageUrls.image,
         imageUrls.image_175,
         imageUrls.image_full,
         null,
         null
-      ]);
-    }
-    const values = processedValues.flat();
+      ];
+    });

     const [result] = await localConnection.query(`
       WITH inserted_products AS (
@@ -325,7 +305,7 @@ async function importMissingProducts(prodConnection, localConnection, missingPid
           landing_cost_price, barcode, harmonized_tariff_code, updated_at, visible,
           managing_stock, replenishable, permalink, moq, uom, rating, reviews,
           weight, length, width, height, country_of_origin, location, total_sold,
-          baskets, notifies, date_last_sold, image, image_175, image_full, options, tags
+          baskets, notifies, date_last_sold, primary_iid, image, image_175, image_full, options, tags
         )
         VALUES ${placeholders}
         ON CONFLICT (pid) DO NOTHING
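The `placeholders` expression builds one numbered `($1, $2, …)` group per row, with parameter numbers continuing across the batch, so a whole batch inserts in a single parameterized PostgreSQL query. A standalone sketch of the pattern:

```javascript
// One "($1, $2, ...)" group per row; numbering continues across rows so the
// flattened values array lines up with the placeholders.
const makePlaceholders = (rowCount, colCount) =>
  Array.from({ length: rowCount }, (_, idx) => {
    const base = idx * colCount;
    return `(${Array.from({ length: colCount }, (_, i) => `$${base + i + 1}`).join(', ')})`;
  }).join(',');

console.log(makePlaceholders(2, 3)); // → ($1, $2, $3),($4, $5, $6)
```

The column count (here 48 after adding `primary_iid`) must match both the placeholder width and the per-row values array, which is why both literals changed together in this hunk.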
@@ -420,8 +400,12 @@ async function materializeCalculations(prodConnection, localConnection, incremen
         p.country_of_origin,
         (SELECT COUNT(*) FROM mybasket mb WHERE mb.item = p.pid AND mb.qty > 0) AS baskets,
         (SELECT COUNT(*) FROM product_notify pn WHERE pn.pid = p.pid) AS notifies,
-        (SELECT COALESCE(SUM(oi.qty_ordered), 0) FROM order_items oi WHERE oi.prod_pid = p.pid) AS total_sold,
+        (SELECT COALESCE(SUM(oi.qty_ordered), 0)
+         FROM order_items oi
+         JOIN _order o ON oi.order_id = o.order_id
+         WHERE oi.prod_pid = p.pid AND o.order_status >= 20) AS total_sold,
         pls.date_sold as date_last_sold,
+        (SELECT iid FROM product_images WHERE pid = p.pid AND \`order\` = 255 LIMIT 1) AS primary_iid,
         GROUP_CONCAT(DISTINCT CASE
             WHEN pc.cat_id IS NOT NULL
             AND pc.type IN (10, 20, 11, 21, 12, 13)
@@ -448,9 +432,11 @@ async function materializeCalculations(prodConnection, localConnection, incremen
pcp.date_deactive > ? OR pcp.date_deactive > ? OR
pcp.date_active > ? OR pcp.date_active > ? OR
pnb.date_updated > ? pnb.date_updated > ?
-- Add condition for product_images changes if needed for incremental updates
-- OR EXISTS (SELECT 1 FROM product_images pi WHERE pi.pid = p.pid AND pi.stamp > ?)
` : 'TRUE'} ` : 'TRUE'}
GROUP BY p.pid GROUP BY p.pid
`, incrementalUpdate ? [lastSyncTime, lastSyncTime, lastSyncTime, lastSyncTime, lastSyncTime] : []); `, incrementalUpdate ? [lastSyncTime, lastSyncTime, lastSyncTime, lastSyncTime, lastSyncTime /*, lastSyncTime */] : []);
outputProgress({ outputProgress({
status: "running", status: "running",
@@ -464,15 +450,13 @@ async function materializeCalculations(prodConnection, localConnection, incremen
await withRetry(async () => { await withRetry(async () => {
const placeholders = batch.map((_, idx) => { const placeholders = batch.map((_, idx) => {
const base = idx * 47; // 47 columns const base = idx * 48; // 48 columns
return `(${Array.from({ length: 47 }, (_, i) => `$${base + i + 1}`).join(', ')})`; return `(${Array.from({ length: 48 }, (_, i) => `$${base + i + 1}`).join(', ')})`;
}).join(','); }).join(',');
// Process image URLs for the batch const values = batch.flatMap(row => {
const processedValues = []; const imageUrls = getImageUrls(row.pid, row.primary_iid || 1);
for (const row of batch) { return [
const imageUrls = await getImageUrls(row.pid, prodConnection);
processedValues.push([
row.pid, row.pid,
row.title, row.title,
row.description, row.description,
@@ -515,15 +499,14 @@ async function materializeCalculations(prodConnection, localConnection, incremen
row.baskets, row.baskets,
row.notifies, row.notifies,
validateDate(row.date_last_sold), validateDate(row.date_last_sold),
row.primary_iid,
imageUrls.image, imageUrls.image,
imageUrls.image_175, imageUrls.image_175,
imageUrls.image_full, imageUrls.image_full,
null, null,
null null
]); ];
} });
const values = processedValues.flat();
       await localConnection.query(`
         INSERT INTO temp_products (
@@ -533,7 +516,7 @@ async function materializeCalculations(prodConnection, localConnection, incremen
           landing_cost_price, barcode, harmonized_tariff_code, updated_at, visible,
           managing_stock, replenishable, permalink, moq, uom, rating, reviews,
           weight, length, width, height, country_of_origin, location, total_sold,
-          baskets, notifies, date_last_sold, image, image_175, image_full, options, tags
+          baskets, notifies, date_last_sold, primary_iid, image, image_175, image_full, options, tags
         ) VALUES ${placeholders}
         ON CONFLICT (pid) DO UPDATE SET
           title = EXCLUDED.title,
@@ -576,6 +559,7 @@ async function materializeCalculations(prodConnection, localConnection, incremen
           baskets = EXCLUDED.baskets,
           notifies = EXCLUDED.notifies,
           date_last_sold = EXCLUDED.date_last_sold,
+          primary_iid = EXCLUDED.primary_iid,
           image = EXCLUDED.image,
           image_175 = EXCLUDED.image_175,
           image_full = EXCLUDED.image_full,
@@ -674,6 +658,7 @@ async function importProducts(prodConnection, localConnection, incrementalUpdate
         t.baskets,
         t.notifies,
         t.date_last_sold,
+        t.primary_iid,
         t.image,
         t.image_175,
         t.image_full,
@@ -695,11 +680,9 @@ async function importProducts(prodConnection, localConnection, incrementalUpdate
         return `(${Array.from({ length: 47 }, (_, i) => `$${base + i + 1}`).join(', ')})`;
       }).join(',');

-      // Process image URLs for the batch
-      const processedValues = [];
-      for (const row of batch) {
-        const imageUrls = await getImageUrls(row.pid, prodConnection);
-        processedValues.push([
+      const values = batch.flatMap(row => {
+        const imageUrls = getImageUrls(row.pid, row.primary_iid || 1);
+        return [
           row.pid,
           row.title,
           row.description,
@@ -747,10 +730,8 @@ async function importProducts(prodConnection, localConnection, incrementalUpdate
           imageUrls.image_full,
           row.options,
           row.tags
-        ]);
-      }
-      const values = processedValues.flat();
+        ];
+      });

       const [result] = await localConnection.query(`
         WITH upserted AS (

View File

@@ -31,7 +31,8 @@ BEGIN
             p.stock_quantity as current_stock, -- Use actual current stock for forecast base
             p.created_at, p.first_received, p.date_last_sold,
             p.moq,
-            p.uom
+            p.uom,
+            p.total_sold as historical_total_sold -- Add historical total_sold from products table
         FROM public.products p
     ),
     OnOrderInfo AS (
@@ -99,9 +100,30 @@ BEGIN
             AVG(CASE WHEN snapshot_date BETWEEN _calculation_date - INTERVAL '29 days' AND _calculation_date THEN eod_stock_retail END) AS avg_stock_retail_30d,
             AVG(CASE WHEN snapshot_date BETWEEN _calculation_date - INTERVAL '29 days' AND _calculation_date THEN eod_stock_gross END) AS avg_stock_gross_30d,
-            -- Lifetime (Sum over ALL available snapshots up to calculation date)
-            SUM(units_sold) AS lifetime_sales,
-            SUM(net_revenue) AS lifetime_revenue,
+            -- Lifetime (Using historical total from products table)
+            (SELECT total_sold FROM public.products WHERE public.products.pid = daily_product_snapshots.pid) AS lifetime_sales,
+            COALESCE(
+                -- Option 1: Use 30-day average price if available
+                CASE WHEN SUM(CASE WHEN snapshot_date >= _calculation_date - INTERVAL '29 days' AND snapshot_date <= _calculation_date THEN units_sold ELSE 0 END) > 0 THEN
+                    (SELECT total_sold FROM public.products WHERE public.products.pid = daily_product_snapshots.pid) * (
+                        SUM(CASE WHEN snapshot_date >= _calculation_date - INTERVAL '29 days' AND snapshot_date <= _calculation_date THEN net_revenue ELSE 0 END) /
+                        NULLIF(SUM(CASE WHEN snapshot_date >= _calculation_date - INTERVAL '29 days' AND snapshot_date <= _calculation_date THEN units_sold ELSE 0 END), 0)
+                    )
+                ELSE NULL END,
+                -- Option 2: Try 365-day average price if available
+                CASE WHEN SUM(CASE WHEN snapshot_date >= _calculation_date - INTERVAL '364 days' AND snapshot_date <= _calculation_date THEN units_sold ELSE 0 END) > 0 THEN
+                    (SELECT total_sold FROM public.products WHERE public.products.pid = daily_product_snapshots.pid) * (
+                        SUM(CASE WHEN snapshot_date >= _calculation_date - INTERVAL '364 days' AND snapshot_date <= _calculation_date THEN net_revenue ELSE 0 END) /
+                        NULLIF(SUM(CASE WHEN snapshot_date >= _calculation_date - INTERVAL '364 days' AND snapshot_date <= _calculation_date THEN units_sold ELSE 0 END), 0)
+                    )
+                ELSE NULL END,
+                -- Option 3: Use current price from products table
+                (SELECT total_sold * price FROM public.products WHERE public.products.pid = daily_product_snapshots.pid),
+                -- Option 4: Use regular price if current price might be zero
+                (SELECT total_sold * regular_price FROM public.products WHERE public.products.pid = daily_product_snapshots.pid),
+                -- Final fallback: Use accumulated revenue (less accurate for old products)
+                SUM(net_revenue)
+            ) AS lifetime_revenue,
             -- Yesterday (Sales for the specific _calculation_date)
             SUM(CASE WHEN snapshot_date = _calculation_date THEN units_sold ELSE 0 END) as yesterday_sales
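The COALESCE chain estimates lifetime revenue from the best available average price, falling through to progressively weaker signals. The same decision ladder as a JavaScript sketch (names are illustrative; like the SQL's stated intent, zero prices are skipped):

```javascript
// Decision ladder for estimating lifetime revenue, mirroring the SQL COALESCE:
// 30-day avg price -> 365-day avg price -> current price -> regular price ->
// accumulated snapshot revenue (least accurate for old products).
function estimateLifetimeRevenue({ totalSold, rev30, units30, rev365, units365,
                                   price, regularPrice, accumulatedRevenue }) {
  if (units30 > 0) return totalSold * (rev30 / units30);    // Option 1
  if (units365 > 0) return totalSold * (rev365 / units365); // Option 2
  if (price > 0) return totalSold * price;                  // Option 3
  if (regularPrice > 0) return totalSold * regularPrice;    // Option 4
  return accumulatedRevenue;                                // final fallback
}
```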

View File

@@ -1,4 +1,4 @@
--- Description: Calculates and updates daily aggregated product data for the current day.
+-- Description: Calculates and updates daily aggregated product data for recent days.
 -- Uses UPSERT (INSERT ON CONFLICT UPDATE) for idempotency.
 -- Dependencies: Core import tables (products, orders, purchase_orders), calculate_status table.
 -- Frequency: Hourly (Run ~5-10 minutes after hourly data import completes).
@@ -8,211 +8,243 @@ DECLARE
     _module_name TEXT := 'daily_snapshots';
     _start_time TIMESTAMPTZ := clock_timestamp(); -- Time execution started
     _last_calc_time TIMESTAMPTZ;
-    _target_date DATE := CURRENT_DATE; -- Always recalculate today for simplicity with hourly runs
+    _target_date DATE; -- Will be set in the loop
     _total_records INT := 0;
     _has_orders BOOLEAN := FALSE;
+    _process_days INT := 5; -- Number of days to check/process (today plus previous 4 days)
+    _day_counter INT;
+    _missing_days INT[] := ARRAY[]::INT[]; -- Array to store days with missing or incomplete data
 BEGIN
     -- Get the timestamp before the last successful run of this module
     SELECT last_calculation_timestamp INTO _last_calc_time
     FROM public.calculate_status
     WHERE module_name = _module_name;

-    RAISE NOTICE 'Running % for date %. Start Time: %', _module_name, _target_date, _start_time;
+    RAISE NOTICE 'Running % script. Start Time: %', _module_name, _start_time;

-    -- CRITICAL FIX: Check if we have any orders or receiving activity for today
-    -- to prevent creating artificial records when no real activity exists
-    SELECT EXISTS (
-        SELECT 1 FROM public.orders WHERE date::date = _target_date
-        UNION
-        SELECT 1 FROM public.purchase_orders
-        WHERE date::date = _target_date
-        OR EXISTS (
-            SELECT 1 FROM jsonb_array_elements(receiving_history) AS rh
-            WHERE jsonb_typeof(receiving_history) = 'array'
-            AND (
-                (rh->>'date')::date = _target_date OR
-                (rh->>'received_at')::date = _target_date OR
-                (rh->>'receipt_date')::date = _target_date
-            )
-        )
-        LIMIT 1
-    ) INTO _has_orders;
-
-    -- If no orders or receiving activity found for today, log and exit
-    IF NOT _has_orders THEN
-        RAISE NOTICE 'No orders or receiving activity found for % - skipping daily snapshot creation', _target_date;
-        -- Still update the calculate_status to prevent repeated attempts
+    -- First, check which days need processing by comparing orders data with snapshot data
+    FOR _day_counter IN 0..(_process_days-1) LOOP
+        _target_date := CURRENT_DATE - (_day_counter * INTERVAL '1 day');
+        -- Check if this date needs updating by comparing orders to snapshot data
+        -- If the date has orders but not enough snapshots, or if snapshots show zero sales but orders exist, it's incomplete
+        SELECT
+            CASE WHEN (
+                -- We have orders for this date but not enough snapshots, or snapshots with wrong total
+                (EXISTS (SELECT 1 FROM public.orders WHERE date::date = _target_date) AND
+                    (
+                        -- No snapshots exist for this date
+                        NOT EXISTS (SELECT 1 FROM public.daily_product_snapshots WHERE snapshot_date = _target_date) OR
+                        -- Or snapshots show zero sales but orders exist
+                        (SELECT COALESCE(SUM(units_sold), 0) FROM public.daily_product_snapshots WHERE snapshot_date = _target_date) = 0 OR
+                        -- Or the count of snapshot records is significantly less than distinct products in orders
+                        (SELECT COUNT(*) FROM public.daily_product_snapshots WHERE snapshot_date = _target_date) <
+                        (SELECT COUNT(DISTINCT pid) FROM public.orders WHERE date::date = _target_date) * 0.8
+                    )
+                )
+            ) THEN TRUE ELSE FALSE END
+        INTO _has_orders;
+
+        IF _has_orders THEN
+            -- This day needs processing - add to our array
+            _missing_days := _missing_days || _day_counter;
+            RAISE NOTICE 'Day % needs updating (incomplete or missing data)', _target_date;
+        END IF;
+    END LOOP;
+
+    -- If no days need updating, exit early
+    IF array_length(_missing_days, 1) IS NULL THEN
+        RAISE NOTICE 'No days need updating - all snapshot data appears complete';
+        -- Still update the calculate_status to record this run
         UPDATE public.calculate_status
         SET last_calculation_timestamp = _start_time
         WHERE module_name = _module_name;
-        RETURN; -- Exit without creating snapshots
+        RETURN;
     END IF;

-    -- IMPORTANT: First delete any existing data for this date to prevent duplication
-    DELETE FROM public.daily_product_snapshots
-    WHERE snapshot_date = _target_date;
+    RAISE NOTICE 'Need to update % days with missing or incomplete data', array_length(_missing_days, 1);
+
+    -- Process only the days that need updating
+    FOREACH _day_counter IN ARRAY _missing_days LOOP
+        _target_date := CURRENT_DATE - (_day_counter * INTERVAL '1 day');
+        RAISE NOTICE 'Processing date: %', _target_date;
+
+        -- IMPORTANT: First delete any existing data for this date to prevent duplication
+        DELETE FROM public.daily_product_snapshots
+        WHERE snapshot_date = _target_date;

     -- Proceed with calculating daily metrics only for products with actual activity
     WITH SalesData AS (
         SELECT
             p.pid,
             p.sku,
             -- Track number of orders to ensure we have real data
             COUNT(o.id) as order_count,
             -- Aggregate Sales (Quantity > 0, Status not Canceled/Returned)
             COALESCE(SUM(CASE WHEN o.quantity > 0 AND COALESCE(o.status, 'pending') NOT IN ('canceled', 'returned') THEN o.quantity ELSE 0 END), 0) AS units_sold,
             COALESCE(SUM(CASE WHEN o.quantity > 0 AND COALESCE(o.status, 'pending') NOT IN ('canceled', 'returned') THEN o.price * o.quantity ELSE 0 END), 0.00) AS gross_revenue_unadjusted, -- Before discount
             COALESCE(SUM(CASE WHEN o.quantity > 0 AND COALESCE(o.status, 'pending') NOT IN ('canceled', 'returned') THEN o.discount ELSE 0 END), 0.00) AS discounts,
             COALESCE(SUM(CASE WHEN o.quantity > 0 AND COALESCE(o.status, 'pending') NOT IN ('canceled', 'returned') THEN COALESCE(o.costeach, p.landing_cost_price, p.cost_price) * o.quantity ELSE 0 END), 0.00) AS cogs,
             COALESCE(SUM(CASE WHEN o.quantity > 0 AND COALESCE(o.status, 'pending') NOT IN ('canceled', 'returned') THEN p.regular_price * o.quantity ELSE 0 END), 0.00) AS gross_regular_revenue, -- Use current regular price for simplicity here
             -- Aggregate Returns (Quantity < 0 or Status = Returned)
             COALESCE(SUM(CASE WHEN o.quantity < 0 OR COALESCE(o.status, 'pending') = 'returned' THEN ABS(o.quantity) ELSE 0 END), 0) AS units_returned,
             COALESCE(SUM(CASE WHEN o.quantity < 0 OR COALESCE(o.status, 'pending') = 'returned' THEN o.price * ABS(o.quantity) ELSE 0 END), 0.00) AS returns_revenue
-        FROM public.products p -- Start from products to include those with no orders today
-        LEFT JOIN public.orders o
+        FROM public.products p
+        JOIN public.orders o -- Changed to INNER JOIN to only process products with orders
             ON p.pid = o.pid
             AND o.date::date = _target_date -- Cast to date to ensure compatibility regardless of original type
         GROUP BY p.pid, p.sku
-        HAVING COUNT(o.id) > 0 -- CRITICAL: Only include products with actual orders
+        -- No HAVING clause here - we always want to include all orders
     ),
     ReceivingData AS (
         SELECT
             po.pid,
             -- Track number of POs to ensure we have real data
             COUNT(po.po_id) as po_count,
             -- Prioritize the actual table fields over the JSON data
             COALESCE(
                 -- First try the received field from purchase_orders table
                 SUM(CASE WHEN po.date::date = _target_date THEN po.received ELSE 0 END),
                 -- Otherwise fall back to the receiving_history JSON as secondary source
                 SUM(
                     CASE
                         WHEN (rh.item->>'date')::date = _target_date THEN (rh.item->>'qty')::numeric
                         WHEN (rh.item->>'received_at')::date = _target_date THEN (rh.item->>'qty')::numeric
                         WHEN (rh.item->>'receipt_date')::date = _target_date THEN (rh.item->>'qty')::numeric
                         ELSE 0
                     END
                 ),
                 0
             ) AS units_received,
             COALESCE(
                 -- First try the actual cost_price from purchase_orders
                 SUM(CASE WHEN po.date::date = _target_date THEN po.received * po.cost_price ELSE 0 END),
                 -- Otherwise fall back to receiving_history JSON
                 SUM(
                     CASE
                         WHEN (rh.item->>'date')::date = _target_date THEN (rh.item->>'qty')::numeric
                         WHEN (rh.item->>'received_at')::date = _target_date THEN (rh.item->>'qty')::numeric
                         WHEN (rh.item->>'receipt_date')::date = _target_date THEN (rh.item->>'qty')::numeric
                         ELSE 0
                     END
                     * COALESCE((rh.item->>'cost')::numeric, po.cost_price)
                 ),
                 0.00
             ) AS cost_received
         FROM public.purchase_orders po
         LEFT JOIN LATERAL jsonb_array_elements(po.receiving_history) AS rh(item) ON
             jsonb_typeof(po.receiving_history) = 'array' AND
             jsonb_array_length(po.receiving_history) > 0 AND
             (
                 (rh.item->>'date')::date = _target_date OR
                 (rh.item->>'received_at')::date = _target_date OR
                 (rh.item->>'receipt_date')::date = _target_date
             )
         -- Include POs with the current date or relevant receiving_history
         WHERE
             po.date::date = _target_date OR
             jsonb_typeof(po.receiving_history) = 'array' AND
             jsonb_array_length(po.receiving_history) > 0
         GROUP BY po.pid
         -- CRITICAL: Only include products with actual receiving activity
         HAVING COUNT(po.po_id) > 0 OR SUM(
             CASE
                 WHEN (rh.item->>'date')::date = _target_date THEN (rh.item->>'qty')::numeric
                 WHEN (rh.item->>'received_at')::date = _target_date THEN (rh.item->>'qty')::numeric
                 WHEN (rh.item->>'receipt_date')::date = _target_date THEN (rh.item->>'qty')::numeric
                 ELSE 0
             END
         ) > 0
     ),
     CurrentStock AS (
         -- Select current stock values directly from products table
         SELECT
             pid,
             stock_quantity,
             COALESCE(landing_cost_price, cost_price, 0.00) as effective_cost_price,
             COALESCE(price, 0.00) as current_price,
             COALESCE(regular_price, 0.00) as current_regular_price
         FROM public.products
-    )
+    ),
+    ProductsWithActivity AS (
+        -- Quick pre-filter to only process products with activity
+        SELECT DISTINCT pid
+        FROM (
+            SELECT pid FROM SalesData
+            UNION
+            SELECT pid FROM ReceivingData
+        ) a
+    )
     -- Now insert records, but ONLY for products with actual activity
     INSERT INTO public.daily_product_snapshots (
         snapshot_date,
         pid,
         sku,
         eod_stock_quantity,
         eod_stock_cost,
         eod_stock_retail,
         eod_stock_gross,
         stockout_flag,
         units_sold,
         units_returned,
         gross_revenue,
         discounts,
         returns_revenue,
         net_revenue,
         cogs,
         gross_regular_revenue,
         profit,
         units_received,
         cost_received,
         calculation_timestamp
     )
     SELECT
         _target_date AS snapshot_date,
         COALESCE(sd.pid, rd.pid) AS pid, -- Use sales or receiving PID
         COALESCE(sd.sku, p.sku) AS sku, -- Get SKU from sales data or products table
         -- Inventory Metrics (Using CurrentStock)
         cs.stock_quantity AS eod_stock_quantity,
         cs.stock_quantity * cs.effective_cost_price AS eod_stock_cost,
         cs.stock_quantity * cs.current_price AS eod_stock_retail,
         cs.stock_quantity * cs.current_regular_price AS eod_stock_gross,
         (cs.stock_quantity <= 0) AS stockout_flag,
         -- Sales Metrics (From SalesData)
         COALESCE(sd.units_sold, 0),
         COALESCE(sd.units_returned, 0),
         COALESCE(sd.gross_revenue_unadjusted, 0.00),
         COALESCE(sd.discounts, 0.00),
         COALESCE(sd.returns_revenue, 0.00),
         COALESCE(sd.gross_revenue_unadjusted, 0.00) - COALESCE(sd.discounts, 0.00) AS net_revenue,
         COALESCE(sd.cogs, 0.00),
         COALESCE(sd.gross_regular_revenue, 0.00),
         (COALESCE(sd.gross_revenue_unadjusted, 0.00) - COALESCE(sd.discounts, 0.00)) - COALESCE(sd.cogs, 0.00) AS profit, -- Basic profit: Net Revenue - COGS
         -- Receiving Metrics (From ReceivingData)
         COALESCE(rd.units_received, 0),
         COALESCE(rd.cost_received, 0.00),
         _start_time -- Timestamp of this calculation run
     FROM SalesData sd
     FULL OUTER JOIN ReceivingData rd ON sd.pid = rd.pid
+    JOIN ProductsWithActivity pwa ON COALESCE(sd.pid, rd.pid) = pwa.pid
     LEFT JOIN public.products p ON COALESCE(sd.pid, rd.pid) = p.pid
     LEFT JOIN CurrentStock cs ON COALESCE(sd.pid, rd.pid) = cs.pid
     WHERE p.pid IS NOT NULL; -- Ensure we only insert for existing products

-    -- Get the total number of records inserted
+    -- Get the total number of records inserted for this date
     GET DIAGNOSTICS _total_records = ROW_COUNT;
     RAISE NOTICE 'Created % daily snapshot records for % with sales/receiving activity', _total_records, _target_date;
+    END LOOP;

     -- Update the status table with the timestamp from the START of this run
     UPDATE public.calculate_status
     SET last_calculation_timestamp = _start_time
     WHERE module_name = _module_name;

-    RAISE NOTICE 'Finished % for date %. Duration: %', _module_name, _target_date, clock_timestamp() - _start_time;
+    RAISE NOTICE 'Finished % processing for multiple dates. Duration: %', _module_name, clock_timestamp() - _start_time;
 END $$;
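The reworked function scans the last few days, flags any day whose snapshots are missing or inconsistent with the orders data, and reprocesses only the flagged days. The detection heuristic can be sketched as follows (function and field names are illustrative placeholders, not the real implementation):

```javascript
// Flag a day as needing a rebuild when it has orders but: no snapshots,
// snapshots summing to zero sales, or far fewer snapshot rows than
// distinct products ordered (the 0.8 threshold mirrors the SQL).
function findDaysNeedingUpdate({ ordersByDay, snapshotsByDay }, processDays = 5) {
  const missing = [];
  for (let offset = 0; offset < processDays; offset++) {
    const orders = ordersByDay[offset] ?? [];
    const snaps = snapshotsByDay[offset] ?? [];
    if (orders.length === 0) continue; // no activity, nothing to rebuild
    const snapUnits = snaps.reduce((sum, r) => sum + r.unitsSold, 0);
    const distinctPids = new Set(orders.map(o => o.pid)).size;
    const incomplete =
      snaps.length === 0 ||
      snapUnits === 0 ||
      snaps.length < distinctPids * 0.8;
    if (incomplete) missing.push(offset);
  }
  return missing;
}
```

Processing only flagged days keeps the hourly run cheap while still backfilling days whose imports arrived late.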

View File

@@ -57,6 +57,7 @@ BEGIN
             p.created_at,
             p.first_received,
             p.date_last_sold,
+            p.total_sold as historical_total_sold, -- Add historical total_sold from products table
             p.uom -- Assuming UOM logic is handled elsewhere or simple (e.g., 1=each)
         FROM public.products p
     ),
@@ -255,9 +256,25 @@ BEGIN
sa.stockout_days_30d, sa.sales_365d, sa.revenue_365d, sa.stockout_days_30d, sa.sales_365d, sa.revenue_365d,
sa.avg_stock_units_30d, sa.avg_stock_cost_30d, sa.avg_stock_retail_30d, sa.avg_stock_gross_30d, sa.avg_stock_units_30d, sa.avg_stock_cost_30d, sa.avg_stock_retail_30d, sa.avg_stock_gross_30d,
sa.received_qty_30d, sa.received_cost_30d, sa.received_qty_30d, sa.received_cost_30d,
-- Use total counts for lifetime values to ensure we have data even with limited history -- Use total_sold from products table as the source of truth for lifetime sales
COALESCE(sa.total_units_sold, sa.lifetime_sales) AS lifetime_sales, -- This includes all historical data from the production database
COALESCE(sa.total_net_revenue, sa.lifetime_revenue) AS lifetime_revenue, ci.historical_total_sold AS lifetime_sales,
COALESCE(
-- Option 1: Use 30-day average price if available
CASE WHEN sa.sales_30d > 0 THEN
ci.historical_total_sold * (sa.revenue_30d / NULLIF(sa.sales_30d, 0))
ELSE NULL END,
-- Option 2: Try 365-day average price if available
CASE WHEN sa.sales_365d > 0 THEN
ci.historical_total_sold * (sa.revenue_365d / NULLIF(sa.sales_365d, 0))
ELSE NULL END,
-- Option 3: Use current price as a reasonable estimate
ci.historical_total_sold * ci.current_price,
-- Option 4: Use regular price if current price might be zero
ci.historical_total_sold * ci.current_regular_price,
-- Final fallback: Use accumulated revenue (this is less accurate for old products)
sa.total_net_revenue
) AS lifetime_revenue,
fpm.first_7_days_sales, fpm.first_7_days_revenue, fpm.first_30_days_sales, fpm.first_30_days_revenue, fpm.first_7_days_sales, fpm.first_7_days_revenue, fpm.first_30_days_sales, fpm.first_30_days_revenue,
fpm.first_60_days_sales, fpm.first_60_days_revenue, fpm.first_90_days_sales, fpm.first_90_days_revenue, fpm.first_60_days_sales, fpm.first_60_days_revenue, fpm.first_90_days_sales, fpm.first_90_days_revenue,
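The fallback chain that the new `lifetime_revenue` expression encodes can be sketched in plain JavaScript. This is an illustrative sketch only: the field names mirror the SQL columns, but the function and its input object are hypothetical, not part of the codebase.

```javascript
// Hypothetical mirror of the SQL COALESCE chain above: prefer a recent
// average selling price, then an older one, then list prices, and fall
// back to accumulated revenue only when no price signal exists.
function estimateLifetimeRevenue(p) {
  const totalSold = p.historicalTotalSold;
  if (p.sales30d > 0) return totalSold * (p.revenue30d / p.sales30d);    // Option 1
  if (p.sales365d > 0) return totalSold * (p.revenue365d / p.sales365d); // Option 2
  if (p.currentPrice) return totalSold * p.currentPrice;                 // Option 3
  if (p.currentRegularPrice) return totalSold * p.currentRegularPrice;   // Option 4 (current price may be 0)
  return p.totalNetRevenue;                                              // Final fallback
}
```

Note that a zero `currentPrice` is falsy, so it falls through to the regular price, matching the "Option 4" comment in the SQL.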

View File

@@ -13,6 +13,22 @@ const dbConfig = {
   port: process.env.DB_PORT || 5432
 };
 
+// Tables to always protect from being dropped
+const PROTECTED_TABLES = [
+  'users',
+  'permissions',
+  'user_permissions',
+  'calculate_history',
+  'import_history',
+  'ai_prompts',
+  'ai_validation_performance',
+  'templates',
+  'reusable_images',
+  'imported_daily_inventory',
+  'imported_product_stat_history',
+  'imported_product_current_prices'
+];
+
 // Helper function to output progress in JSON format
 function outputProgress(data) {
   if (!data.status) {
@@ -33,17 +49,6 @@ const CORE_TABLES = [
   'product_categories'
 ];
 
-// Config tables that must be created
-const CONFIG_TABLES = [
-  'stock_thresholds',
-  'lead_time_thresholds',
-  'sales_velocity_config',
-  'abc_classification_config',
-  'safety_stock_config',
-  'sales_seasonality',
-  'turnover_config'
-];
-
 // Split SQL into individual statements
 function splitSQLStatements(sql) {
   // First, normalize line endings
@@ -184,8 +189,8 @@ async function resetDatabase() {
   SELECT string_agg(tablename, ', ') as tables
   FROM pg_tables
   WHERE schemaname = 'public'
-  AND tablename NOT IN ('users', 'permissions', 'user_permissions', 'calculate_history', 'import_history', 'ai_prompts', 'ai_validation_performance', 'templates', 'reusable_images');
-`);
+  AND tablename NOT IN (SELECT unnest($1::text[]));
+`, [PROTECTED_TABLES]);
 if (!tablesResult.rows[0].tables) {
   outputProgress({
@@ -204,7 +209,7 @@ async function resetDatabase() {
 // Drop all tables except users
 const tables = tablesResult.rows[0].tables.split(', ');
 for (const table of tables) {
-  if (!['users', 'reusable_images'].includes(table)) {
+  if (!PROTECTED_TABLES.includes(table)) {
     await client.query(`DROP TABLE IF EXISTS "${table}" CASCADE`);
   }
 }
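The change above makes the SQL filter and the JavaScript drop guard share one source of truth. Its effect can be checked in isolation; the table names below are illustrative, not the full production list:

```javascript
// Illustrative sketch: which tables a reset pass would drop once a single
// PROTECTED_TABLES list drives both the SQL filter and the JS guard.
const PROTECTED_TABLES = ['users', 'permissions', 'reusable_images'];

function tablesToDrop(allTables, protectedTables = PROTECTED_TABLES) {
  // Everything not on the protected list is eligible for DROP TABLE ... CASCADE
  return allTables.filter((t) => !protectedTables.includes(t));
}
```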
@@ -259,7 +264,9 @@ async function resetDatabase() {
   'category_metrics',
   'brand_metrics',
   'sales_forecasts',
-  'abc_classification'
+  'abc_classification',
+  'daily_snapshots',
+  'periodic_metrics'
 )
 `);
 }
@@ -301,51 +308,67 @@ async function resetDatabase() {
     }
   });
 
+  // Start a transaction for better error handling
+  await client.query('BEGIN');
+  try {
   for (let i = 0; i < statements.length; i++) {
     const stmt = statements[i];
     try {
       const result = await client.query(stmt);
 
       // Verify if table was created (if this was a CREATE TABLE statement)
       if (stmt.trim().toLowerCase().startsWith('create table')) {
         const tableName = stmt.match(/create\s+table\s+(?:if\s+not\s+exists\s+)?["]?(\w+)["]?/i)?.[1];
         if (tableName) {
           const tableExists = await client.query(`
             SELECT COUNT(*) as count
             FROM information_schema.tables
             WHERE table_schema = 'public'
             AND table_name = $1
           `, [tableName]);
 
           outputProgress({
             operation: 'Table Creation Verification',
             message: {
               table: tableName,
               exists: tableExists.rows[0].count > 0
             }
           });
         }
       }
 
       outputProgress({
         operation: 'SQL Progress',
         message: {
           statement: i + 1,
           total: statements.length,
           preview: stmt.substring(0, 100) + (stmt.length > 100 ? '...' : ''),
           rowCount: result.rowCount
         }
       });
+
+      // Commit in chunks of 10 statements to avoid long-running transactions
+      if (i > 0 && i % 10 === 0) {
+        await client.query('COMMIT');
+        await client.query('BEGIN');
+      }
     } catch (sqlError) {
+      await client.query('ROLLBACK');
       outputProgress({
         status: 'error',
         operation: 'SQL Error',
         error: sqlError.message,
         statement: stmt,
         statementNumber: i + 1
       });
       throw sqlError;
     }
   }
+
+  // Commit the final transaction
+  await client.query('COMMIT');
+  } catch (error) {
+    await client.query('ROLLBACK');
+    throw error;
+  }
 
 // Verify core tables were created
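The chunked-commit schedule introduced in this hunk can be expressed as a small pure function to make the boundary condition explicit. Function name and shape are illustrative, not part of the script:

```javascript
// Sketch of the intermediate-commit schedule: after running statement i
// (0-based), a COMMIT/BEGIN pair is issued when i > 0 && i % chunkSize === 0,
// i.e. after the 11th, 21st, ... statement; one final COMMIT closes the run.
function intermediateCommitIndices(statementCount, chunkSize = 10) {
  const indices = [];
  for (let i = 0; i < statementCount; i++) {
    if (i > 0 && i % chunkSize === 0) indices.push(i);
  }
  return indices;
}
```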
@@ -383,11 +406,25 @@ async function resetDatabase() {
   operation: 'Running config setup',
   message: 'Creating configuration tables...'
 });
-const configSchemaSQL = fs.readFileSync(
-  path.join(__dirname, '../db/config-schema-new.sql'),
-  'utf8'
-);
+const configSchemaPath = path.join(__dirname, '../db/config-schema-new.sql');
+
+// Verify file exists
+if (!fs.existsSync(configSchemaPath)) {
+  throw new Error(`Config schema file not found at: ${configSchemaPath}`);
+}
+
+const configSchemaSQL = fs.readFileSync(configSchemaPath, 'utf8');
+
+outputProgress({
+  operation: 'Config Schema file',
+  message: {
+    path: configSchemaPath,
+    exists: fs.existsSync(configSchemaPath),
+    size: fs.statSync(configSchemaPath).size,
+    firstFewLines: configSchemaSQL.split('\n').slice(0, 5).join('\n')
+  }
+});
 
 // Execute config schema statements one at a time
 const configStatements = splitSQLStatements(configSchemaSQL);
 outputProgress({
@@ -401,30 +438,46 @@ async function resetDatabase() {
     }
   });
 
+  // Start a transaction for better error handling
+  await client.query('BEGIN');
+  try {
   for (let i = 0; i < configStatements.length; i++) {
     const stmt = configStatements[i];
     try {
       const result = await client.query(stmt);
 
       outputProgress({
         operation: 'Config SQL Progress',
         message: {
           statement: i + 1,
           total: configStatements.length,
           preview: stmt.substring(0, 100) + (stmt.length > 100 ? '...' : ''),
           rowCount: result.rowCount
         }
       });
+
+      // Commit in chunks of 10 statements to avoid long-running transactions
+      if (i > 0 && i % 10 === 0) {
+        await client.query('COMMIT');
+        await client.query('BEGIN');
+      }
     } catch (sqlError) {
+      await client.query('ROLLBACK');
       outputProgress({
         status: 'error',
         operation: 'Config SQL Error',
         error: sqlError.message,
         statement: stmt,
         statementNumber: i + 1
       });
       throw sqlError;
     }
   }
+
+  // Commit the final transaction
+  await client.query('COMMIT');
+  } catch (error) {
+    await client.query('ROLLBACK');
+    throw error;
+  }
 
 // Read and execute metrics schema (metrics tables)
@@ -432,11 +485,25 @@ async function resetDatabase() {
   operation: 'Running metrics setup',
   message: 'Creating metrics tables...'
 });
-const metricsSchemaSQL = fs.readFileSync(
-  path.join(__dirname, '../db/metrics-schema-new.sql'),
-  'utf8'
-);
+const metricsSchemaPath = path.join(__dirname, '../db/metrics-schema-new.sql');
+
+// Verify file exists
+if (!fs.existsSync(metricsSchemaPath)) {
+  throw new Error(`Metrics schema file not found at: ${metricsSchemaPath}`);
+}
+
+const metricsSchemaSQL = fs.readFileSync(metricsSchemaPath, 'utf8');
+
+outputProgress({
+  operation: 'Metrics Schema file',
+  message: {
+    path: metricsSchemaPath,
+    exists: fs.existsSync(metricsSchemaPath),
+    size: fs.statSync(metricsSchemaPath).size,
+    firstFewLines: metricsSchemaSQL.split('\n').slice(0, 5).join('\n')
+  }
+});
 
 // Execute metrics schema statements one at a time
 const metricsStatements = splitSQLStatements(metricsSchemaSQL);
 outputProgress({
@@ -450,30 +517,46 @@ async function resetDatabase() {
     }
   });
 
+  // Start a transaction for better error handling
+  await client.query('BEGIN');
+  try {
   for (let i = 0; i < metricsStatements.length; i++) {
     const stmt = metricsStatements[i];
     try {
       const result = await client.query(stmt);
 
       outputProgress({
         operation: 'Metrics SQL Progress',
         message: {
           statement: i + 1,
           total: metricsStatements.length,
           preview: stmt.substring(0, 100) + (stmt.length > 100 ? '...' : ''),
           rowCount: result.rowCount
         }
       });
+
+      // Commit in chunks of 10 statements to avoid long-running transactions
+      if (i > 0 && i % 10 === 0) {
+        await client.query('COMMIT');
+        await client.query('BEGIN');
+      }
     } catch (sqlError) {
+      await client.query('ROLLBACK');
       outputProgress({
         status: 'error',
         operation: 'Metrics SQL Error',
         error: sqlError.message,
         statement: stmt,
         statementNumber: i + 1
       });
       throw sqlError;
     }
   }
+
+  // Commit the final transaction
+  await client.query('COMMIT');
+  } catch (error) {
+    await client.query('ROLLBACK');
+    throw error;
+  }
 
 outputProgress({
@@ -490,6 +573,14 @@ async function resetDatabase() {
   });
   process.exit(1);
 } finally {
+  // Make sure to re-enable foreign key checks if they were disabled
+  try {
+    await client.query('SET session_replication_role = \'origin\'');
+  } catch (e) {
+    console.error('Error re-enabling foreign key checks:', e.message);
+  }
+
+  // Close the database connection
   await client.end();
 }
 }

View File

@@ -31,7 +31,10 @@ const PROTECTED_TABLES = [
   'ai_prompts',
   'ai_validation_performance',
   'templates',
-  'reusable_images'
+  'reusable_images',
+  'imported_daily_inventory',
+  'imported_product_stat_history',
+  'imported_product_current_prices'
 ];
 
 // Split SQL into individual statements

View File

@@ -51,83 +51,67 @@ router.get('/:id', async (req, res) => {
   }
 });
 
-// Get prompt by company
-router.get('/company/:companyId', async (req, res) => {
-  try {
-    const { companyId } = req.params;
-    const pool = req.app.locals.pool;
-    if (!pool) {
-      throw new Error('Database pool not initialized');
-    }
-    const result = await pool.query(`
-      SELECT * FROM ai_prompts
-      WHERE company = $1
-    `, [companyId]);
-
-    if (result.rows.length === 0) {
-      return res.status(404).json({ error: 'AI prompt not found for this company' });
-    }
-
-    res.json(result.rows[0]);
-  } catch (error) {
-    console.error('Error fetching AI prompt by company:', error);
-    res.status(500).json({
-      error: 'Failed to fetch AI prompt by company',
-      details: error instanceof Error ? error.message : 'Unknown error'
-    });
-  }
-});
-
-// Get general prompt
-router.get('/type/general', async (req, res) => {
-  try {
-    const pool = req.app.locals.pool;
-    if (!pool) {
-      throw new Error('Database pool not initialized');
-    }
-    const result = await pool.query(`
-      SELECT * FROM ai_prompts
-      WHERE prompt_type = 'general'
-    `);
-
-    if (result.rows.length === 0) {
-      return res.status(404).json({ error: 'General AI prompt not found' });
-    }
-
-    res.json(result.rows[0]);
-  } catch (error) {
-    console.error('Error fetching general AI prompt:', error);
-    res.status(500).json({
-      error: 'Failed to fetch general AI prompt',
-      details: error instanceof Error ? error.message : 'Unknown error'
-    });
-  }
-});
-
-// Get system prompt
-router.get('/type/system', async (req, res) => {
-  try {
-    const pool = req.app.locals.pool;
-    if (!pool) {
-      throw new Error('Database pool not initialized');
-    }
-    const result = await pool.query(`
-      SELECT * FROM ai_prompts
-      WHERE prompt_type = 'system'
-    `);
-
-    if (result.rows.length === 0) {
-      return res.status(404).json({ error: 'System AI prompt not found' });
-    }
+// Get prompt by type (general, system, company_specific)
+router.get('/by-type', async (req, res) => {
+  try {
+    const { type, company } = req.query;
+    const pool = req.app.locals.pool;
+    if (!pool) {
+      throw new Error('Database pool not initialized');
+    }
+
+    // Validate prompt type
+    if (!type || !['general', 'system', 'company_specific'].includes(type)) {
+      return res.status(400).json({
+        error: 'Valid type query parameter is required (general, system, or company_specific)'
+      });
+    }
+
+    // For company_specific type, company ID is required
+    if (type === 'company_specific' && !company) {
+      return res.status(400).json({
+        error: 'Company ID is required for company_specific prompt type'
+      });
+    }
+
+    // For general and system types, company should not be provided
+    if ((type === 'general' || type === 'system') && company) {
+      return res.status(400).json({
+        error: 'Company ID should not be provided for general or system prompt types'
+      });
+    }
+
+    // Build the query based on the type
+    let query, params;
+    if (type === 'company_specific') {
+      query = 'SELECT * FROM ai_prompts WHERE prompt_type = $1 AND company = $2';
+      params = [type, company];
+    } else {
+      query = 'SELECT * FROM ai_prompts WHERE prompt_type = $1';
+      params = [type];
+    }
+
+    // Execute the query
+    const result = await pool.query(query, params);
+
+    // Check if any prompt was found
+    if (result.rows.length === 0) {
+      let errorMessage;
+      if (type === 'company_specific') {
+        errorMessage = `AI prompt not found for company ${company}`;
+      } else {
+        errorMessage = `${type.charAt(0).toUpperCase() + type.slice(1)} AI prompt not found`;
+      }
+      return res.status(404).json({ error: errorMessage });
+    }
 
+    // Return the first matching prompt
     res.json(result.rows[0]);
   } catch (error) {
-    console.error('Error fetching system AI prompt:', error);
+    console.error('Error fetching AI prompt by type:', error);
     res.status(500).json({
-      error: 'Failed to fetch system AI prompt',
+      error: 'Failed to fetch AI prompt',
       details: error instanceof Error ? error.message : 'Unknown error'
     });
   }
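The consolidated endpoint's validation rules can be sketched as a standalone function. The function name and return codes below are hypothetical; the real handler returns HTTP 400/404 responses instead:

```javascript
// Mirrors the /by-type validation order: type must be a known value,
// company is required for company_specific, and must be absent for
// general/system prompt types.
function validateByTypeQuery({ type, company } = {}) {
  const validTypes = ['general', 'system', 'company_specific'];
  if (!type || !validTypes.includes(type)) return 'invalid_type';
  if (type === 'company_specific' && !company) return 'company_required';
  if ((type === 'general' || type === 'system') && company) return 'company_not_allowed';
  return 'ok';
}
```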

View File

@@ -6,6 +6,7 @@ const path = require("path");
 const dotenv = require("dotenv");
 const mysql = require('mysql2/promise');
 const { Client } = require('ssh2');
+const { getDbConnection } = require('../utils/dbConnection'); // Import the optimized connection function
 
 // Ensure environment variables are loaded
 dotenv.config({ path: path.join(__dirname, "../../.env") });
@@ -18,50 +19,6 @@ if (!process.env.OPENAI_API_KEY) {
   console.error("Warning: OPENAI_API_KEY is not set in environment variables");
 }
 
-// Helper function to setup SSH tunnel to production database
-async function setupSshTunnel() {
-  const sshConfig = {
-    host: process.env.PROD_SSH_HOST,
-    port: process.env.PROD_SSH_PORT || 22,
-    username: process.env.PROD_SSH_USER,
-    privateKey: process.env.PROD_SSH_KEY_PATH
-      ? require('fs').readFileSync(process.env.PROD_SSH_KEY_PATH)
-      : undefined,
-    compress: true
-  };
-
-  const dbConfig = {
-    host: process.env.PROD_DB_HOST || 'localhost',
-    user: process.env.PROD_DB_USER,
-    password: process.env.PROD_DB_PASSWORD,
-    database: process.env.PROD_DB_NAME,
-    port: process.env.PROD_DB_PORT || 3306,
-    timezone: 'Z'
-  };
-
-  return new Promise((resolve, reject) => {
-    const ssh = new Client();
-    ssh.on('error', (err) => {
-      console.error('SSH connection error:', err);
-      reject(err);
-    });
-    ssh.on('ready', () => {
-      ssh.forwardOut(
-        '127.0.0.1',
-        0,
-        dbConfig.host,
-        dbConfig.port,
-        (err, stream) => {
-          if (err) reject(err);
-          resolve({ ssh, stream, dbConfig });
-        }
-      );
-    }).connect(sshConfig);
-  });
-}
-
 // Debug endpoint for viewing prompt
 router.post("/debug", async (req, res) => {
   try {
@@ -195,16 +152,12 @@ async function generateDebugResponse(productsToUse, res) {
 // Load taxonomy data first
 console.log("Loading taxonomy data...");
 try {
-  // Setup MySQL connection via SSH tunnel
-  const tunnel = await setupSshTunnel();
-  ssh = tunnel.ssh;
-
-  mysqlConnection = await mysql.createConnection({
-    ...tunnel.dbConfig,
-    stream: tunnel.stream
-  });
-  console.log("MySQL connection established successfully");
+  // Use optimized database connection
+  const { connection, ssh: connSsh } = await getDbConnection();
+  mysqlConnection = connection;
+  ssh = connSsh;
+  console.log("MySQL connection established successfully using optimized connection");
 
   taxonomy = await getTaxonomyData(mysqlConnection);
   console.log("Successfully loaded taxonomy data");
@@ -218,10 +171,6 @@ async function generateDebugResponse(productsToUse, res) {
     errno: taxonomyError.errno || null,
     sql: taxonomyError.sql || null,
   });
-} finally {
-  // Make sure we close the connection
-  if (mysqlConnection) await mysqlConnection.end();
-  if (ssh) ssh.end();
 }
 
 // Verify the taxonomy data structure
@@ -282,11 +231,8 @@ async function generateDebugResponse(productsToUse, res) {
 console.log("Loading prompt...");
 
 // Setup a new connection for loading the prompt
-const promptTunnel = await setupSshTunnel();
-const promptConnection = await mysql.createConnection({
-  ...promptTunnel.dbConfig,
-  stream: promptTunnel.stream
-});
+// Use optimized connection instead of creating a new one
+const { connection: promptConnection } = await getDbConnection();
 
 try {
   // Get the local PostgreSQL pool to fetch prompts
@@ -296,7 +242,7 @@ async function generateDebugResponse(productsToUse, res) {
   throw new Error("Database connection not available");
 }
 
-// First, fetch the system prompt
+// First, fetch the system prompt using the consolidated endpoint approach
 const systemPromptResult = await pool.query(`
   SELECT * FROM ai_prompts
   WHERE prompt_type = 'system'
@@ -311,7 +257,7 @@ async function generateDebugResponse(productsToUse, res) {
   console.warn("⚠️ No system prompt found in database, will use default");
 }
 
-// Then, fetch the general prompt
+// Then, fetch the general prompt using the consolidated endpoint approach
 const generalPromptResult = await pool.query(`
   SELECT * FROM ai_prompts
   WHERE prompt_type = 'general'
@@ -458,7 +404,6 @@ async function generateDebugResponse(productsToUse, res) {
   return response;
 } finally {
   if (promptConnection) await promptConnection.end();
-  if (promptTunnel.ssh) promptTunnel.ssh.end();
 }
 } catch (error) {
   console.error("Error generating debug response:", error);
@@ -645,7 +590,7 @@ async function loadPrompt(connection, productsToValidate = null, appPool = null)
   throw new Error("Database connection not available");
 }
 
-// Fetch the system prompt
+// Fetch the system prompt using the consolidated endpoint approach
 const systemPromptResult = await pool.query(`
   SELECT * FROM ai_prompts
   WHERE prompt_type = 'system'
@@ -662,7 +607,7 @@ async function loadPrompt(connection, productsToValidate = null, appPool = null)
   console.warn("⚠️ No system prompt found in database, using default");
 }
 
-// Fetch the general prompt
+// Fetch the general prompt using the consolidated endpoint approach
 const generalPromptResult = await pool.query(`
   SELECT * FROM ai_prompts
   WHERE prompt_type = 'general'
@@ -926,15 +871,11 @@ router.post("/validate", async (req, res) => {
 let promptLength = 0; // Track prompt length for performance metrics
 
 try {
-  // Setup MySQL connection via SSH tunnel
-  console.log("🔄 Setting up connection to production database...");
-  const tunnel = await setupSshTunnel();
-  ssh = tunnel.ssh;
-
-  connection = await mysql.createConnection({
-    ...tunnel.dbConfig,
-    stream: tunnel.stream
-  });
+  // Use the optimized connection utility instead of direct SSH tunnel
+  console.log("🔄 Setting up connection to production database using optimized connection...");
+  const { ssh: connSsh, connection: connDB } = await getDbConnection();
+  ssh = connSsh;
+  connection = connDB;
 
   console.log("🔄 MySQL connection established successfully");
@@ -1238,14 +1179,11 @@ router.get("/test-taxonomy", async (req, res) => {
 let connection = null;
 
 try {
-  // Setup MySQL connection via SSH tunnel
-  const tunnel = await setupSshTunnel();
-  ssh = tunnel.ssh;
-
-  connection = await mysql.createConnection({
-    ...tunnel.dbConfig,
-    stream: tunnel.stream
-  });
+  // Use the optimized connection utility instead of direct SSH tunnel
+  console.log("🔄 Setting up connection to production database using optimized connection...");
+  const { ssh: connSsh, connection: connDB } = await getDbConnection();
+  ssh = connSsh;
+  connection = connDB;
 
   console.log("MySQL connection established successfully for test");
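Since the same acquire/cleanup pattern now repeats across `/debug`, `/validate`, and `/test-taxonomy`, it could be centralized further. A hedged sketch only: `withDbConnection` is hypothetical and `getDbConnection` is passed in as a stand-in rather than imported from the real utility:

```javascript
// Hypothetical wrapper around the optimized connection: runs `work` with a
// live connection and guarantees both the MySQL connection and the SSH
// tunnel are closed afterwards, even if `work` throws.
async function withDbConnection(getDbConnection, work) {
  const { connection, ssh } = await getDbConnection();
  try {
    return await work(connection);
  } finally {
    if (connection) await connection.end();
    if (ssh) ssh.end();
  }
}
```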

View File

@@ -7,37 +7,33 @@ router.get('/stats', async (req, res) => {
   const pool = req.app.locals.pool;
   const { rows: [results] } = await pool.query(`
-    SELECT
-      COALESCE(
-        ROUND(
-          (SUM(o.price * o.quantity - p.cost_price * o.quantity) /
-          NULLIF(SUM(o.price * o.quantity), 0) * 100)::numeric, 1
-        ),
-        0
-      ) as profitMargin,
-      COALESCE(
-        ROUND(
-          (AVG(p.price / NULLIF(p.cost_price, 0) - 1) * 100)::numeric, 1
-        ),
-        0
-      ) as averageMarkup,
-      COALESCE(
-        ROUND(
-          (SUM(o.quantity) / NULLIF(AVG(p.stock_quantity), 0))::numeric, 2
-        ),
-        0
-      ) as stockTurnoverRate,
-      COALESCE(COUNT(DISTINCT p.vendor), 0) as vendorCount,
-      COALESCE(COUNT(DISTINCT p.categories), 0) as categoryCount,
-      COALESCE(
-        ROUND(
-          AVG(o.price * o.quantity)::numeric, 2
-        ),
-        0
-      ) as averageOrderValue
-    FROM products p
-    LEFT JOIN orders o ON p.pid = o.pid
-    WHERE o.date >= CURRENT_DATE - INTERVAL '30 days'
+    WITH vendor_count AS (
+      SELECT COUNT(DISTINCT vendor_name) AS count
+      FROM vendor_metrics
+    ),
+    category_count AS (
+      SELECT COUNT(DISTINCT category_id) AS count
+      FROM category_metrics
+    ),
+    metrics_summary AS (
+      SELECT
+        AVG(margin_30d) AS avg_profit_margin,
+        AVG(markup_30d) AS avg_markup,
+        AVG(stockturn_30d) AS avg_stock_turnover,
+        AVG(asp_30d) AS avg_order_value
+      FROM product_metrics
+      WHERE sales_30d > 0
+    )
+    SELECT
+      COALESCE(ms.avg_profit_margin, 0) AS profitMargin,
+      COALESCE(ms.avg_markup, 0) AS averageMarkup,
+      COALESCE(ms.avg_stock_turnover, 0) AS stockTurnoverRate,
+      COALESCE(vc.count, 0) AS vendorCount,
+      COALESCE(cc.count, 0) AS categoryCount,
+      COALESCE(ms.avg_order_value, 0) AS averageOrderValue
+    FROM metrics_summary ms
+    CROSS JOIN vendor_count vc
+    CROSS JOIN category_count cc
   `);
 
 // Ensure all values are numbers
@@ -84,43 +80,53 @@ router.get('/profit', async (req, res) => {
   JOIN category_path cp ON c.parent_id = cp.cat_id
 )
 SELECT
-  c.name as category,
-  cp.path as categoryPath,
-  ROUND(
-    (SUM(o.price * o.quantity - p.cost_price * o.quantity) /
-    NULLIF(SUM(o.price * o.quantity), 0) * 100)::numeric, 1
-  ) as profitMargin,
-  ROUND(SUM(o.price * o.quantity)::numeric, 3) as revenue,
-  ROUND(SUM(p.cost_price * o.quantity)::numeric, 3) as cost
-FROM products p
-LEFT JOIN orders o ON p.pid = o.pid
-JOIN product_categories pc ON p.pid = pc.pid
-JOIN categories c ON pc.cat_id = c.cat_id
-JOIN category_path cp ON c.cat_id = cp.cat_id
-WHERE o.date >= CURRENT_DATE - INTERVAL '30 days'
-GROUP BY c.name, cp.path
-ORDER BY profitMargin DESC
+  cm.category_name as category,
+  COALESCE(cp.path, cm.category_name) as categorypath,
+  cm.avg_margin_30d as profitmargin,
+  cm.revenue_30d as revenue,
+  cm.cogs_30d as cost
+FROM category_metrics cm
+LEFT JOIN category_path cp ON cm.category_id = cp.cat_id
+WHERE cm.revenue_30d > 0
+ORDER BY cm.revenue_30d DESC
 LIMIT 10
 `);
 
-// Get profit margin trend over time
+// Get profit margin over time
 const { rows: overTime } = await pool.query(`
-  SELECT
-    to_char(o.date, 'YYYY-MM-DD') as date,
-    ROUND(
-      (SUM(o.price * o.quantity - p.cost_price * o.quantity) /
-      NULLIF(SUM(o.price * o.quantity), 0) * 100)::numeric, 1
-    ) as profitMargin,
-    ROUND(SUM(o.price * o.quantity)::numeric, 3) as revenue,
-    ROUND(SUM(p.cost_price * o.quantity)::numeric, 3) as cost
-  FROM products p
-  LEFT JOIN orders o ON p.pid = o.pid
-  WHERE o.date >= CURRENT_DATE - INTERVAL '30 days'
-  GROUP BY to_char(o.date, 'YYYY-MM-DD')
-  ORDER BY date
+  WITH time_series AS (
+    SELECT
+      date_trunc('day', generate_series(
+        CURRENT_DATE - INTERVAL '30 days',
+        CURRENT_DATE,
+        '1 day'::interval
+      ))::date AS date
+  ),
+  daily_profits AS (
+    SELECT
+      snapshot_date as date,
+      SUM(net_revenue) as revenue,
+      SUM(cogs) as cost,
+      CASE
+        WHEN SUM(net_revenue) > 0
+        THEN (SUM(net_revenue - cogs) / SUM(net_revenue)) * 100
+        ELSE 0
+      END as profit_margin
+    FROM daily_product_snapshots
+    WHERE snapshot_date >= CURRENT_DATE - INTERVAL '30 days'
+    GROUP BY snapshot_date
+  )
+  SELECT
+    to_char(ts.date, 'YYYY-MM-DD') as date,
+    COALESCE(dp.profit_margin, 0) as profitmargin,
+    COALESCE(dp.revenue, 0) as revenue,
+    COALESCE(dp.cost, 0) as cost
+  FROM time_series ts
+  LEFT JOIN daily_profits dp ON ts.date = dp.date
+  ORDER BY ts.date
 `);
 
-// Get top performing products with category paths
+// Get top performing products by profit margin
 const { rows: topProducts } = await pool.query(`
   WITH RECURSIVE category_path AS (
     SELECT
@@ -140,26 +146,28 @@ router.get('/profit', async (req, res) => {
     (cp.path || ' > ' || c.name)::text
   FROM categories c
   JOIN category_path cp ON c.parent_id = cp.cat_id
+),
+product_categories AS (
+  SELECT
+    pc.pid,
+    c.name as category,
+    COALESCE(cp.path, c.name) as categorypath
+  FROM product_categories pc
+  JOIN categories c ON pc.cat_id = c.cat_id
+  LEFT JOIN category_path cp ON c.cat_id = cp.cat_id
 )
 SELECT
-  p.title as product,
-  c.name as category,
-  cp.path as categoryPath,
-  ROUND(
-    (SUM(o.price * o.quantity - p.cost_price * o.quantity) /
-    NULLIF(SUM(o.price * o.quantity), 0) * 100)::numeric, 1
-  ) as profitMargin,
-  ROUND(SUM(o.price * o.quantity)::numeric, 3) as revenue,
-  ROUND(SUM(p.cost_price * o.quantity)::numeric, 3) as cost
-FROM products p
-LEFT JOIN orders o ON p.pid = o.pid
-JOIN product_categories pc ON p.pid = pc.pid
-JOIN categories c ON pc.cat_id = c.cat_id
-JOIN category_path cp ON c.cat_id = cp.cat_id
-WHERE o.date >= CURRENT_DATE - INTERVAL '30 days'
-GROUP BY p.pid, p.title, c.name, cp.path
-HAVING SUM(o.price * o.quantity) > 0
-ORDER BY profitMargin DESC
+  pm.title as product,
+  COALESCE(pc.category, 'Uncategorized') as category,
+  COALESCE(pc.categorypath, 'Uncategorized') as categorypath,
+  pm.margin_30d as profitmargin,
+  pm.revenue_30d as revenue,
+  pm.cogs_30d as cost
+FROM product_metrics pm
+LEFT JOIN product_categories pc ON pm.pid = pc.pid
+WHERE pm.revenue_30d > 100
+  AND pm.margin_30d > 0
+ORDER BY pm.margin_30d DESC
 LIMIT 10
 `);
@@ -184,93 +192,52 @@ router.get('/vendors', async (req, res) => {
    console.log('Fetching vendor performance data...');
-    // First check if we have any vendors with sales
-    const { rows: [checkData] } = await pool.query(`
-      SELECT COUNT(DISTINCT p.vendor) as vendor_count,
-             COUNT(DISTINCT o.order_number) as order_count
-      FROM products p
-      LEFT JOIN orders o ON p.pid = o.pid
-      WHERE p.vendor IS NOT NULL
-    `);
-    console.log('Vendor data check:', checkData);
-    // Get vendor performance metrics
+    // Get vendor performance metrics from the vendor_metrics table
    const { rows: rawPerformance } = await pool.query(`
-      WITH monthly_sales AS (
-        SELECT
-          p.vendor,
-          ROUND(SUM(CASE
-            WHEN o.date >= CURRENT_DATE - INTERVAL '30 days'
-            THEN o.price * o.quantity
-            ELSE 0
-          END)::numeric, 3) as current_month,
-          ROUND(SUM(CASE
-            WHEN o.date >= CURRENT_DATE - INTERVAL '60 days'
-            AND o.date < CURRENT_DATE - INTERVAL '30 days'
-            THEN o.price * o.quantity
-            ELSE 0
-          END)::numeric, 3) as previous_month
-        FROM products p
-        LEFT JOIN orders o ON p.pid = o.pid
-        WHERE p.vendor IS NOT NULL
-          AND o.date >= CURRENT_DATE - INTERVAL '60 days'
-        GROUP BY p.vendor
-      )
      SELECT
-        p.vendor,
-        ROUND(SUM(o.price * o.quantity)::numeric, 3) as sales_volume,
-        COALESCE(ROUND(
-          (SUM(o.price * o.quantity - p.cost_price * o.quantity) /
-           NULLIF(SUM(o.price * o.quantity), 0) * 100)::numeric, 1
-        ), 0) as profit_margin,
-        COALESCE(ROUND(
-          (SUM(o.quantity) / NULLIF(AVG(p.stock_quantity), 0))::numeric, 1
-        ), 0) as stock_turnover,
-        COUNT(DISTINCT p.pid) as product_count,
-        ROUND(
-          ((ms.current_month / NULLIF(ms.previous_month, 0)) - 1) * 100,
-          1
-        ) as growth
-      FROM products p
-      LEFT JOIN orders o ON p.pid = o.pid
-      LEFT JOIN monthly_sales ms ON p.vendor = ms.vendor
-      WHERE p.vendor IS NOT NULL
-        AND o.date >= CURRENT_DATE - INTERVAL '30 days'
-      GROUP BY p.vendor, ms.current_month, ms.previous_month
-      ORDER BY sales_volume DESC
-      LIMIT 10
+        vendor_name as vendor,
+        revenue_30d as sales_volume,
+        avg_margin_30d as profit_margin,
+        COALESCE(
+          sales_30d / NULLIF(current_stock_units, 0),
+          0
+        ) as stock_turnover,
+        product_count,
+        -- Use an estimate of growth based on 7-day vs 30-day revenue
+        CASE
+          WHEN revenue_30d > 0
+          THEN ((revenue_7d * 4.0) / revenue_30d - 1) * 100
+          ELSE 0
+        END as growth
+      FROM vendor_metrics
+      WHERE revenue_30d > 0
+      ORDER BY revenue_30d DESC
+      LIMIT 20
    `);
-    // Transform to camelCase properties for frontend consumption
-    const performance = rawPerformance.map(item => ({
-      vendor: item.vendor,
-      salesVolume: Number(item.sales_volume) || 0,
-      profitMargin: Number(item.profit_margin) || 0,
-      stockTurnover: Number(item.stock_turnover) || 0,
-      productCount: Number(item.product_count) || 0,
-      growth: Number(item.growth) || 0
-    }));
+    // Format the performance data
+    const performance = rawPerformance.map(vendor => ({
+      vendor: vendor.vendor,
+      salesVolume: Number(vendor.sales_volume) || 0,
+      profitMargin: Number(vendor.profit_margin) || 0,
+      stockTurnover: Number(vendor.stock_turnover) || 0,
+      productCount: Number(vendor.product_count) || 0,
+      growth: Number(vendor.growth) || 0
+    }));
    // Get vendor comparison metrics (sales per product vs margin)
    const { rows: rawComparison } = await pool.query(`
      SELECT
-        p.vendor,
-        COALESCE(ROUND(
-          SUM(o.price * o.quantity) / NULLIF(COUNT(DISTINCT p.pid), 0),
-          2
-        ), 0) as sales_per_product,
-        COALESCE(ROUND(
-          AVG((p.price - p.cost_price) / NULLIF(p.cost_price, 0) * 100),
-          2
-        ), 0) as average_margin,
-        COUNT(DISTINCT p.pid) as size
-      FROM products p
-      LEFT JOIN orders o ON p.pid = o.pid
-      WHERE p.vendor IS NOT NULL
-        AND o.date >= CURRENT_DATE - INTERVAL '30 days'
-      GROUP BY p.vendor
-      HAVING COUNT(DISTINCT p.pid) > 0
+        vendor_name as vendor,
+        CASE
+          WHEN active_product_count > 0
+          THEN revenue_30d / active_product_count
+          ELSE 0
+        END as sales_per_product,
+        avg_margin_30d as average_margin,
+        product_count as size
+      FROM vendor_metrics
+      WHERE active_product_count > 0
      ORDER BY sales_per_product DESC
      LIMIT 10
    `);
@@ -294,58 +261,7 @@ router.get('/vendors', async (req, res) => {
    });
  } catch (error) {
    console.error('Error fetching vendor performance:', error);
-    console.error('Error details:', error.message);
-    // Return dummy data on error with complete structure
-    res.json({
-      performance: [
-        {
-          vendor: "Example Vendor 1",
-          salesVolume: 10000,
-          profitMargin: 25.5,
-          stockTurnover: 3.2,
-          productCount: 15,
-          growth: 12.3
-        },
-        {
-          vendor: "Example Vendor 2",
-          salesVolume: 8500,
-          profitMargin: 22.8,
-          stockTurnover: 2.9,
-          productCount: 12,
-          growth: 8.7
-        },
-        {
-          vendor: "Example Vendor 3",
-          salesVolume: 6200,
-          profitMargin: 19.5,
-          stockTurnover: 2.5,
-          productCount: 8,
-          growth: 5.2
-        }
-      ],
-      comparison: [
-        {
-          vendor: "Example Vendor 1",
-          salesPerProduct: 650,
-          averageMargin: 35.2,
-          size: 15
-        },
-        {
-          vendor: "Example Vendor 2",
-          salesPerProduct: 710,
-          averageMargin: 28.5,
-          size: 12
-        },
-        {
-          vendor: "Example Vendor 3",
-          salesPerProduct: 770,
-          averageMargin: 22.8,
-          size: 8
-        }
-      ],
-      trends: []
-    });
+    res.status(500).json({ error: 'Failed to fetch vendor performance data' });
  }
});
@@ -353,108 +269,119 @@ router.get('/vendors', async (req, res) => {
router.get('/stock', async (req, res) => {
  try {
    const pool = req.app.locals.pool;
-    console.log('Fetching stock analysis data...');
-    // Get global configuration values
-    const { rows: configs } = await pool.query(`
-      SELECT
-        st.low_stock_threshold,
-        tc.calculation_period_days as turnover_period
-      FROM stock_thresholds st
-      CROSS JOIN turnover_config tc
-      WHERE st.id = 1 AND tc.id = 1
-    `);
-    const config = configs[0] || {
-      low_stock_threshold: 5,
-      turnover_period: 30
-    };
+    // Use the new metrics tables to get data
    // Get turnover by category
    const { rows: turnoverByCategory } = await pool.query(`
-      SELECT
-        c.name as category,
-        ROUND((SUM(o.quantity) / NULLIF(AVG(p.stock_quantity), 0))::numeric, 1) as turnoverRate,
-        ROUND(AVG(p.stock_quantity)::numeric, 0) as averageStock,
-        SUM(o.quantity) as totalSales
-      FROM products p
-      LEFT JOIN orders o ON p.pid = o.pid
-      JOIN product_categories pc ON p.pid = pc.pid
-      JOIN categories c ON pc.cat_id = c.cat_id
-      WHERE o.date >= CURRENT_DATE - INTERVAL '${config.turnover_period} days'
-      GROUP BY c.name
-      HAVING ROUND((SUM(o.quantity) / NULLIF(AVG(p.stock_quantity), 0))::numeric, 1) > 0
-      ORDER BY turnoverRate DESC
-      LIMIT 10
-    `);
-
-    // Get stock levels over time
-    const { rows: stockLevels } = await pool.query(`
-      SELECT
-        to_char(o.date, 'YYYY-MM-DD') as date,
-        SUM(CASE WHEN p.stock_quantity > $1 THEN 1 ELSE 0 END) as inStock,
-        SUM(CASE WHEN p.stock_quantity <= $1 AND p.stock_quantity > 0 THEN 1 ELSE 0 END) as lowStock,
-        SUM(CASE WHEN p.stock_quantity = 0 THEN 1 ELSE 0 END) as outOfStock
-      FROM products p
-      LEFT JOIN orders o ON p.pid = o.pid
-      WHERE o.date >= CURRENT_DATE - INTERVAL '${config.turnover_period} days'
-      GROUP BY to_char(o.date, 'YYYY-MM-DD')
-      ORDER BY date
-    `, [config.low_stock_threshold]);
-    // Get critical stock items
-    const { rows: criticalItems } = await pool.query(`
-      WITH product_thresholds AS (
-        SELECT
-          p.pid,
-          COALESCE(
-            (SELECT reorder_days
-             FROM stock_thresholds st
-             WHERE st.vendor = p.vendor LIMIT 1),
-            (SELECT reorder_days
-             FROM stock_thresholds st
-             WHERE st.vendor IS NULL LIMIT 1),
-            14
-          ) as reorder_days
-        FROM products p
-      )
-      SELECT
-        p.title as product,
-        p.SKU as sku,
-        p.stock_quantity as stockQuantity,
-        GREATEST(ROUND((AVG(o.quantity) * pt.reorder_days)::numeric), $1) as reorderPoint,
-        ROUND((SUM(o.quantity) / NULLIF(p.stock_quantity, 0))::numeric, 1) as turnoverRate,
-        CASE
-          WHEN p.stock_quantity = 0 THEN 0
-          ELSE ROUND((p.stock_quantity / NULLIF((SUM(o.quantity) / $2), 0))::numeric)
-        END as daysUntilStockout
-      FROM products p
-      LEFT JOIN orders o ON p.pid = o.pid
-      JOIN product_thresholds pt ON p.pid = pt.pid
-      WHERE o.date >= CURRENT_DATE - INTERVAL '${config.turnover_period} days'
-        AND p.managing_stock = true
-      GROUP BY p.pid, pt.reorder_days
-      HAVING
-        CASE
-          WHEN p.stock_quantity = 0 THEN 0
-          ELSE ROUND((p.stock_quantity / NULLIF((SUM(o.quantity) / $2), 0))::numeric)
-        END < $3
-        AND
-        CASE
-          WHEN p.stock_quantity = 0 THEN 0
-          ELSE ROUND((p.stock_quantity / NULLIF((SUM(o.quantity) / $2), 0))::numeric)
-        END >= 0
-      ORDER BY daysUntilStockout
-      LIMIT 10
-    `, [
-      config.low_stock_threshold,
-      config.turnover_period,
-      config.turnover_period
-    ]);
-
-    res.json({ turnoverByCategory, stockLevels, criticalItems });
+      WITH category_metrics_with_path AS (
+        WITH RECURSIVE category_path AS (
+          SELECT
+            c.cat_id,
+            c.name,
+            c.parent_id,
+            c.name::text as path
+          FROM categories c
+          WHERE c.parent_id IS NULL
+
+          UNION ALL
+
+          SELECT
+            c.cat_id,
+            c.name,
+            c.parent_id,
+            (cp.path || ' > ' || c.name)::text
+          FROM categories c
+          JOIN category_path cp ON c.parent_id = cp.cat_id
+        )
+        SELECT
+          cm.category_id,
+          cm.category_name,
+          cp.path as category_path,
+          cm.current_stock_units,
+          cm.sales_30d,
+          cm.stock_turn_30d
+        FROM category_metrics cm
+        LEFT JOIN category_path cp ON cm.category_id = cp.cat_id
+        WHERE cm.sales_30d > 0
+      )
+      SELECT
+        category_name as category,
+        COALESCE(stock_turn_30d, 0) as turnoverRate,
+        current_stock_units as averageStock,
+        sales_30d as totalSales
+      FROM category_metrics_with_path
+      ORDER BY stock_turn_30d DESC NULLS LAST
+      LIMIT 10
+    `);
+
+    // Get stock levels over time (last 30 days)
+    const { rows: stockLevels } = await pool.query(`
+      WITH date_range AS (
+        SELECT generate_series(
+          CURRENT_DATE - INTERVAL '30 days',
+          CURRENT_DATE,
+          '1 day'::interval
+        )::date AS date
+      ),
+      daily_stock_counts AS (
+        SELECT
+          snapshot_date,
+          COUNT(DISTINCT pid) as total_products,
+          COUNT(DISTINCT CASE WHEN eod_stock_quantity > 5 THEN pid END) as in_stock,
+          COUNT(DISTINCT CASE WHEN eod_stock_quantity <= 5 AND eod_stock_quantity > 0 THEN pid END) as low_stock,
+          COUNT(DISTINCT CASE WHEN eod_stock_quantity = 0 THEN pid END) as out_of_stock
+        FROM daily_product_snapshots
+        WHERE snapshot_date >= CURRENT_DATE - INTERVAL '30 days'
+        GROUP BY snapshot_date
+      )
+      SELECT
+        to_char(dr.date, 'YYYY-MM-DD') as date,
+        COALESCE(dsc.in_stock, 0) as inStock,
+        COALESCE(dsc.low_stock, 0) as lowStock,
+        COALESCE(dsc.out_of_stock, 0) as outOfStock
+      FROM date_range dr
+      LEFT JOIN daily_stock_counts dsc ON dr.date = dsc.snapshot_date
+      ORDER BY dr.date
+    `);
+
+    // Get critical items (products that need reordering)
+    const { rows: criticalItems } = await pool.query(`
+      SELECT
+        pm.title as product,
+        pm.sku as sku,
+        pm.current_stock as stockQuantity,
+        COALESCE(pm.config_safety_stock, 0) as reorderPoint,
+        COALESCE(pm.stockturn_30d, 0) as turnoverRate,
+        CASE
+          WHEN pm.sales_velocity_daily > 0
+          THEN ROUND(pm.current_stock / pm.sales_velocity_daily)
+          ELSE 999
+        END as daysUntilStockout
+      FROM product_metrics pm
+      WHERE pm.is_visible = true
+        AND pm.is_replenishable = true
+        AND pm.sales_30d > 0
+        AND pm.current_stock <= pm.config_safety_stock * 2
+      ORDER BY
+        CASE
+          WHEN pm.sales_velocity_daily > 0
+          THEN pm.current_stock / pm.sales_velocity_daily
+          ELSE 999
+        END ASC,
+        pm.revenue_30d DESC
+      LIMIT 10
+    `);
+
+    res.json({
+      turnoverByCategory,
+      stockLevels,
+      criticalItems
+    });
  } catch (error) {
    console.error('Error fetching stock analysis:', error);
-    res.status(500).json({ error: 'Failed to fetch stock analysis' });
+    res.status(500).json({ error: 'Failed to fetch stock analysis', details: error.message });
  }
});
@@ -685,99 +612,4 @@ router.get('/categories', async (req, res) => {
  }
});
// Forecast endpoint
router.get('/forecast', async (req, res) => {
try {
const { brand, startDate, endDate } = req.query;
const pool = req.app.locals.pool;
const [results] = await pool.query(`
WITH RECURSIVE category_path AS (
SELECT
c.cat_id,
c.name,
c.parent_id,
CAST(c.name AS CHAR(1000)) as path
FROM categories c
WHERE c.parent_id IS NULL
UNION ALL
SELECT
c.cat_id,
c.name,
c.parent_id,
CONCAT(cp.path, ' > ', c.name)
FROM categories c
JOIN category_path cp ON c.parent_id = cp.cat_id
),
category_metrics AS (
SELECT
c.cat_id,
c.name as category_name,
cp.path,
p.brand,
COUNT(DISTINCT p.pid) as num_products,
CAST(COALESCE(ROUND(SUM(o.quantity) / DATEDIFF(?, ?), 2), 0) AS DECIMAL(15,3)) as avg_daily_sales,
COALESCE(SUM(o.quantity), 0) as total_sold,
CAST(COALESCE(ROUND(SUM(o.quantity) / COUNT(DISTINCT p.pid), 2), 0) AS DECIMAL(15,3)) as avgTotalSold,
CAST(COALESCE(ROUND(AVG(o.price), 2), 0) AS DECIMAL(15,3)) as avg_price
FROM categories c
JOIN product_categories pc ON c.cat_id = pc.cat_id
JOIN products p ON pc.pid = p.pid
JOIN category_path cp ON c.cat_id = cp.cat_id
LEFT JOIN product_metrics pmet ON p.pid = pmet.pid
LEFT JOIN orders o ON p.pid = o.pid
AND o.date BETWEEN ? AND ?
AND o.canceled = false
WHERE p.brand = ?
AND pmet.first_received_date BETWEEN ? AND ?
GROUP BY c.cat_id, c.name, cp.path, p.brand
),
product_details AS (
SELECT
p.pid,
p.title,
p.SKU,
p.stock_quantity,
pc.cat_id,
pmet.first_received_date,
COALESCE(SUM(o.quantity), 0) as total_sold,
CAST(COALESCE(ROUND(AVG(o.price), 2), 0) AS DECIMAL(15,3)) as avg_price
FROM products p
JOIN product_categories pc ON p.pid = pc.pid
JOIN product_metrics pmet ON p.pid = pmet.pid
LEFT JOIN orders o ON p.pid = o.pid
AND o.date BETWEEN ? AND ?
AND o.canceled = false
WHERE p.brand = ?
AND pmet.first_received_date BETWEEN ? AND ?
GROUP BY p.pid, p.title, p.SKU, p.stock_quantity, pc.cat_id, pmet.first_received_date
)
SELECT
cm.*,
JSON_ARRAYAGG(
JSON_OBJECT(
'pid', pd.pid,
'title', pd.title,
'SKU', pd.SKU,
'stock_quantity', pd.stock_quantity,
'total_sold', pd.total_sold,
'avg_price', pd.avg_price,
'first_received_date', DATE_FORMAT(pd.first_received_date, '%Y-%m-%d')
)
) as products
FROM category_metrics cm
JOIN product_details pd ON cm.cat_id = pd.cat_id
GROUP BY cm.cat_id, cm.category_name, cm.path, cm.brand, cm.num_products, cm.avg_daily_sales, cm.total_sold, cm.avgTotalSold, cm.avg_price
ORDER BY cm.total_sold DESC
`, [endDate, startDate, startDate, endDate, brand, startDate, endDate, startDate, endDate, brand, startDate, endDate]);
res.json(results);
} catch (error) {
console.error('Error fetching forecast data:', error);
res.status(500).json({ error: 'Failed to fetch forecast data' });
}
});
module.exports = router;
@@ -321,169 +321,5 @@ router.post('/vendors/:vendor/reset', async (req, res) => {
  }
});
// ===== LEGACY ENDPOINTS =====
// These are kept for backward compatibility but will be removed in future versions
// Get all configuration values
router.get('/', async (req, res) => {
const pool = req.app.locals.pool;
try {
console.log('[Config Route] Fetching configuration values...');
const { rows: stockThresholds } = await pool.query('SELECT * FROM stock_thresholds WHERE id = 1');
console.log('[Config Route] Stock thresholds:', stockThresholds);
const { rows: leadTimeThresholds } = await pool.query('SELECT * FROM lead_time_thresholds WHERE id = 1');
console.log('[Config Route] Lead time thresholds:', leadTimeThresholds);
const { rows: salesVelocityConfig } = await pool.query('SELECT * FROM sales_velocity_config WHERE id = 1');
console.log('[Config Route] Sales velocity config:', salesVelocityConfig);
const { rows: abcConfig } = await pool.query('SELECT * FROM abc_classification_config WHERE id = 1');
console.log('[Config Route] ABC config:', abcConfig);
const { rows: safetyStockConfig } = await pool.query('SELECT * FROM safety_stock_config WHERE id = 1');
console.log('[Config Route] Safety stock config:', safetyStockConfig);
const { rows: turnoverConfig } = await pool.query('SELECT * FROM turnover_config WHERE id = 1');
console.log('[Config Route] Turnover config:', turnoverConfig);
const response = {
stockThresholds: stockThresholds[0],
leadTimeThresholds: leadTimeThresholds[0],
salesVelocityConfig: salesVelocityConfig[0],
abcConfig: abcConfig[0],
safetyStockConfig: safetyStockConfig[0],
turnoverConfig: turnoverConfig[0]
};
console.log('[Config Route] Sending response:', response);
res.json(response);
} catch (error) {
console.error('[Config Route] Error fetching configuration:', error);
res.status(500).json({ error: 'Failed to fetch configuration', details: error.message });
}
});
// Update stock thresholds
router.put('/stock-thresholds/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { critical_days, reorder_days, overstock_days, low_stock_threshold, min_reorder_quantity } = req.body;
const { rows } = await pool.query(
`UPDATE stock_thresholds
SET critical_days = $1,
reorder_days = $2,
overstock_days = $3,
low_stock_threshold = $4,
min_reorder_quantity = $5
WHERE id = $6`,
[critical_days, reorder_days, overstock_days, low_stock_threshold, min_reorder_quantity, req.params.id]
);
res.json({ success: true });
} catch (error) {
console.error('[Config Route] Error updating stock thresholds:', error);
res.status(500).json({ error: 'Failed to update stock thresholds' });
}
});
// Update lead time thresholds
router.put('/lead-time-thresholds/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { target_days, warning_days, critical_days } = req.body;
const { rows } = await pool.query(
`UPDATE lead_time_thresholds
SET target_days = $1,
warning_days = $2,
critical_days = $3
WHERE id = $4`,
[target_days, warning_days, critical_days, req.params.id]
);
res.json({ success: true });
} catch (error) {
console.error('[Config Route] Error updating lead time thresholds:', error);
res.status(500).json({ error: 'Failed to update lead time thresholds' });
}
});
// Update sales velocity config
router.put('/sales-velocity/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { daily_window_days, weekly_window_days, monthly_window_days } = req.body;
const { rows } = await pool.query(
`UPDATE sales_velocity_config
SET daily_window_days = $1,
weekly_window_days = $2,
monthly_window_days = $3
WHERE id = $4`,
[daily_window_days, weekly_window_days, monthly_window_days, req.params.id]
);
res.json({ success: true });
} catch (error) {
console.error('[Config Route] Error updating sales velocity config:', error);
res.status(500).json({ error: 'Failed to update sales velocity config' });
}
});
// Update ABC classification config
router.put('/abc-classification/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { a_threshold, b_threshold, classification_period_days } = req.body;
const { rows } = await pool.query(
`UPDATE abc_classification_config
SET a_threshold = $1,
b_threshold = $2,
classification_period_days = $3
WHERE id = $4`,
[a_threshold, b_threshold, classification_period_days, req.params.id]
);
res.json({ success: true });
} catch (error) {
console.error('[Config Route] Error updating ABC classification config:', error);
res.status(500).json({ error: 'Failed to update ABC classification config' });
}
});
// Update safety stock config
router.put('/safety-stock/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { coverage_days, service_level } = req.body;
const { rows } = await pool.query(
`UPDATE safety_stock_config
SET coverage_days = $1,
service_level = $2
WHERE id = $3`,
[coverage_days, service_level, req.params.id]
);
res.json({ success: true });
} catch (error) {
console.error('[Config Route] Error updating safety stock config:', error);
res.status(500).json({ error: 'Failed to update safety stock config' });
}
});
// Update turnover config
router.put('/turnover/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { calculation_period_days, target_rate } = req.body;
const { rows } = await pool.query(
`UPDATE turnover_config
SET calculation_period_days = $1,
target_rate = $2
WHERE id = $3`,
[calculation_period_days, target_rate, req.params.id]
);
res.json({ success: true });
} catch (error) {
console.error('[Config Route] Error updating turnover config:', error);
res.status(500).json({ error: 'Failed to update turnover config' });
}
});
// Export the router
module.exports = router;
@@ -1,881 +0,0 @@
const express = require('express');
const router = express.Router();
const { spawn } = require('child_process');
const path = require('path');
const db = require('../utils/db');
// Debug middleware MUST be first
router.use((req, res, next) => {
console.log(`[CSV Route Debug] ${req.method} ${req.path}`);
next();
});
// Store active processes and their progress
let activeImport = null;
let importProgress = null;
let activeFullUpdate = null;
let activeFullReset = null;
// SSE clients for progress updates
const updateClients = new Set();
const importClients = new Set();
const resetClients = new Set();
const resetMetricsClients = new Set();
const calculateMetricsClients = new Set();
const fullUpdateClients = new Set();
const fullResetClients = new Set();
// Helper to send progress to specific clients
function sendProgressToClients(clients, data) {
// If data is a string, send it directly
// If it's an object, convert it to JSON
const message = typeof data === 'string'
? `data: ${data}\n\n`
: `data: ${JSON.stringify(data)}\n\n`;
clients.forEach(client => {
try {
client.write(message);
// Immediately flush the response
if (typeof client.flush === 'function') {
client.flush();
}
} catch (error) {
// Silently remove failed client
clients.delete(client);
}
});
}
// Helper to run a script and stream progress
function runScript(scriptPath, type, clients) {
return new Promise((resolve, reject) => {
// Kill any existing process of this type
let activeProcess;
switch (type) {
case 'update':
if (activeFullUpdate) {
try { activeFullUpdate.kill(); } catch (e) { }
}
activeProcess = activeFullUpdate;
break;
case 'reset':
if (activeFullReset) {
try { activeFullReset.kill(); } catch (e) { }
}
activeProcess = activeFullReset;
break;
}
const child = spawn('node', [scriptPath], {
stdio: ['inherit', 'pipe', 'pipe']
});
switch (type) {
case 'update':
activeFullUpdate = child;
break;
case 'reset':
activeFullReset = child;
break;
}
let output = '';
child.stdout.on('data', (data) => {
const text = data.toString();
output += text;
// Split by lines to handle multiple JSON outputs
const lines = text.split('\n');
lines.filter(line => line.trim()).forEach(line => {
try {
// Try to parse as JSON but don't let it affect the display
const jsonData = JSON.parse(line);
// Only end the process if we get a final status
if (jsonData.status === 'complete' || jsonData.status === 'error' || jsonData.status === 'cancelled') {
if (jsonData.status === 'complete' && !jsonData.operation?.includes('complete')) {
// Don't close for intermediate completion messages
sendProgressToClients(clients, line);
return;
}
// Close only on final completion/error/cancellation
switch (type) {
case 'update':
activeFullUpdate = null;
break;
case 'reset':
activeFullReset = null;
break;
}
if (jsonData.status === 'error') {
reject(new Error(jsonData.error || 'Unknown error'));
} else {
resolve({ output });
}
}
} catch (e) {
// Not JSON, just display as is
}
// Always send the raw line
sendProgressToClients(clients, line);
});
});
child.stderr.on('data', (data) => {
const text = data.toString();
console.error(text);
// Send stderr output directly too
sendProgressToClients(clients, text);
});
child.on('close', (code) => {
switch (type) {
case 'update':
activeFullUpdate = null;
break;
case 'reset':
activeFullReset = null;
break;
}
if (code !== 0) {
const error = `Script ${scriptPath} exited with code ${code}`;
sendProgressToClients(clients, error);
reject(new Error(error));
}
// Don't resolve here - let the completion message from the script trigger the resolve
});
child.on('error', (err) => {
switch (type) {
case 'update':
activeFullUpdate = null;
break;
case 'reset':
activeFullReset = null;
break;
}
sendProgressToClients(clients, err.message);
reject(err);
});
});
}
// Progress endpoints
router.get('/:type/progress', (req, res) => {
const { type } = req.params;
if (!['update', 'reset'].includes(type)) {
return res.status(400).json({ error: 'Invalid operation type' });
}
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'Access-Control-Allow-Origin': req.headers.origin || '*',
'Access-Control-Allow-Credentials': 'true'
});
// Add this client to the correct set
const clients = type === 'update' ? fullUpdateClients : fullResetClients;
clients.add(res);
// Send initial connection message
sendProgressToClients(new Set([res]), JSON.stringify({
status: 'running',
operation: 'Initializing connection...'
}));
// Handle client disconnect
req.on('close', () => {
clients.delete(res);
});
});
// Debug endpoint to verify route registration
router.get('/test', (req, res) => {
console.log('CSV test endpoint hit');
res.json({ message: 'CSV routes are working' });
});
// Route to check import status
router.get('/status', (req, res) => {
console.log('CSV status endpoint hit');
res.json({
active: !!activeImport,
progress: importProgress
});
});
// Add calculate-metrics status endpoint
router.get('/calculate-metrics/status', (req, res) => {
const calculateMetrics = require('../../scripts/calculate-metrics');
const progress = calculateMetrics.getProgress();
// Only consider it active if both the process is running and we have progress
const isActive = !!activeImport && !!progress;
res.json({
active: isActive,
progress: isActive ? progress : null
});
});
// Route to update CSV files
router.post('/update', async (req, res, next) => {
if (activeImport) {
return res.status(409).json({ error: 'Import already in progress' });
}
try {
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'update-csv.js');
if (!require('fs').existsSync(scriptPath)) {
return res.status(500).json({ error: 'Update script not found' });
}
activeImport = spawn('node', [scriptPath]);
activeImport.stdout.on('data', (data) => {
const output = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(output);
sendProgressToClients(updateClients, {
status: 'running',
...jsonData
});
} catch (e) {
// If not JSON, send as plain progress
sendProgressToClients(updateClients, {
status: 'running',
progress: output
});
}
});
activeImport.stderr.on('data', (data) => {
const error = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(error);
sendProgressToClients(updateClients, {
status: 'error',
...jsonData
});
} catch {
sendProgressToClients(updateClients, {
status: 'error',
error
});
}
});
await new Promise((resolve, reject) => {
activeImport.on('close', (code) => {
// Don't treat cancellation (code 143/SIGTERM) as an error
if (code === 0 || code === 143) {
sendProgressToClients(updateClients, {
status: 'complete',
operation: code === 143 ? 'Operation cancelled' : 'Update complete'
});
resolve();
} else {
const errorMsg = `Update process exited with code ${code}`;
sendProgressToClients(updateClients, {
status: 'error',
error: errorMsg
});
reject(new Error(errorMsg));
}
activeImport = null;
importProgress = null;
});
});
res.json({ success: true });
} catch (error) {
console.error('Error updating CSV files:', error);
activeImport = null;
importProgress = null;
sendProgressToClients(updateClients, {
status: 'error',
error: error.message
});
next(error);
}
});
// Route to import CSV files
router.post('/import', async (req, res) => {
if (activeImport) {
return res.status(409).json({ error: 'Import already in progress' });
}
try {
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'import-csv.js');
if (!require('fs').existsSync(scriptPath)) {
return res.status(500).json({ error: 'Import script not found' });
}
// Get test limits from request body
const { products = 0, orders = 10000, purchaseOrders = 10000 } = req.body;
// Create environment variables for the script
const env = {
...process.env,
PRODUCTS_TEST_LIMIT: products.toString(),
ORDERS_TEST_LIMIT: orders.toString(),
PURCHASE_ORDERS_TEST_LIMIT: purchaseOrders.toString()
};
activeImport = spawn('node', [scriptPath], { env });
activeImport.stdout.on('data', (data) => {
const output = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(output);
sendProgressToClients(importClients, {
status: 'running',
...jsonData
});
} catch {
// If not JSON, send as plain progress
sendProgressToClients(importClients, {
status: 'running',
progress: output
});
}
});
activeImport.stderr.on('data', (data) => {
const error = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(error);
sendProgressToClients(importClients, {
status: 'error',
...jsonData
});
} catch {
sendProgressToClients(importClients, {
status: 'error',
error
});
}
});
await new Promise((resolve, reject) => {
activeImport.on('close', (code) => {
// Don't treat cancellation (code 143/SIGTERM) as an error
if (code === 0 || code === 143) {
sendProgressToClients(importClients, {
status: 'complete',
operation: code === 143 ? 'Operation cancelled' : 'Import complete'
});
resolve();
} else {
sendProgressToClients(importClients, {
status: 'error',
error: `Process exited with code ${code}`
});
reject(new Error(`Import process exited with code ${code}`));
}
activeImport = null;
importProgress = null;
});
});
res.json({ success: true });
} catch (error) {
console.error('Error importing CSV files:', error);
activeImport = null;
importProgress = null;
sendProgressToClients(importClients, {
status: 'error',
error: error.message
});
res.status(500).json({ error: 'Failed to import CSV files', details: error.message });
}
});
// Route to cancel active process
router.post('/cancel', (req, res) => {
let killed = false;
// Get the operation type from the request
const { type } = req.query;
const clients = type === 'update' ? fullUpdateClients : fullResetClients;
const activeProcess = type === 'update' ? activeFullUpdate : activeFullReset;
if (activeProcess) {
try {
activeProcess.kill('SIGTERM');
if (type === 'update') {
activeFullUpdate = null;
} else {
activeFullReset = null;
}
killed = true;
sendProgressToClients(clients, JSON.stringify({
status: 'cancelled',
operation: 'Operation cancelled'
}));
} catch (err) {
console.error(`Error killing ${type} process:`, err);
}
}
if (killed) {
res.json({ success: true });
} else {
res.status(404).json({ error: 'No active process to cancel' });
}
});
// Route to reset database
router.post('/reset', async (req, res) => {
if (activeImport) {
return res.status(409).json({ error: 'Import already in progress' });
}
try {
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'reset-db.js');
if (!require('fs').existsSync(scriptPath)) {
return res.status(500).json({ error: 'Reset script not found' });
}
activeImport = spawn('node', [scriptPath]);
activeImport.stdout.on('data', (data) => {
const output = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(output);
sendProgressToClients(resetClients, {
status: 'running',
...jsonData
});
} catch (e) {
// If not JSON, send as plain progress
sendProgressToClients(resetClients, {
status: 'running',
progress: output
});
}
});
activeImport.stderr.on('data', (data) => {
const error = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(error);
sendProgressToClients(resetClients, {
status: 'error',
...jsonData
});
} catch {
sendProgressToClients(resetClients, {
status: 'error',
error
});
}
});
await new Promise((resolve, reject) => {
activeImport.on('close', (code) => {
// Don't treat cancellation (code 143/SIGTERM) as an error
if (code === 0 || code === 143) {
sendProgressToClients(resetClients, {
status: 'complete',
operation: code === 143 ? 'Operation cancelled' : 'Reset complete'
});
resolve();
} else {
const errorMsg = `Reset process exited with code ${code}`;
sendProgressToClients(resetClients, {
status: 'error',
error: errorMsg
});
reject(new Error(errorMsg));
}
activeImport = null;
importProgress = null;
});
});
res.json({ success: true });
} catch (error) {
console.error('Error resetting database:', error);
activeImport = null;
importProgress = null;
sendProgressToClients(resetClients, {
status: 'error',
error: error.message
});
res.status(500).json({ error: 'Failed to reset database', details: error.message });
}
});
// Add reset-metrics endpoint
router.post('/reset-metrics', async (req, res) => {
if (activeImport) {
res.status(400).json({ error: 'Operation already in progress' });
return;
}
try {
// Set active import to prevent concurrent operations
activeImport = {
type: 'reset-metrics',
status: 'running',
operation: 'Starting metrics reset'
};
// Send initial response
res.status(200).json({ message: 'Reset metrics started' });
// Send initial progress through SSE
sendProgressToClients(resetMetricsClients, {
status: 'running',
operation: 'Starting metrics reset'
});
// Run the reset metrics script
const resetMetrics = require('../../scripts/reset-metrics');
await resetMetrics();
// Send completion through SSE
sendProgressToClients(resetMetricsClients, {
status: 'complete',
operation: 'Metrics reset completed'
});
activeImport = null;
} catch (error) {
console.error('Error during metrics reset:', error);
// Send error through SSE
sendProgressToClients(resetMetricsClients, {
status: 'error',
error: error.message || 'Failed to reset metrics'
});
activeImport = null;
res.status(500).json({ error: error.message || 'Failed to reset metrics' });
}
});
// Add calculate-metrics endpoint
router.post('/calculate-metrics', async (req, res) => {
if (activeImport) {
return res.status(409).json({ error: 'Another operation is already in progress' });
}
try {
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'calculate-metrics.js');
if (!require('fs').existsSync(scriptPath)) {
return res.status(500).json({ error: 'Calculate metrics script not found' });
}
activeImport = spawn('node', [scriptPath]);
let wasCancelled = false;
activeImport.stdout.on('data', (data) => {
const output = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(output);
importProgress = {
status: 'running',
...jsonData.progress
};
sendProgressToClients(calculateMetricsClients, importProgress);
} catch (e) {
// If not JSON, send as plain progress
importProgress = {
status: 'running',
progress: output
};
sendProgressToClients(calculateMetricsClients, importProgress);
}
});
activeImport.stderr.on('data', (data) => {
if (wasCancelled) return; // Don't send errors if cancelled
const error = data.toString().trim();
try {
// Try to parse as JSON
const jsonData = JSON.parse(error);
importProgress = {
status: 'error',
...jsonData.progress
};
sendProgressToClients(calculateMetricsClients, importProgress);
} catch {
importProgress = {
status: 'error',
error
};
sendProgressToClients(calculateMetricsClients, importProgress);
}
});
await new Promise((resolve, reject) => {
activeImport.on('close', (code, signal) => {
wasCancelled = signal === 'SIGTERM' || code === 143;
activeImport = null;
if (code === 0 || wasCancelled) {
if (wasCancelled) {
importProgress = {
status: 'cancelled',
operation: 'Operation cancelled'
};
sendProgressToClients(calculateMetricsClients, importProgress);
} else {
importProgress = {
status: 'complete',
operation: 'Metrics calculation complete'
};
sendProgressToClients(calculateMetricsClients, importProgress);
}
resolve();
} else {
importProgress = null;
reject(new Error(`Metrics calculation process exited with code ${code}`));
}
});
});
res.json({ success: true });
} catch (error) {
console.error('Error calculating metrics:', error);
activeImport = null;
importProgress = null;
// Only send error if it wasn't a cancellation
if (!error.message?.includes('code 143') && !error.message?.includes('SIGTERM')) {
sendProgressToClients(calculateMetricsClients, {
status: 'error',
error: error.message
});
res.status(500).json({ error: 'Failed to calculate metrics', details: error.message });
} else {
res.json({ success: true });
}
}
});
// Route to import from production database
router.post('/import-from-prod', async (req, res) => {
if (activeImport) {
return res.status(409).json({ error: 'Import already in progress' });
}
try {
const importFromProd = require('../../scripts/import-from-prod');
// Set up progress handler
const progressHandler = (data) => {
importProgress = data;
sendProgressToClients(importClients, data);
};
// Start the import process
importFromProd.outputProgress = progressHandler;
activeImport = importFromProd; // Store the module for cancellation
// Run the import in the background
importFromProd.main().catch(error => {
console.error('Error in import process:', error);
activeImport = null;
importProgress = {
status: error.message === 'Import cancelled' ? 'cancelled' : 'error',
operation: 'Import process',
error: error.message
};
sendProgressToClients(importClients, importProgress);
}).finally(() => {
activeImport = null;
});
res.json({ message: 'Import from production started' });
} catch (error) {
console.error('Error starting production import:', error);
activeImport = null;
res.status(500).json({ error: error.message || 'Failed to start production import' });
}
});
// POST /csv/full-update - Run full update script
router.post('/full-update', async (req, res) => {
try {
const scriptPath = path.join(__dirname, '../../scripts/full-update.js');
runScript(scriptPath, 'update', fullUpdateClients)
.catch(error => {
console.error('Update failed:', error);
});
res.status(202).json({ message: 'Update started' });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
// POST /csv/full-reset - Run full reset script
router.post('/full-reset', async (req, res) => {
try {
const scriptPath = path.join(__dirname, '../../scripts/full-reset.js');
runScript(scriptPath, 'reset', fullResetClients)
.catch(error => {
console.error('Reset failed:', error);
});
res.status(202).json({ message: 'Reset started' });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
// GET /history/import - Get recent import history
router.get('/history/import', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
id,
start_time,
end_time,
status,
error_message,
records_added::integer,
records_updated::integer
FROM import_history
ORDER BY start_time DESC
LIMIT 20
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching import history:', error);
res.status(500).json({ error: error.message });
}
});
// GET /history/calculate - Get recent calculation history
router.get('/history/calculate', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
id,
start_time,
end_time,
duration_minutes,
status,
error_message,
total_products,
total_orders,
total_purchase_orders,
processed_products,
processed_orders,
processed_purchase_orders,
additional_info
FROM calculate_history
ORDER BY start_time DESC
LIMIT 20
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching calculate history:', error);
res.status(500).json({ error: error.message });
}
});
// GET /status/modules - Get module calculation status
router.get('/status/modules', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
module_name,
last_calculation_timestamp::timestamp
FROM calculate_status
ORDER BY module_name
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching module status:', error);
res.status(500).json({ error: error.message });
}
});
// GET /status/tables - Get table sync status
router.get('/status/tables', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
table_name,
last_sync_timestamp::timestamp
FROM sync_status
ORDER BY table_name
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching table status:', error);
res.status(500).json({ error: error.message });
}
});
// GET /status/table-counts - Get record counts for all tables
router.get('/status/table-counts', async (req, res) => {
try {
const pool = req.app.locals.pool;
const tables = [
// Core tables
'products', 'categories', 'product_categories', 'orders', 'purchase_orders',
// New metrics tables
'product_metrics', 'daily_product_snapshots',
// Config tables
'settings_global', 'settings_vendor', 'settings_product'
];
const counts = await Promise.all(
tables.map(table =>
pool.query(`SELECT COUNT(*) as count FROM ${table}`)
.then(result => ({
table_name: table,
count: parseInt(result.rows[0].count)
}))
.catch(err => ({
table_name: table,
count: null,
error: err.message
}))
)
);
// Group tables by type
const groupedCounts = {
core: counts.filter(c => ['products', 'categories', 'product_categories', 'orders', 'purchase_orders'].includes(c.table_name)),
metrics: counts.filter(c => ['product_metrics', 'daily_product_snapshots'].includes(c.table_name)),
config: counts.filter(c => ['settings_global', 'settings_vendor', 'settings_product'].includes(c.table_name))
};
res.json(groupedCounts);
} catch (error) {
console.error('Error fetching table counts:', error);
res.status(500).json({ error: error.message });
}
});
module.exports = router;
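For context, every progress update these routes push goes through `sendProgressToClients`, which frames the event in the Server-Sent Events wire format: a `data:` line followed by a blank line. A minimal standalone sketch of that framing (the `formatSseFrame` helper name is illustrative, not part of the routes file):

```javascript
// Illustrative helper mirroring how sendProgressToClients frames events:
// strings pass through untouched, objects are JSON-encoded, and every
// SSE frame ends with a blank line ("\n\n") so the browser's
// EventSource fires a message event.
function formatSseFrame(data) {
  const payload = typeof data === 'string' ? data : JSON.stringify(data);
  return `data: ${payload}\n\n`;
}

console.log(formatSseFrame({ status: 'running', operation: 'Importing orders' }));
```

On the client side, a `new EventSource(...)` pointed at the matching progress endpoint (endpoint path assumed) would receive each frame's payload as `event.data`.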

---

@@ -22,11 +22,11 @@ router.get('/stock/metrics', async (req, res) => {
     const { rows: [stockMetrics] } = await executeQuery(`
       SELECT
         COALESCE(COUNT(*), 0)::integer as total_products,
-        COALESCE(COUNT(CASE WHEN stock_quantity > 0 THEN 1 END), 0)::integer as products_in_stock,
-        COALESCE(SUM(CASE WHEN stock_quantity > 0 THEN stock_quantity END), 0)::integer as total_units,
-        ROUND(COALESCE(SUM(CASE WHEN stock_quantity > 0 THEN stock_quantity * cost_price END), 0)::numeric, 3) as total_cost,
-        ROUND(COALESCE(SUM(CASE WHEN stock_quantity > 0 THEN stock_quantity * price END), 0)::numeric, 3) as total_retail
-      FROM products
+        COALESCE(COUNT(CASE WHEN current_stock > 0 THEN 1 END), 0)::integer as products_in_stock,
+        COALESCE(SUM(CASE WHEN current_stock > 0 THEN current_stock END), 0)::integer as total_units,
+        ROUND(COALESCE(SUM(CASE WHEN current_stock > 0 THEN current_stock_cost END), 0)::numeric, 3) as total_cost,
+        ROUND(COALESCE(SUM(CASE WHEN current_stock > 0 THEN current_stock_retail END), 0)::numeric, 3) as total_retail
+      FROM product_metrics
     `);
     console.log('Raw stockMetrics from database:', stockMetrics);
@@ -42,13 +42,13 @@ router.get('/stock/metrics', async (req, res) => {
       SELECT
         COALESCE(brand, 'Unbranded') as brand,
         COUNT(DISTINCT pid)::integer as variant_count,
-        COALESCE(SUM(stock_quantity), 0)::integer as stock_units,
-        ROUND(COALESCE(SUM(stock_quantity * cost_price), 0)::numeric, 3) as stock_cost,
-        ROUND(COALESCE(SUM(stock_quantity * price), 0)::numeric, 3) as stock_retail
-      FROM products
-      WHERE stock_quantity > 0
+        COALESCE(SUM(current_stock), 0)::integer as stock_units,
+        ROUND(COALESCE(SUM(current_stock_cost), 0)::numeric, 3) as stock_cost,
+        ROUND(COALESCE(SUM(current_stock_retail), 0)::numeric, 3) as stock_retail
+      FROM product_metrics
+      WHERE current_stock > 0
       GROUP BY COALESCE(brand, 'Unbranded')
-      HAVING ROUND(COALESCE(SUM(stock_quantity * cost_price), 0)::numeric, 3) > 0
+      HAVING ROUND(COALESCE(SUM(current_stock_cost), 0)::numeric, 3) > 0
     ),
     other_brands AS (
       SELECT
@@ -130,11 +130,11 @@ router.get('/purchase/metrics', async (req, res) => {
         END), 0)::numeric, 3) as total_cost,
         ROUND(COALESCE(SUM(CASE
           WHEN po.receiving_status NOT IN ('partial_received', 'full_received', 'paid')
-          THEN po.ordered * p.price
+          THEN po.ordered * pm.current_price
           ELSE 0
         END), 0)::numeric, 3) as total_retail
       FROM purchase_orders po
-      JOIN products p ON po.pid = p.pid
+      JOIN product_metrics pm ON po.pid = pm.pid
     `);
     const { rows: vendorOrders } = await executeQuery(`
@@ -143,9 +143,9 @@ router.get('/purchase/metrics', async (req, res) => {
         COUNT(DISTINCT po.po_id)::integer as orders,
         COALESCE(SUM(po.ordered), 0)::integer as units,
         ROUND(COALESCE(SUM(po.ordered * po.cost_price), 0)::numeric, 3) as cost,
-        ROUND(COALESCE(SUM(po.ordered * p.price), 0)::numeric, 3) as retail
+        ROUND(COALESCE(SUM(po.ordered * pm.current_price), 0)::numeric, 3) as retail
       FROM purchase_orders po
-      JOIN products p ON po.pid = p.pid
+      JOIN product_metrics pm ON po.pid = pm.pid
       WHERE po.receiving_status NOT IN ('partial_received', 'full_received', 'paid')
       GROUP BY po.vendor
       HAVING ROUND(COALESCE(SUM(po.ordered * po.cost_price), 0)::numeric, 3) > 0
@@ -223,54 +223,35 @@ router.get('/replenishment/metrics', async (req, res) => {
     // Get summary metrics
     const { rows: [metrics] } = await executeQuery(`
       SELECT
-        COUNT(DISTINCT p.pid)::integer as products_to_replenish,
-        COALESCE(SUM(CASE
-          WHEN p.stock_quantity < 0 THEN ABS(p.stock_quantity) + pm.reorder_qty
-          ELSE pm.reorder_qty
-        END), 0)::integer as total_units_needed,
-        ROUND(COALESCE(SUM(CASE
-          WHEN p.stock_quantity < 0 THEN (ABS(p.stock_quantity) + pm.reorder_qty) * p.cost_price
-          ELSE pm.reorder_qty * p.cost_price
-        END), 0)::numeric, 3) as total_cost,
-        ROUND(COALESCE(SUM(CASE
-          WHEN p.stock_quantity < 0 THEN (ABS(p.stock_quantity) + pm.reorder_qty) * p.price
-          ELSE pm.reorder_qty * p.price
-        END), 0)::numeric, 3) as total_retail
-      FROM products p
-      JOIN product_metrics pm ON p.pid = pm.pid
-      WHERE p.replenishable = true
-        AND (pm.stock_status IN ('Critical', 'Reorder')
-          OR p.stock_quantity < 0)
-        AND pm.reorder_qty > 0
+        COUNT(DISTINCT pm.pid)::integer as products_to_replenish,
+        COALESCE(SUM(pm.replenishment_units), 0)::integer as total_units_needed,
+        ROUND(COALESCE(SUM(pm.replenishment_cost), 0)::numeric, 3) as total_cost,
+        ROUND(COALESCE(SUM(pm.replenishment_retail), 0)::numeric, 3) as total_retail
+      FROM product_metrics pm
+      WHERE pm.is_replenishable = true
+        AND (pm.status IN ('Critical', 'Reorder')
+          OR pm.current_stock < 0)
+        AND pm.replenishment_units > 0
     `);
     // Get top variants to replenish
     const { rows: variants } = await executeQuery(`
       SELECT
-        p.pid,
-        p.title,
-        p.stock_quantity::integer as current_stock,
-        CASE
-          WHEN p.stock_quantity < 0 THEN ABS(p.stock_quantity) + pm.reorder_qty
-          ELSE pm.reorder_qty
-        END::integer as replenish_qty,
-        ROUND(CASE
-          WHEN p.stock_quantity < 0 THEN (ABS(p.stock_quantity) + pm.reorder_qty) * p.cost_price
-          ELSE pm.reorder_qty * p.cost_price
-        END::numeric, 3) as replenish_cost,
-        ROUND(CASE
-          WHEN p.stock_quantity < 0 THEN (ABS(p.stock_quantity) + pm.reorder_qty) * p.price
-          ELSE pm.reorder_qty * p.price
-        END::numeric, 3) as replenish_retail,
-        pm.stock_status
-      FROM products p
-      JOIN product_metrics pm ON p.pid = pm.pid
-      WHERE p.replenishable = true
-        AND (pm.stock_status IN ('Critical', 'Reorder')
-          OR p.stock_quantity < 0)
-        AND pm.reorder_qty > 0
+        pm.pid,
+        pm.title,
+        pm.current_stock::integer as current_stock,
+        pm.replenishment_units::integer as replenish_qty,
+        ROUND(pm.replenishment_cost::numeric, 3) as replenish_cost,
+        ROUND(pm.replenishment_retail::numeric, 3) as replenish_retail,
+        pm.status,
+        pm.planning_period_days::text as planning_period
+      FROM product_metrics pm
+      WHERE pm.is_replenishable = true
+        AND (pm.status IN ('Critical', 'Reorder')
+          OR pm.current_stock < 0)
+        AND pm.replenishment_units > 0
       ORDER BY
-        CASE pm.stock_status
+        CASE pm.status
           WHEN 'Critical' THEN 1
           WHEN 'Reorder' THEN 2
         END,
@@ -280,7 +261,7 @@ router.get('/replenishment/metrics', async (req, res) => {
     // If no data, provide dummy data
     if (!metrics || variants.length === 0) {
-      console.log('No replenishment metrics found, returning dummy data');
+      console.log('No replenishment metrics found in new schema, returning dummy data');
       return res.json({
         productsToReplenish: 15,
@@ -288,11 +269,11 @@ router.get('/replenishment/metrics', async (req, res) => {
         replenishmentCost: 15000.00,
         replenishmentRetail: 30000.00,
         topVariants: [
-          { id: 1, title: "Test Product 1", currentStock: 5, replenishQty: 20, replenishCost: 500, replenishRetail: 1000, status: "Critical" },
-          { id: 2, title: "Test Product 2", currentStock: 10, replenishQty: 15, replenishCost: 450, replenishRetail: 900, status: "Critical" },
-          { id: 3, title: "Test Product 3", currentStock: 15, replenishQty: 10, replenishCost: 300, replenishRetail: 600, status: "Reorder" },
-          { id: 4, title: "Test Product 4", currentStock: 20, replenishQty: 20, replenishCost: 200, replenishRetail: 400, status: "Reorder" },
-          { id: 5, title: "Test Product 5", currentStock: 25, replenishQty: 10, replenishCost: 150, replenishRetail: 300, status: "Reorder" }
+          { id: 1, title: "Test Product 1", currentStock: 5, replenishQty: 20, replenishCost: 500, replenishRetail: 1000, status: "Critical", planningPeriod: "30" },
+          { id: 2, title: "Test Product 2", currentStock: 10, replenishQty: 15, replenishCost: 450, replenishRetail: 900, status: "Critical", planningPeriod: "30" },
+          { id: 3, title: "Test Product 3", currentStock: 15, replenishQty: 10, replenishCost: 300, replenishRetail: 600, status: "Reorder", planningPeriod: "30" },
+          { id: 4, title: "Test Product 4", currentStock: 20, replenishQty: 20, replenishCost: 200, replenishRetail: 400, status: "Reorder", planningPeriod: "30" },
+          { id: 5, title: "Test Product 5", currentStock: 25, replenishQty: 10, replenishCost: 150, replenishRetail: 300, status: "Reorder", planningPeriod: "30" }
         ]
       });
     }
@@ -310,7 +291,8 @@ router.get('/replenishment/metrics', async (req, res) => {
         replenishQty: parseInt(v.replenish_qty) || 0,
         replenishCost: parseFloat(v.replenish_cost) || 0,
         replenishRetail: parseFloat(v.replenish_retail) || 0,
-        status: v.stock_status
+        status: v.status,
+        planningPeriod: v.planning_period
       }))
     };
@@ -325,11 +307,11 @@ router.get('/replenishment/metrics', async (req, res) => {
         replenishmentCost: 15000.00,
         replenishmentRetail: 30000.00,
         topVariants: [
-          { id: 1, title: "Test Product 1", currentStock: 5, replenishQty: 20, replenishCost: 500, replenishRetail: 1000, status: "Critical" },
-          { id: 2, title: "Test Product 2", currentStock: 10, replenishQty: 15, replenishCost: 450, replenishRetail: 900, status: "Critical" },
-          { id: 3, title: "Test Product 3", currentStock: 15, replenishQty: 10, replenishCost: 300, replenishRetail: 600, status: "Reorder" },
-          { id: 4, title: "Test Product 4", currentStock: 20, replenishQty: 20, replenishCost: 200, replenishRetail: 400, status: "Reorder" },
-          { id: 5, title: "Test Product 5", currentStock: 25, replenishQty: 10, replenishCost: 150, replenishRetail: 300, status: "Reorder" }
+          { id: 1, title: "Test Product 1", currentStock: 5, replenishQty: 20, replenishCost: 500, replenishRetail: 1000, status: "Critical", planningPeriod: "30" },
+          { id: 2, title: "Test Product 2", currentStock: 10, replenishQty: 15, replenishCost: 450, replenishRetail: 900, status: "Critical", planningPeriod: "30" },
+          { id: 3, title: "Test Product 3", currentStock: 15, replenishQty: 10, replenishCost: 300, replenishRetail: 600, status: "Reorder", planningPeriod: "30" },
+          { id: 4, title: "Test Product 4", currentStock: 20, replenishQty: 20, replenishCost: 200, replenishRetail: 400, status: "Reorder", planningPeriod: "30" },
+          { id: 5, title: "Test Product 5", currentStock: 25, replenishQty: 10, replenishCost: 150, replenishRetail: 300, status: "Reorder", planningPeriod: "30" }
         ]
       });
     }
@@ -499,74 +481,15 @@ router.get('/forecast/metrics', async (req, res) => {
 // Returns overstock metrics by category
 router.get('/overstock/metrics', async (req, res) => {
   try {
-    const { rows } = await executeQuery(`
-      WITH category_overstock AS (
-        SELECT
-          c.cat_id,
-          c.name as category_name,
-          COUNT(DISTINCT CASE
-            WHEN pm.stock_status = 'Overstocked'
-            THEN p.pid
-          END) as overstocked_products,
-          SUM(CASE
-            WHEN pm.stock_status = 'Overstocked'
-            THEN pm.overstocked_amt
-            ELSE 0
-          END) as total_excess_units,
-          SUM(CASE
-            WHEN pm.stock_status = 'Overstocked'
-            THEN pm.overstocked_amt * p.cost_price
-            ELSE 0
-          END) as total_excess_cost,
-          SUM(CASE
-            WHEN pm.stock_status = 'Overstocked'
-            THEN pm.overstocked_amt * p.price
-            ELSE 0
-          END) as total_excess_retail
-        FROM categories c
-        JOIN product_categories pc ON c.cat_id = pc.cat_id
-        JOIN products p ON pc.pid = p.pid
-        JOIN product_metrics pm ON p.pid = pm.pid
-        GROUP BY c.cat_id, c.name
-      ),
-      filtered_categories AS (
-        SELECT *
-        FROM category_overstock
-        WHERE overstocked_products > 0
-        ORDER BY total_excess_cost DESC
-        LIMIT 8
-      ),
-      summary AS (
-        SELECT
-          SUM(overstocked_products) as total_overstocked,
-          SUM(total_excess_units) as total_excess_units,
-          SUM(total_excess_cost) as total_excess_cost,
-          SUM(total_excess_retail) as total_excess_retail
-        FROM filtered_categories
-      )
-      SELECT
-        s.total_overstocked,
-        s.total_excess_units,
-        s.total_excess_cost,
-        s.total_excess_retail,
-        json_agg(
-          json_build_object(
-            'category', fc.category_name,
-            'products', fc.overstocked_products,
-            'units', fc.total_excess_units,
-            'cost', fc.total_excess_cost,
-            'retail', fc.total_excess_retail
-          )
-        ) as category_data
-      FROM summary s, filtered_categories fc
-      GROUP BY
-        s.total_overstocked,
-        s.total_excess_units,
-        s.total_excess_cost,
-        s.total_excess_retail
+    // Check if we have any products with Overstock status
+    const { rows: [countCheck] } = await executeQuery(`
+      SELECT COUNT(*) as overstock_count FROM product_metrics WHERE status = 'Overstock'
     `);
-    if (rows.length === 0) {
+    console.log('Overstock count:', countCheck.overstock_count);
+    // If no overstock products, return empty metrics
+    if (parseInt(countCheck.overstock_count) === 0) {
       return res.json({
         overstockedProducts: 0,
         total_excess_units: 0,
@@ -575,31 +498,51 @@ router.get('/overstock/metrics', async (req, res) => {
         category_data: []
       });
     }
-    // Generate dummy data if the query returned empty results
-    if (rows[0].total_overstocked === null || rows[0].total_excess_units === null) {
-      console.log('Empty overstock metrics results, returning dummy data');
-      return res.json({
-        overstockedProducts: 10,
-        total_excess_units: 500,
-        total_excess_cost: 5000,
-        total_excess_retail: 10000,
-        category_data: [
-          { category: "Electronics", products: 3, units: 150, cost: 1500, retail: 3000 },
-          { category: "Clothing", products: 4, units: 200, cost: 2000, retail: 4000 },
-          { category: "Home Goods", products: 2, units: 100, cost: 1000, retail: 2000 },
-          { category: "Office Supplies", products: 1, units: 50, cost: 500, retail: 1000 }
-        ]
-      });
-    }
+    // Get summary metrics in a simpler, more direct query
+    const { rows: [summaryMetrics] } = await executeQuery(`
+      SELECT
+        COUNT(DISTINCT pid)::integer as total_overstocked,
+        SUM(overstocked_units)::integer as total_excess_units,
+        ROUND(SUM(overstocked_cost)::numeric, 3) as total_excess_cost,
+        ROUND(SUM(overstocked_retail)::numeric, 3) as total_excess_retail
+      FROM product_metrics
+      WHERE status = 'Overstock'
+    `);
+    // Get category breakdowns separately
+    const { rows: categoryData } = await executeQuery(`
+      SELECT
+        c.name as category_name,
+        COUNT(DISTINCT pm.pid)::integer as overstocked_products,
+        SUM(pm.overstocked_units)::integer as total_excess_units,
+        ROUND(SUM(pm.overstocked_cost)::numeric, 3) as total_excess_cost,
+        ROUND(SUM(pm.overstocked_retail)::numeric, 3) as total_excess_retail
+      FROM categories c
+      JOIN product_categories pc ON c.cat_id = pc.cat_id
+      JOIN product_metrics pm ON pc.pid = pm.pid
+      WHERE pm.status = 'Overstock'
+      GROUP BY c.name
+      ORDER BY total_excess_cost DESC
+      LIMIT 8
+    `);
+    console.log('Summary metrics:', summaryMetrics);
+    console.log('Category data count:', categoryData.length);
     // Format response with explicit type conversion
     const response = {
-      overstockedProducts: parseInt(rows[0].total_overstocked) || 0,
-      total_excess_units: parseInt(rows[0].total_excess_units) || 0,
-      total_excess_cost: parseFloat(rows[0].total_excess_cost) || 0,
-      total_excess_retail: parseFloat(rows[0].total_excess_retail) || 0,
-      category_data: rows[0].category_data || []
+      overstockedProducts: parseInt(summaryMetrics.total_overstocked) || 0,
+      total_excess_units: parseInt(summaryMetrics.total_excess_units) || 0,
+      total_excess_cost: parseFloat(summaryMetrics.total_excess_cost) || 0,
+      total_excess_retail: parseFloat(summaryMetrics.total_excess_retail) || 0,
+      category_data: categoryData.map(cat => ({
+        category: cat.category_name,
+        products: parseInt(cat.overstocked_products) || 0,
+        units: parseInt(cat.total_excess_units) || 0,
+        cost: parseFloat(cat.total_excess_cost) || 0,
+        retail: parseFloat(cat.total_excess_retail) || 0
+      }))
     };
     res.json(response);
@@ -629,27 +572,26 @@
 router.get('/overstock/products', async (req, res) => {
   try {
     const { rows } = await executeQuery(`
      SELECT
-        p.pid,
-        p.SKU,
-        p.title,
-        p.brand,
-        p.vendor,
-        p.stock_quantity,
-        p.cost_price,
-        p.price,
-        pm.daily_sales_avg,
-        pm.days_of_inventory,
-        pm.overstocked_amt,
-        (pm.overstocked_amt * p.cost_price) as excess_cost,
-        (pm.overstocked_amt * p.price) as excess_retail,
+        pm.pid,
+        pm.sku AS SKU,
+        pm.title,
+        pm.brand,
+        pm.vendor,
+        pm.current_stock as stock_quantity,
+        pm.current_cost_price as cost_price,
+        pm.current_price as price,
+        pm.sales_velocity_daily as daily_sales_avg,
+        pm.stock_cover_in_days as days_of_inventory,
+        pm.overstocked_units,
+        pm.overstocked_cost as excess_cost,
+        pm.overstocked_retail as excess_retail,
         STRING_AGG(c.name, ', ') as categories
-      FROM products p
-      JOIN product_metrics pm ON p.pid = pm.pid
-      LEFT JOIN product_categories pc ON p.pid = pc.pid
+      FROM product_metrics pm
+      LEFT JOIN product_categories pc ON pm.pid = pc.pid
       LEFT JOIN categories c ON pc.cat_id = c.cat_id
-      WHERE pm.stock_status = 'Overstocked'
-      GROUP BY p.pid, p.SKU, p.title, p.brand, p.vendor, p.stock_quantity, p.cost_price, p.price,
-        pm.daily_sales_avg, pm.days_of_inventory, pm.overstocked_amt
+      WHERE pm.status = 'Overstock'
+      GROUP BY pm.pid, pm.sku, pm.title, pm.brand, pm.vendor, pm.current_stock, pm.current_cost_price, pm.current_price,
+        pm.sales_velocity_daily, pm.stock_cover_in_days, pm.overstocked_units, pm.overstocked_cost, pm.overstocked_retail
       ORDER BY excess_cost DESC
       LIMIT $1
     `, [limit]);
@@ -827,42 +769,38 @@ router.get('/sales/metrics', async (req, res) => {
   const endDate = req.query.endDate || today.toISOString();
   try {
-    // Get daily sales data
+    // Get daily orders and totals for the specified period
     const { rows: dailyRows } = await executeQuery(`
       SELECT
-        DATE(o.date) as sale_date,
-        COUNT(DISTINCT o.order_number) as total_orders,
-        SUM(o.quantity) as total_units,
-        SUM(o.price * o.quantity) as total_revenue,
-        SUM(p.cost_price * o.quantity) as total_cogs,
-        SUM((o.price - p.cost_price) * o.quantity) as total_profit
-      FROM orders o
-      JOIN products p ON o.pid = p.pid
-      WHERE o.canceled = false
-        AND o.date BETWEEN $1 AND $2
-      GROUP BY DATE(o.date)
+        DATE(date) as sale_date,
+        COUNT(DISTINCT order_number) as total_orders,
+        SUM(quantity) as total_units,
+        SUM(price * quantity) as total_revenue,
+        SUM(costeach * quantity) as total_cogs
+      FROM orders
+      WHERE date BETWEEN $1 AND $2
+        AND canceled = false
+      GROUP BY DATE(date)
       ORDER BY sale_date
     `, [startDate, endDate]);
-    // Get summary metrics
-    const { rows: metrics } = await executeQuery(`
+    // Get overall metrics for the period
+    const { rows: [metrics] } = await executeQuery(`
       SELECT
-        COUNT(DISTINCT o.order_number) as total_orders,
-        SUM(o.quantity) as total_units,
-        SUM(o.price * o.quantity) as total_revenue,
-        SUM(p.cost_price * o.quantity) as total_cogs,
-        SUM((o.price - p.cost_price) * o.quantity) as total_profit
-      FROM orders o
-      JOIN products p ON o.pid = p.pid
-      WHERE o.canceled = false
-        AND o.date BETWEEN $1 AND $2
+        COUNT(DISTINCT order_number) as total_orders,
+        SUM(quantity) as total_units,
+        SUM(price * quantity) as total_revenue,
+        SUM(costeach * quantity) as total_cogs
+      FROM orders
+      WHERE date BETWEEN $1 AND $2
+        AND canceled = false
     `, [startDate, endDate]);
     const response = {
-      totalOrders: parseInt(metrics[0]?.total_orders) || 0,
-      totalUnitsSold: parseInt(metrics[0]?.total_units) || 0,
-      totalCogs: parseFloat(metrics[0]?.total_cogs) || 0,
-      totalRevenue: parseFloat(metrics[0]?.total_revenue) || 0,
+      totalOrders: parseInt(metrics?.total_orders) || 0,
+      totalUnitsSold: parseInt(metrics?.total_units) || 0,
+      totalCogs: parseFloat(metrics?.total_cogs) || 0,
+      totalRevenue: parseFloat(metrics?.total_revenue) || 0,
       dailySales: dailyRows.map(day => ({
         date: day.sale_date,
         units: parseInt(day.total_units) || 0,
@@ -1304,39 +1242,33 @@ router.get('/inventory-health', async (req, res) => {
 });
 // GET /dashboard/replenish/products
-// Returns top products that need replenishment
+// Returns list of products to replenish
 router.get('/replenish/products', async (req, res) => {
-  const limit = Math.max(1, Math.min(100, parseInt(req.query.limit) || 50));
+  const limit = parseInt(req.query.limit) || 50;
   try {
-    const { rows: products } = await executeQuery(`
+    const { rows } = await executeQuery(`
       SELECT
-        p.pid,
-        p.SKU as sku,
-        p.title,
-        p.stock_quantity,
-        pm.daily_sales_avg,
-        pm.reorder_qty,
-        pm.last_purchase_date
-      FROM products p
-      JOIN product_metrics pm ON p.pid = pm.pid
-      WHERE p.replenishable = true
-        AND pm.stock_status IN ('Critical', 'Reorder')
-        AND pm.reorder_qty > 0
+        pm.pid,
+        pm.sku,
+        pm.title,
+        pm.current_stock AS stock_quantity,
+        pm.sales_velocity_daily AS daily_sales_avg,
+        pm.replenishment_units AS reorder_qty,
+        pm.date_last_received AS last_purchase_date
+      FROM product_metrics pm
+      WHERE pm.is_replenishable = true
+        AND (pm.status IN ('Critical', 'Reorder')
+          OR pm.current_stock < 0)
+        AND pm.replenishment_units > 0
       ORDER BY
-        CASE pm.stock_status
+        CASE pm.status
           WHEN 'Critical' THEN 1
           WHEN 'Reorder' THEN 2
         END,
-        pm.reorder_qty * p.cost_price DESC
+        pm.replenishment_cost DESC
       LIMIT $1
     `, [limit]);
-    res.json(products.map(p => ({
-      ...p,
-      stock_quantity: parseInt(p.stock_quantity) || 0,
-      daily_sales_avg: parseFloat(p.daily_sales_avg) || 0,
-      reorder_qty: parseInt(p.reorder_qty) || 0
-    })));
+    res.json(rows);
   } catch (err) {
     console.error('Error fetching products to replenish:', err);
     res.status(500).json({ error: 'Failed to fetch products to replenish' });
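One detail behind the old handler's `.map()` and the new handler's `::integer` casts: node-postgres returns `BIGINT` and `NUMERIC` columns as strings, so something has to coerce them before the JSON response. A standalone sketch of the normalization pattern the old code used (the row shape and helper name are illustrative):

```javascript
// pg drivers hand back NUMERIC/BIGINT columns as strings; coerce them
// to numbers (with a 0 fallback for NULLs) before sending JSON to the
// frontend, mirroring the parseInt/parseFloat mapping in the old route.
function normalizeReplenishRow(row) {
  return {
    ...row,
    stock_quantity: parseInt(row.stock_quantity, 10) || 0,
    daily_sales_avg: parseFloat(row.daily_sales_avg) || 0,
    reorder_qty: parseInt(row.reorder_qty, 10) || 0
  };
}

console.log(normalizeReplenishRow({ pid: 42, stock_quantity: '17', daily_sales_avg: '0.75', reorder_qty: null }));
```

The rewritten query pushes this coercion into SQL casts instead, so the handler can return `rows` directly; either approach works, but doing it in SQL keeps the JS handler thinner.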

---

@@ -0,0 +1,390 @@
const express = require('express');
const router = express.Router();
const { spawn } = require('child_process');
const path = require('path');
const db = require('../utils/db');
// Debug middleware MUST be first
router.use((req, res, next) => {
console.log(`[CSV Route Debug] ${req.method} ${req.path}`);
next();
});
// Store active processes and their progress
let activeImport = null;
let importProgress = null;
let activeFullUpdate = null;
let activeFullReset = null;
// SSE clients for progress updates
const updateClients = new Set();
const importClients = new Set();
const resetClients = new Set();
const resetMetricsClients = new Set();
const calculateMetricsClients = new Set();
const fullUpdateClients = new Set();
const fullResetClients = new Set();
// Helper to send progress to specific clients
function sendProgressToClients(clients, data) {
// If data is a string, send it directly
// If it's an object, convert it to JSON
const message = typeof data === 'string'
? `data: ${data}\n\n`
: `data: ${JSON.stringify(data)}\n\n`;
clients.forEach(client => {
try {
client.write(message);
// Immediately flush the response
if (typeof client.flush === 'function') {
client.flush();
}
} catch (error) {
// Silently remove failed client
clients.delete(client);
}
});
}
// Helper to run a script and stream progress
function runScript(scriptPath, type, clients) {
return new Promise((resolve, reject) => {
// Kill any existing process of this type before starting a new one
switch (type) {
case 'update':
if (activeFullUpdate) {
try { activeFullUpdate.kill(); } catch (e) { /* process already exited */ }
}
break;
case 'reset':
if (activeFullReset) {
try { activeFullReset.kill(); } catch (e) { /* process already exited */ }
}
break;
}
const child = spawn('node', [scriptPath], {
stdio: ['inherit', 'pipe', 'pipe']
});
switch (type) {
case 'update':
activeFullUpdate = child;
break;
case 'reset':
activeFullReset = child;
break;
}
let output = '';
child.stdout.on('data', (data) => {
const text = data.toString();
output += text;
// Split by lines to handle multiple JSON outputs
const lines = text.split('\n');
lines.filter(line => line.trim()).forEach(line => {
try {
// Try to parse as JSON but don't let it affect the display
const jsonData = JSON.parse(line);
// Only end the process if we get a final status
if (jsonData.status === 'complete' || jsonData.status === 'error' || jsonData.status === 'cancelled') {
if (jsonData.status === 'complete' && !jsonData.operation?.includes('complete')) {
// Don't close for intermediate completion messages
sendProgressToClients(clients, line);
return;
}
// Close only on final completion/error/cancellation
switch (type) {
case 'update':
activeFullUpdate = null;
break;
case 'reset':
activeFullReset = null;
break;
}
if (jsonData.status === 'error') {
reject(new Error(jsonData.error || 'Unknown error'));
} else {
resolve({ output });
}
}
} catch (e) {
// Not JSON, just display as is
}
// Always send the raw line
sendProgressToClients(clients, line);
});
});
child.stderr.on('data', (data) => {
const text = data.toString();
console.error(text);
// Send stderr output directly too
sendProgressToClients(clients, text);
});
child.on('close', (code) => {
switch (type) {
case 'update':
activeFullUpdate = null;
break;
case 'reset':
activeFullReset = null;
break;
}
if (code !== 0) {
const error = `Script ${scriptPath} exited with code ${code}`;
sendProgressToClients(clients, error);
reject(new Error(error));
}
// Don't resolve here - let the completion message from the script trigger the resolve
});
child.on('error', (err) => {
switch (type) {
case 'update':
activeFullUpdate = null;
break;
case 'reset':
activeFullReset = null;
break;
}
sendProgressToClients(clients, err.message);
reject(err);
});
});
}
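The stdout handling above forwards every line to SSE clients, but only certain JSON lines terminate the run. A standalone sketch of that rule (where `isFinalStatusLine` is a hypothetical extraction, not a function in the module):

```javascript
// Hypothetical extraction of runScript's termination rule: a stdout line ends
// the operation only if it parses as JSON with status 'error', 'cancelled',
// or a *final* 'complete' (operation text itself contains 'complete').
function isFinalStatusLine(line) {
  try {
    const json = JSON.parse(line);
    if (json.status === 'error' || json.status === 'cancelled') return true;
    return json.status === 'complete' && Boolean(json.operation?.includes('complete'));
  } catch (e) {
    return false; // plain log text is forwarded but never terminates the run
  }
}

console.log(isFinalStatusLine('Processing batch 3 of 10')); // false
console.log(isFinalStatusLine(JSON.stringify({ status: 'complete', operation: 'import' }))); // false (intermediate)
console.log(isFinalStatusLine(JSON.stringify({ status: 'complete', operation: 'full update complete' }))); // true
```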
// Progress endpoints
router.get('/:type/progress', (req, res) => {
const { type } = req.params;
if (!['update', 'reset'].includes(type)) {
return res.status(400).json({ error: 'Invalid operation type' });
}
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'Access-Control-Allow-Origin': req.headers.origin || '*',
'Access-Control-Allow-Credentials': 'true'
});
// Add this client to the correct set
const clients = type === 'update' ? fullUpdateClients : fullResetClients;
clients.add(res);
// Send initial connection message
sendProgressToClients(new Set([res]), JSON.stringify({
status: 'running',
operation: 'Initializing connection...'
}));
// Handle client disconnect
req.on('close', () => {
clients.delete(res);
});
});
// Route to cancel active process
router.post('/cancel', (req, res) => {
let killed = false;
// Get the operation type from the request and validate it, so an unknown
// type can't silently fall through to the reset process
const { type } = req.query;
if (!['update', 'reset'].includes(type)) {
return res.status(400).json({ error: 'Invalid operation type' });
}
const clients = type === 'update' ? fullUpdateClients : fullResetClients;
const activeProcess = type === 'update' ? activeFullUpdate : activeFullReset;
if (activeProcess) {
try {
activeProcess.kill('SIGTERM');
if (type === 'update') {
activeFullUpdate = null;
} else {
activeFullReset = null;
}
killed = true;
sendProgressToClients(clients, JSON.stringify({
status: 'cancelled',
operation: 'Operation cancelled'
}));
} catch (err) {
console.error(`Error killing ${type} process:`, err);
}
}
if (killed) {
res.json({ success: true });
} else {
res.status(404).json({ error: 'No active process to cancel' });
}
});
// POST /csv/full-update - Run full update script
router.post('/full-update', async (req, res) => {
try {
const scriptPath = path.join(__dirname, '../../scripts/full-update.js');
runScript(scriptPath, 'update', fullUpdateClients)
.catch(error => {
console.error('Update failed:', error);
});
res.status(202).json({ message: 'Update started' });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
// POST /csv/full-reset - Run full reset script
router.post('/full-reset', async (req, res) => {
try {
const scriptPath = path.join(__dirname, '../../scripts/full-reset.js');
runScript(scriptPath, 'reset', fullResetClients)
.catch(error => {
console.error('Reset failed:', error);
});
res.status(202).json({ message: 'Reset started' });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
// GET /history/import - Get recent import history
router.get('/history/import', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
id,
start_time,
end_time,
status,
error_message,
records_added::integer,
records_updated::integer
FROM import_history
ORDER BY start_time DESC
LIMIT 20
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching import history:', error);
res.status(500).json({ error: error.message });
}
});
// GET /history/calculate - Get recent calculation history
router.get('/history/calculate', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
id,
start_time,
end_time,
duration_minutes,
status,
error_message,
total_products,
total_orders,
total_purchase_orders,
processed_products,
processed_orders,
processed_purchase_orders,
additional_info
FROM calculate_history
ORDER BY start_time DESC
LIMIT 20
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching calculate history:', error);
res.status(500).json({ error: error.message });
}
});
// GET /status/modules - Get module calculation status
router.get('/status/modules', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
module_name,
last_calculation_timestamp::timestamp
FROM calculate_status
ORDER BY module_name
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching module status:', error);
res.status(500).json({ error: error.message });
}
});
// GET /status/tables - Get table sync status
router.get('/status/tables', async (req, res) => {
try {
const pool = req.app.locals.pool;
const { rows } = await pool.query(`
SELECT
table_name,
last_sync_timestamp::timestamp
FROM sync_status
ORDER BY table_name
`);
res.json(rows || []);
} catch (error) {
console.error('Error fetching table status:', error);
res.status(500).json({ error: error.message });
}
});
// GET /status/table-counts - Get record counts for all tables
router.get('/status/table-counts', async (req, res) => {
try {
const pool = req.app.locals.pool;
const tables = [
// Core tables
'products', 'categories', 'product_categories', 'orders', 'purchase_orders',
// New metrics tables
'product_metrics', 'daily_product_snapshots','brand_metrics','category_metrics','vendor_metrics',
// Config tables
'settings_global', 'settings_vendor', 'settings_product'
];
const counts = await Promise.all(
tables.map(table =>
// Interpolating the table name into SQL is safe here only because the
// list above is hardcoded; never do this with user-supplied input
pool.query(`SELECT COUNT(*) as count FROM ${table}`)
.then(result => ({
table_name: table,
count: parseInt(result.rows[0].count)
}))
.catch(err => ({
table_name: table,
count: null,
error: err.message
}))
)
);
// Group tables by type
const groupedCounts = {
core: counts.filter(c => ['products', 'categories', 'product_categories', 'orders', 'purchase_orders'].includes(c.table_name)),
metrics: counts.filter(c => ['product_metrics', 'daily_product_snapshots','brand_metrics','category_metrics','vendor_metrics'].includes(c.table_name)),
config: counts.filter(c => ['settings_global', 'settings_vendor', 'settings_product'].includes(c.table_name))
};
res.json(groupedCounts);
} catch (error) {
console.error('Error fetching table counts:', error);
res.status(500).json({ error: error.message });
}
});
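The grouping step at the end of `/status/table-counts` can be modeled in isolation. The sample counts below are illustrative only:

```javascript
// Standalone model of the core/metrics/config grouping in /status/table-counts.
const counts = [
  { table_name: 'products', count: 120 },
  { table_name: 'product_metrics', count: 118 },
  { table_name: 'settings_global', count: 1 }
];
const groups = {
  core: ['products', 'categories', 'product_categories', 'orders', 'purchase_orders'],
  metrics: ['product_metrics', 'daily_product_snapshots', 'brand_metrics', 'category_metrics', 'vendor_metrics'],
  config: ['settings_global', 'settings_vendor', 'settings_product']
};
// Each group keeps only the count rows whose table belongs to it
const grouped = Object.fromEntries(
  Object.entries(groups).map(([name, tables]) => [
    name,
    counts.filter(c => tables.includes(c.table_name))
  ])
);
console.log(grouped.core.length); // 1
```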
module.exports = router;

View File

@@ -23,10 +23,7 @@ router.get('/brands', async (req, res) => {
     const { rows } = await pool.query(`
       SELECT DISTINCT COALESCE(p.brand, 'Unbranded') as brand
       FROM products p
-      JOIN purchase_orders po ON p.pid = po.pid
       WHERE p.visible = true
-      GROUP BY COALESCE(p.brand, 'Unbranded')
-      HAVING SUM(po.cost_price * po.received) >= 500
       ORDER BY COALESCE(p.brand, 'Unbranded')
     `);
@@ -629,163 +626,6 @@ router.get('/:id', async (req, res) => {
  }
});
// Import products from CSV
router.post('/import', upload.single('file'), async (req, res) => {
if (!req.file) {
return res.status(400).json({ error: 'No file uploaded' });
}
try {
const result = await importProductsFromCSV(req.file.path, req.app.locals.pool);
// Clean up the uploaded file
require('fs').unlinkSync(req.file.path);
res.json(result);
} catch (error) {
console.error('Error importing products:', error);
res.status(500).json({ error: 'Failed to import products' });
}
});
// Update a product
router.put('/:id', async (req, res) => {
const pool = req.app.locals.pool;
try {
const {
title,
sku,
stock_quantity,
price,
regular_price,
cost_price,
vendor,
brand,
categories,
visible,
managing_stock
} = req.body;
const { rowCount } = await pool.query(
`UPDATE products
SET title = $1,
sku = $2,
stock_quantity = $3,
price = $4,
regular_price = $5,
cost_price = $6,
vendor = $7,
brand = $8,
categories = $9,
visible = $10,
managing_stock = $11
WHERE pid = $12`,
[
title,
sku,
stock_quantity,
price,
regular_price,
cost_price,
vendor,
brand,
categories,
visible,
managing_stock,
req.params.id
]
);
if (rowCount === 0) {
return res.status(404).json({ error: 'Product not found' });
}
res.json({ message: 'Product updated successfully' });
} catch (error) {
console.error('Error updating product:', error);
res.status(500).json({ error: 'Failed to update product' });
}
});
// Get product metrics
router.get('/:id/metrics', async (req, res) => {
const pool = req.app.locals.pool;
try {
const { id } = req.params;
// Get metrics from product_metrics table with inventory health data
const { rows: metrics } = await pool.query(`
WITH inventory_status AS (
SELECT
p.pid,
CASE
WHEN pm.daily_sales_avg = 0 THEN 'New'
WHEN p.stock_quantity <= CEIL(pm.daily_sales_avg * 7) THEN 'Critical'
WHEN p.stock_quantity <= CEIL(pm.daily_sales_avg * 14) THEN 'Reorder'
WHEN p.stock_quantity > (pm.daily_sales_avg * 90) THEN 'Overstocked'
ELSE 'Healthy'
END as calculated_status
FROM products p
LEFT JOIN product_metrics pm ON p.pid = pm.pid
WHERE p.pid = $1
)
SELECT
COALESCE(pm.daily_sales_avg, 0) as daily_sales_avg,
COALESCE(pm.weekly_sales_avg, 0) as weekly_sales_avg,
COALESCE(pm.monthly_sales_avg, 0) as monthly_sales_avg,
COALESCE(pm.days_of_inventory, 0) as days_of_inventory,
COALESCE(pm.reorder_point, CEIL(COALESCE(pm.daily_sales_avg, 0) * 14)) as reorder_point,
COALESCE(pm.safety_stock, CEIL(COALESCE(pm.daily_sales_avg, 0) * 7)) as safety_stock,
COALESCE(pm.avg_margin_percent,
((p.price - COALESCE(p.cost_price, 0)) / NULLIF(p.price, 0)) * 100
) as avg_margin_percent,
COALESCE(pm.total_revenue, 0) as total_revenue,
COALESCE(pm.inventory_value, p.stock_quantity * COALESCE(p.cost_price, 0)) as inventory_value,
COALESCE(pm.turnover_rate, 0) as turnover_rate,
COALESCE(pm.abc_class, 'C') as abc_class,
COALESCE(pm.stock_status, inv.calculated_status) as stock_status,
COALESCE(pm.avg_lead_time_days, 0) as avg_lead_time_days,
COALESCE(pm.current_lead_time, 0) as current_lead_time,
COALESCE(pm.target_lead_time, 14) as target_lead_time,
COALESCE(pm.lead_time_status, 'Unknown') as lead_time_status,
COALESCE(pm.reorder_qty, 0) as reorder_qty,
COALESCE(pm.overstocked_amt, 0) as overstocked_amt
FROM products p
LEFT JOIN product_metrics pm ON p.pid = pm.pid
-- "is" is a reserved word in PostgreSQL, so the CTE needs a different alias
LEFT JOIN inventory_status inv ON p.pid = inv.pid
WHERE p.pid = $2
`, [id, id]);
if (!metrics.length) {
// Return default metrics structure if no data found
res.json({
daily_sales_avg: 0,
weekly_sales_avg: 0,
monthly_sales_avg: 0,
days_of_inventory: 0,
reorder_point: 0,
safety_stock: 0,
avg_margin_percent: 0,
total_revenue: 0,
inventory_value: 0,
turnover_rate: 0,
abc_class: 'C',
stock_status: 'New',
avg_lead_time_days: 0,
current_lead_time: 0,
target_lead_time: 14,
lead_time_status: 'Unknown',
reorder_qty: 0,
overstocked_amt: 0
});
return;
}
res.json(metrics[0]);
} catch (error) {
console.error('Error fetching product metrics:', error);
res.status(500).json({ error: 'Failed to fetch product metrics' });
}
});
// Get product time series data
router.get('/:id/time-series', async (req, res) => {
  const { id } = req.params;

@@ -8,7 +8,7 @@ const { initPool } = require('./utils/db');
 const productsRouter = require('./routes/products');
 const dashboardRouter = require('./routes/dashboard');
 const ordersRouter = require('./routes/orders');
-const csvRouter = require('./routes/csv');
+const csvRouter = require('./routes/data-management');
 const analyticsRouter = require('./routes/analytics');
 const purchaseOrdersRouter = require('./routes/purchase-orders');
 const configRouter = require('./routes/config');

@@ -0,0 +1,239 @@
const { Client } = require('ssh2');
const mysql = require('mysql2/promise');
const fs = require('fs');
// Connection pooling and cache configuration
const connectionCache = {
ssh: null,
dbConnection: null,
lastUsed: 0,
isConnecting: false,
connectionPromise: null,
// Cache expiration time in milliseconds (5 minutes)
expirationTime: 5 * 60 * 1000,
// Cache for query results (key: query string, value: {data, timestamp})
queryCache: new Map(),
// Cache duration for different query types in milliseconds
cacheDuration: {
'field-options': 30 * 60 * 1000, // 30 minutes for field options
'product-lines': 10 * 60 * 1000, // 10 minutes for product lines
'sublines': 10 * 60 * 1000, // 10 minutes for sublines
'taxonomy': 30 * 60 * 1000, // 30 minutes for taxonomy data
'default': 60 * 1000 // 1 minute default
}
};
/**
* Get a database connection with connection pooling
* @returns {Promise<{ssh: object, connection: object}>} The SSH and database connection
*/
async function getDbConnection() {
const now = Date.now();
// Check if we need to refresh the connection due to inactivity
const needsRefresh = !connectionCache.ssh ||
!connectionCache.dbConnection ||
(now - connectionCache.lastUsed > connectionCache.expirationTime);
// If connection is still valid, update last used time and return existing connection
if (!needsRefresh) {
connectionCache.lastUsed = now;
return {
ssh: connectionCache.ssh,
connection: connectionCache.dbConnection
};
}
// If another request is already establishing a connection, wait for that promise
if (connectionCache.isConnecting && connectionCache.connectionPromise) {
try {
await connectionCache.connectionPromise;
return {
ssh: connectionCache.ssh,
connection: connectionCache.dbConnection
};
} catch (error) {
// If that connection attempt failed, we'll try again below
console.error('Error waiting for existing connection:', error);
}
}
// Close existing connections if they exist
if (connectionCache.dbConnection) {
try {
await connectionCache.dbConnection.end();
} catch (error) {
console.error('Error closing existing database connection:', error);
}
}
if (connectionCache.ssh) {
try {
connectionCache.ssh.end();
} catch (error) {
console.error('Error closing existing SSH connection:', error);
}
}
// Mark that we're establishing a new connection
connectionCache.isConnecting = true;
// Create a new promise for this connection attempt
connectionCache.connectionPromise = setupSshTunnel().then(tunnel => {
const { ssh, stream, dbConfig } = tunnel;
return mysql.createConnection({
...dbConfig,
stream
}).then(connection => {
// Store the new connections
connectionCache.ssh = ssh;
connectionCache.dbConnection = connection;
connectionCache.lastUsed = Date.now();
connectionCache.isConnecting = false;
return {
ssh,
connection
};
});
}).catch(error => {
connectionCache.isConnecting = false;
throw error;
});
// Wait for the connection to be established
return connectionCache.connectionPromise;
}
/**
* Get cached query results or execute query if not cached
* @param {string} cacheKey - Unique key to identify the query
* @param {string} queryType - Type of query (field-options, product-lines, etc.)
* @param {Function} queryFn - Function to execute if cache miss
* @returns {Promise<any>} The query result
*/
async function getCachedQuery(cacheKey, queryType, queryFn) {
// Get cache duration based on query type
const cacheDuration = connectionCache.cacheDuration[queryType] || connectionCache.cacheDuration.default;
// Check if we have a valid cached result
const cachedResult = connectionCache.queryCache.get(cacheKey);
const now = Date.now();
if (cachedResult && (now - cachedResult.timestamp < cacheDuration)) {
console.log(`Cache hit for ${queryType} query: ${cacheKey}`);
return cachedResult.data;
}
// No valid cache found, execute the query
console.log(`Cache miss for ${queryType} query: ${cacheKey}`);
const result = await queryFn();
// Cache the result
connectionCache.queryCache.set(cacheKey, {
data: result,
timestamp: now
});
return result;
}
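The caching behaviour above reduces to a small TTL pattern. This is a simplified synchronous model of it (the real `getCachedQuery` is async and picks the TTL per query type):

```javascript
// Simplified synchronous model of getCachedQuery's TTL cache: a repeat call
// within the TTL window returns the cached value without re-running queryFn.
function cachedCall(cache, key, ttlMs, queryFn) {
  const hit = cache.get(key);
  const now = Date.now();
  if (hit && now - hit.timestamp < ttlMs) return hit.data; // cache hit
  const data = queryFn();                                  // cache miss
  cache.set(key, { data, timestamp: now });
  return data;
}

// Usage: queryFn runs only once for two calls inside the TTL window.
const cache = new Map();
let calls = 0;
const queryFn = () => { calls += 1; return ['row1', 'row2']; };
const first = cachedCall(cache, 'field-options', 30 * 60 * 1000, queryFn);
const second = cachedCall(cache, 'field-options', 30 * 60 * 1000, queryFn);
console.log(calls); // 1
```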
/**
* Setup SSH tunnel to production database
* @private - Should only be used by getDbConnection
* @returns {Promise<{ssh: object, stream: object, dbConfig: object}>}
*/
async function setupSshTunnel() {
const sshConfig = {
host: process.env.PROD_SSH_HOST,
port: process.env.PROD_SSH_PORT || 22,
username: process.env.PROD_SSH_USER,
privateKey: process.env.PROD_SSH_KEY_PATH
? fs.readFileSync(process.env.PROD_SSH_KEY_PATH)
: undefined,
compress: true
};
const dbConfig = {
host: process.env.PROD_DB_HOST || 'localhost',
user: process.env.PROD_DB_USER,
password: process.env.PROD_DB_PASSWORD,
database: process.env.PROD_DB_NAME,
port: process.env.PROD_DB_PORT || 3306,
timezone: 'Z'
};
return new Promise((resolve, reject) => {
const ssh = new Client();
ssh.on('error', (err) => {
console.error('SSH connection error:', err);
reject(err);
});
ssh.on('ready', () => {
ssh.forwardOut(
'127.0.0.1',
0,
dbConfig.host,
dbConfig.port,
(err, stream) => {
if (err) return reject(err);
resolve({ ssh, stream, dbConfig });
}
);
}).connect(sshConfig);
});
}
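`setupSshTunnel` reads its connection settings from the environment. A `.env` along these lines would satisfy it; every value here is a placeholder, and `PROD_SSH_KEY_PATH` can be omitted (the code then passes `privateKey: undefined`):

```ini
# SSH tunnel endpoint (placeholder values)
PROD_SSH_HOST=example.prod.host
PROD_SSH_PORT=22
PROD_SSH_USER=deploy
PROD_SSH_KEY_PATH=/home/deploy/.ssh/id_rsa

# MySQL reached through the tunnel
PROD_DB_HOST=localhost
PROD_DB_PORT=3306
PROD_DB_USER=reporting
PROD_DB_PASSWORD=changeme
PROD_DB_NAME=shop
```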
/**
* Clear cached query results
* @param {string} [cacheKey] - Specific cache key to clear (clears all if not provided)
*/
function clearQueryCache(cacheKey) {
if (cacheKey) {
connectionCache.queryCache.delete(cacheKey);
console.log(`Cleared cache for key: ${cacheKey}`);
} else {
connectionCache.queryCache.clear();
console.log('Cleared all query cache');
}
}
/**
* Force close all active connections
* Useful for server shutdown or manual connection reset
*/
async function closeAllConnections() {
if (connectionCache.dbConnection) {
try {
await connectionCache.dbConnection.end();
console.log('Closed database connection');
} catch (error) {
console.error('Error closing database connection:', error);
}
connectionCache.dbConnection = null;
}
if (connectionCache.ssh) {
try {
connectionCache.ssh.end();
console.log('Closed SSH connection');
} catch (error) {
console.error('Error closing SSH connection:', error);
}
connectionCache.ssh = null;
}
connectionCache.lastUsed = 0;
connectionCache.isConnecting = false;
connectionCache.connectionPromise = null;
}
module.exports = {
getDbConnection,
getCachedQuery,
clearQueryCache,
closeAllConnections
};

@@ -38,21 +38,22 @@ export function CategoryPerformance() {
       const rawData = await response.json();
       return {
         performance: rawData.performance.map((item: any) => ({
-          ...item,
-          categoryPath: item.categoryPath || item.category,
+          category: item.category || '',
+          categoryPath: item.categoryPath || item.categorypath || item.category || '',
           revenue: Number(item.revenue) || 0,
           profit: Number(item.profit) || 0,
           growth: Number(item.growth) || 0,
-          productCount: Number(item.productCount) || 0
+          productCount: Number(item.productCount) || Number(item.productcount) || 0
         })),
         distribution: rawData.distribution.map((item: any) => ({
-          ...item,
-          categoryPath: item.categoryPath || item.category,
+          category: item.category || '',
+          categoryPath: item.categoryPath || item.categorypath || item.category || '',
           value: Number(item.value) || 0
         })),
         trends: rawData.trends.map((item: any) => ({
-          ...item,
-          categoryPath: item.categoryPath || item.category,
+          category: item.category || '',
+          categoryPath: item.categoryPath || item.categorypath || item.category || '',
+          month: item.month || '',
           sales: Number(item.sales) || 0
         }))
       };
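The dual lookups such as `item.categoryPath || item.categorypath` exist because PostgreSQL folds unquoted column aliases to lowercase, so the API may return either casing. A hypothetical helper (not in the codebase) makes the pattern explicit:

```javascript
// Hypothetical helper for the dual-casing fallback: PostgreSQL lowercases
// unquoted aliases, so a row may carry productcount instead of productCount.
function pickNumber(row, key) {
  const value = row[key] ?? row[key.toLowerCase()];
  return Number(value) || 0;
}

console.log(pickNumber({ productCount: 12 }, 'productCount'));  // 12
console.log(pickNumber({ productcount: '7' }, 'productCount')); // 7
console.log(pickNumber({}, 'revenue'));                         // 0
```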

@@ -25,41 +25,91 @@ interface PriceData {
 }
 export function PriceAnalysis() {
-  const { data, isLoading } = useQuery<PriceData>({
+  const { data, isLoading, error } = useQuery<PriceData>({
     queryKey: ['price-analysis'],
     queryFn: async () => {
-      const response = await fetch(`${config.apiUrl}/analytics/pricing`);
-      if (!response.ok) {
-        throw new Error('Failed to fetch price analysis');
-      }
-      const rawData = await response.json();
-      return {
-        pricePoints: rawData.pricePoints.map((item: any) => ({
-          ...item,
-          price: Number(item.price) || 0,
-          salesVolume: Number(item.salesVolume) || 0,
-          revenue: Number(item.revenue) || 0
-        })),
-        elasticity: rawData.elasticity.map((item: any) => ({
-          ...item,
-          price: Number(item.price) || 0,
-          demand: Number(item.demand) || 0
-        })),
-        recommendations: rawData.recommendations.map((item: any) => ({
-          ...item,
-          currentPrice: Number(item.currentPrice) || 0,
-          recommendedPrice: Number(item.recommendedPrice) || 0,
-          potentialRevenue: Number(item.potentialRevenue) || 0,
-          confidence: Number(item.confidence) || 0
-        }))
-      };
+      try {
+        const response = await fetch(`${config.apiUrl}/analytics/pricing`);
+        if (!response.ok) {
+          throw new Error(`Failed to fetch: ${response.status}`);
+        }
+        const rawData = await response.json();
+        if (!rawData || !rawData.pricePoints) {
+          return {
+            pricePoints: [],
+            elasticity: [],
+            recommendations: []
+          };
+        }
+        return {
+          pricePoints: (rawData.pricePoints || []).map((item: any) => ({
+            price: Number(item.price) || 0,
+            salesVolume: Number(item.salesVolume || item.salesvolume) || 0,
+            revenue: Number(item.revenue) || 0,
+            category: item.category || ''
+          })),
+          elasticity: (rawData.elasticity || []).map((item: any) => ({
+            date: item.date || '',
+            price: Number(item.price) || 0,
+            demand: Number(item.demand) || 0
+          })),
+          recommendations: (rawData.recommendations || []).map((item: any) => ({
+            product: item.product || '',
+            currentPrice: Number(item.currentPrice || item.currentprice) || 0,
+            recommendedPrice: Number(item.recommendedPrice || item.recommendedprice) || 0,
+            potentialRevenue: Number(item.potentialRevenue || item.potentialrevenue) || 0,
+            confidence: Number(item.confidence) || 0
+          }))
+        };
+      } catch (err) {
+        console.error('Error fetching price data:', err);
+        throw err;
+      }
     },
+    retry: 1
   });
-  if (isLoading || !data) {
+  if (isLoading) {
     return <div>Loading price analysis...</div>;
   }
+  if (error || !data) {
+    return (
+      <Card className="mb-4">
+        <CardHeader>
+          <CardTitle>Price Analysis</CardTitle>
+        </CardHeader>
+        <CardContent>
+          <p className="text-red-500">
+            Unable to load price analysis. The price metrics may need to be set up in the database.
+          </p>
+        </CardContent>
+      </Card>
+    );
+  }
+  // Early return if no data to display
+  if (
+    data.pricePoints.length === 0 &&
+    data.elasticity.length === 0 &&
+    data.recommendations.length === 0
+  ) {
+    return (
+      <Card className="mb-4">
+        <CardHeader>
+          <CardTitle>Price Analysis</CardTitle>
+        </CardHeader>
+        <CardContent>
+          <p className="text-muted-foreground">
+            No price data available. This may be because the price metrics haven't been calculated yet.
+          </p>
+        </CardContent>
+      </Card>
+    );
+  }
   return (
     <div className="grid gap-4">
       <div className="grid gap-4 md:grid-cols-2">

@@ -38,22 +38,23 @@ export function ProfitAnalysis() {
       const rawData = await response.json();
       return {
         byCategory: rawData.byCategory.map((item: any) => ({
-          ...item,
-          categoryPath: item.categoryPath || item.category,
-          profitMargin: Number(item.profitMargin) || 0,
+          category: item.category || '',
+          categoryPath: item.categorypath || item.category || '',
+          profitMargin: item.profitmargin !== null ? Number(item.profitmargin) : 0,
           revenue: Number(item.revenue) || 0,
           cost: Number(item.cost) || 0
         })),
         overTime: rawData.overTime.map((item: any) => ({
-          ...item,
-          profitMargin: Number(item.profitMargin) || 0,
+          date: item.date || '',
+          profitMargin: item.profitmargin !== null ? Number(item.profitmargin) : 0,
           revenue: Number(item.revenue) || 0,
           cost: Number(item.cost) || 0
         })),
         topProducts: rawData.topProducts.map((item: any) => ({
-          ...item,
-          categoryPath: item.categoryPath || item.category,
-          profitMargin: Number(item.profitMargin) || 0,
+          product: item.product || '',
+          category: item.category || '',
+          categoryPath: item.categorypath || item.category || '',
+          profitMargin: item.profitmargin !== null ? Number(item.profitmargin) : 0,
           revenue: Number(item.revenue) || 0,
           cost: Number(item.cost) || 0
         }))

@@ -28,42 +28,93 @@ interface StockData {
 }
 export function StockAnalysis() {
-  const { data, isLoading } = useQuery<StockData>({
+  const { data, isLoading, error } = useQuery<StockData>({
     queryKey: ['stock-analysis'],
     queryFn: async () => {
-      const response = await fetch(`${config.apiUrl}/analytics/stock`);
-      if (!response.ok) {
-        throw new Error('Failed to fetch stock analysis');
-      }
-      const rawData = await response.json();
-      return {
-        turnoverByCategory: rawData.turnoverByCategory.map((item: any) => ({
-          ...item,
-          turnoverRate: Number(item.turnoverRate) || 0,
-          averageStock: Number(item.averageStock) || 0,
-          totalSales: Number(item.totalSales) || 0
-        })),
-        stockLevels: rawData.stockLevels.map((item: any) => ({
-          ...item,
-          inStock: Number(item.inStock) || 0,
-          lowStock: Number(item.lowStock) || 0,
-          outOfStock: Number(item.outOfStock) || 0
-        })),
-        criticalItems: rawData.criticalItems.map((item: any) => ({
-          ...item,
-          stockQuantity: Number(item.stockQuantity) || 0,
-          reorderPoint: Number(item.reorderPoint) || 0,
-          turnoverRate: Number(item.turnoverRate) || 0,
-          daysUntilStockout: Number(item.daysUntilStockout) || 0
-        }))
-      };
+      try {
+        const response = await fetch(`${config.apiUrl}/analytics/stock`);
+        if (!response.ok) {
+          throw new Error(`Failed to fetch: ${response.status}`);
+        }
+        const rawData = await response.json();
+        if (!rawData || !rawData.turnoverByCategory) {
+          return {
+            turnoverByCategory: [],
+            stockLevels: [],
+            criticalItems: []
+          };
+        }
+        return {
+          turnoverByCategory: (rawData.turnoverByCategory || []).map((item: any) => ({
+            category: item.category || '',
+            turnoverRate: Number(item.turnoverRate || item.turnoverrate) || 0,
+            averageStock: Number(item.averageStock || item.averagestock) || 0,
+            totalSales: Number(item.totalSales || item.totalsales) || 0
+          })),
+          stockLevels: (rawData.stockLevels || []).map((item: any) => ({
+            date: item.date || '',
+            inStock: Number(item.inStock || item.instock) || 0,
+            lowStock: Number(item.lowStock || item.lowstock) || 0,
+            outOfStock: Number(item.outOfStock || item.outofstock) || 0
+          })),
+          criticalItems: (rawData.criticalItems || []).map((item: any) => ({
+            product: item.product || '',
+            sku: item.sku || '',
+            stockQuantity: Number(item.stockQuantity || item.stockquantity) || 0,
+            reorderPoint: Number(item.reorderPoint || item.reorderpoint) || 0,
+            turnoverRate: Number(item.turnoverRate || item.turnoverrate) || 0,
+            daysUntilStockout: Number(item.daysUntilStockout || item.daysuntilstockout) || 0
+          }))
+        };
+      } catch (err) {
+        console.error('Error fetching stock data:', err);
+        throw err;
+      }
     },
+    retry: 1
   });
-  if (isLoading || !data) {
+  if (isLoading) {
     return <div>Loading stock analysis...</div>;
   }
+  if (error || !data) {
+    return (
+      <Card className="mb-4">
+        <CardHeader>
+          <CardTitle>Stock Analysis</CardTitle>
+        </CardHeader>
+        <CardContent>
+          <p className="text-red-500">
+            Unable to load stock analysis. The stock metrics may need to be set up in the database.
+          </p>
+        </CardContent>
+      </Card>
+    );
+  }
+  // Early return if no data to display
+  if (
+    data.turnoverByCategory.length === 0 &&
+    data.stockLevels.length === 0 &&
+    data.criticalItems.length === 0
+  ) {
+    return (
+      <Card className="mb-4">
+        <CardHeader>
+          <CardTitle>Stock Analysis</CardTitle>
+        </CardHeader>
+        <CardContent>
+          <p className="text-muted-foreground">
+            No stock data available. This may be because the stock metrics haven't been calculated yet.
+          </p>
+        </CardContent>
+      </Card>
+    );
+  }
   const getStockStatus = (daysUntilStockout: number) => {
     if (daysUntilStockout <= 7) {
       return <Badge variant="destructive">Critical</Badge>;

@@ -58,22 +58,22 @@ export function VendorPerformance() {
       // Create a complete structure even if some parts are missing
       const data: VendorData = {
         performance: rawData.performance.map((vendor: any) => ({
-          vendor: vendor.vendor,
-          salesVolume: Number(vendor.salesVolume) || 0,
-          profitMargin: Number(vendor.profitMargin) || 0,
-          stockTurnover: Number(vendor.stockTurnover) || 0,
+          vendor: vendor.vendor || '',
+          salesVolume: vendor.salesVolume !== null ? Number(vendor.salesVolume) : 0,
+          profitMargin: vendor.profitMargin !== null ? Number(vendor.profitMargin) : 0,
+          stockTurnover: vendor.stockTurnover !== null ? Number(vendor.stockTurnover) : 0,
           productCount: Number(vendor.productCount) || 0,
-          growth: Number(vendor.growth) || 0
+          growth: vendor.growth !== null ? Number(vendor.growth) : 0
         })),
         comparison: rawData.comparison?.map((vendor: any) => ({
-          vendor: vendor.vendor,
-          salesPerProduct: Number(vendor.salesPerProduct) || 0,
-          averageMargin: Number(vendor.averageMargin) || 0,
+          vendor: vendor.vendor || '',
+          salesPerProduct: vendor.salesPerProduct !== null ? Number(vendor.salesPerProduct) : 0,
+          averageMargin: vendor.averageMargin !== null ? Number(vendor.averageMargin) : 0,
           size: Number(vendor.size) || 0
         })) || [],
         trends: rawData.trends?.map((vendor: any) => ({
-          vendor: vendor.vendor,
-          month: vendor.month,
+          vendor: vendor.vendor || '',
+          month: vendor.month || '',
           sales: Number(vendor.sales) || 0
         })) || []
       };