Compare commits
12 Commits
Improve-ca
...
f4f6215d03
| Author | SHA1 | Date | |
|---|---|---|---|
| | f4f6215d03 | | |
| | a9bccd4d01 | | |
| | 90379386d6 | | |
| | 09f7103472 | | |
| | d8fd64cf62 | | |
| | 619409847d | | |
| | eea57528ab | | |
| | 3d2d1b3946 | | |
| | d936d50f83 | | |
| | 610e26689c | | |
| | 7ff757203f | | |
| | 843ce71506 | | |
@@ -1,185 +0,0 @@
1. **Missing Updates for Reorder Point and Safety Stock** [RESOLVED - product-metrics.js]

- **Problem:** In the **product_metrics** table (used by the inventory health view), the fields **reorder_point** and **safety_stock** are never updated in the product metrics calculations. Although a helper function (`calculateReorderQuantities`) exists and computes these values, the update query in the `calculateProductMetrics` function does not assign any values to these columns.
- **Effect:** The inventory health view relies on these fields (using COALESCE to default them to 0), which means that stock might never be classified as "Reorder" or "Healthy" based on the proper reorder point or safety stock calculations.
- **Example:** Even if a product's base metrics would require a reorder (for example, if its days of inventory are low), the view always shows a value of 0 for reorder_point and safety_stock.
- **Fix:** Update the product metrics query (or add a subsequent update) so that **pm.reorder_point** and **pm.safety_stock** are calculated (for instance, by integrating the logic from `calculateReorderQuantities`) and stored in the table.

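The calculation being persisted could follow textbook inventory formulas. A minimal sketch, assuming a normal-demand safety-stock model; `computeReorderLevels` and its parameters are illustrative, not taken from `calculateReorderQuantities`:

```javascript
// Illustrative sketch of the values the update query needs to persist.
// Assumed formulas (the real calculateReorderQuantities may differ):
//   safety stock  = Z * stddev of daily demand * sqrt(lead time in days)
//   reorder point = average daily demand * lead time + safety stock
function computeReorderLevels({ avgDailyDemand, demandStdDev, leadTimeDays, serviceZ = 1.65 }) {
  const safetyStock = Math.ceil(serviceZ * demandStdDev * Math.sqrt(leadTimeDays));
  const reorderPoint = Math.ceil(avgDailyDemand * leadTimeDays + safetyStock);
  return { safetyStock, reorderPoint };
}
```

The computed pair would then be written back with something like `UPDATE product_metrics SET reorder_point = ?, safety_stock = ? WHERE pid = ?` after the base metrics insert.
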
2. **Overwritten Module Exports When Combining Scripts** [RESOLVED - calculate-metrics.js]

- **Problem:** The code provided shows two distinct exports. The main metrics calculation module exports `calculateMetrics` (along with cancel and getProgress helpers), but later in the same concatenated file the module exports are overwritten.
- **Effect:** If these two code sections end up in a single module file, the export for the main calculation will be lost. This would break any code that calls the overall metrics calculation.
- **Example:** An external caller expecting to run `calculateMetrics` would instead receive the `calculateProductMetrics` function.
- **Fix:** Make sure each script resides in its own module file. Verify that the module boundaries and exports are not accidentally merged or overwritten when deployed.

3. **Potential Formula Issue in EOQ Calculation (Reorder Qty)** [RESOLVED - product-metrics.js]

- **Problem:** The helper function `calculateReorderQuantities` uses an EOQ formula with a holding cost expressed as a percentage (0.25) rather than a per-unit cost.
- **Effect:** If the intent was to use the traditional EOQ formula (which expects a holding cost per unit rather than a percentage), this could lead to an incorrect reorder quantity.
- **Example:** For a given annual demand and fixed order cost, the computed reorder quantity might be higher or lower than expected.
- **Fix:** Double-check the EOQ formula. If the intention is to compute based on a percentage, then document that clearly; otherwise, adjust the formula to use the proper holding cost value.

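The two readings of the 0.25 diverge sharply. The classical EOQ is `sqrt(2DS/H)` where H is the holding cost per unit per year; if 0.25 is a percentage, H must be derived from the unit cost first. A sketch contrasting the two interpretations (illustrative numbers, not from the codebase):

```javascript
// Classical EOQ: sqrt(2 * D * S / H), with H = holding cost per unit per year.
function eoq(annualDemand, orderCost, holdingCostPerUnit) {
  if (holdingCostPerUnit <= 0) return 0;
  return Math.sqrt((2 * annualDemand * orderCost) / holdingCostPerUnit);
}

// Percentage reading: holding cost is a fraction of unit cost per year
// (e.g. the 0.25 in calculateReorderQuantities, if that is the intent).
function eoqFromPercentage(annualDemand, orderCost, unitCost, holdingRate = 0.25) {
  return eoq(annualDemand, orderCost, unitCost * holdingRate);
}
```

With D = 1000, S = 50 and a $20 unit cost, treating 0.25 as a literal per-unit cost yields an EOQ of about 632 units, while the percentage reading (H = $5/unit) yields about 141, which shows why the ambiguity matters.
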
4. **Potential Overlap or Redundancy in GMROI Calculation** [RESOLVED - time-aggregates.js]

- **Problem:** In the time aggregates function, GMROI is calculated in two steps. The initial INSERT query computes GMROI as `CASE WHEN s.inventory_value > 0 THEN (s.total_revenue - s.total_cost) / s.inventory_value ELSE 0 END`, and a subsequent UPDATE query then recalculates it as an annualized value using gross profit and active days.
- **Effect:** Overwriting a computed value may be intentional to refine the metric, but if not coordinated it can cause confusion or unexpected output in the `product_time_aggregates` table.
- **Example:** A product's GMROI might first appear as a simple ratio but then be updated to a scaled value based on the number of active days, which could lead to inconsistent reporting if not documented.
- **Fix:** Consolidated the GMROI calculation into a single step in the initial INSERT query, properly handling annualization and NULL values.

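The consolidated single-step metric can be expressed as one function. A sketch assuming the annualization scales the observed window up to 365 days (the field names mirror the SQL above; the exact guard conditions in the resolved query may differ):

```javascript
// One-step GMROI: gross profit over inventory value, annualized by scaling
// the observed window (activeDays) up to a full year. Returns 0 when
// inventory value or active days are missing, mirroring the SQL NULL/zero
// handling in the CASE expression.
function annualizedGmroi(totalRevenue, totalCost, inventoryValue, activeDays) {
  if (!inventoryValue || inventoryValue <= 0 || !activeDays || activeDays <= 0) return 0;
  const grossProfit = totalRevenue - totalCost;
  return (grossProfit / inventoryValue) * (365 / activeDays);
}
```
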
5. **Handling of Products Without Orders or Purchase Data** [RESOLVED - time-aggregates.js]

- **Problem:** In the INSERT query of the time aggregates function, the UNION covers two cases: one for products with order data (from `monthly_sales`) and one for products that have entries in `monthly_stock` but no matching order data.
- **Effect:** If a product has neither orders nor purchase orders, it won't get an entry in `product_time_aggregates`. Depending on business rules, this might be acceptable or might mean missing data.
- **Example:** A product that's new or rarely ordered might not appear in the time aggregates view, potentially affecting downstream calculations.
- **Fix:** Added an `all_products` CTE and modified the JOIN structure to ensure every product gets an entry with appropriate default values, even if it has no orders or purchase orders.

6. **Redundant Recalculation of Vendor Metrics**

- **Problem:** As in prior scripts, cumulative metrics (like **total_revenue** and **total_cost**) are calculated across multiple query steps without validation or optimization. In the vendor metrics script, calculations for total revenue and margin are performed within a `WITH` clause that is then reused in other parts of the process, making it more complex than needed.
- **Effect:** There's unnecessary duplication in querying the same data multiple times across subqueries. It could result in decreased performance and may even lead to excess computation if the subqueries are not optimized or correctly indexed.
- **Example:** Vendor sales and vendor purchase orders (PO) metrics are calculated in separate `WITH` clauses, leading to repeated calculations.
- **Fix:** Consolidate the required metrics into fewer queries or reuse the results within the `WITH` clause itself. Avoid recalculating **revenue** and **cost** unless truly necessary.

7. **Handling Products Without Orders or Purchase Orders**

- **Problem:** In your `calculateVendorMetrics` script, the initial insert for vendor sales doesn't fully address products that might not have matching orders or purchase orders. If a vendor has products without any sales within the last 12 months, the results may not be fully accurate unless handled explicitly.
- **Effect:** If no orders exist for a product associated with a particular vendor, that product will not contribute to the vendor's metrics, potentially omitting important data when calculating **total_orders** or **total_revenue**.
- **Example:** The scripted statistics fill gaps, but products with no recent purchase or sales orders might not be counted accurately.
- **Fix:** Include logic to handle scenarios where these products still need to be part of the vendor calculation. Use a `LEFT JOIN` wherever possible to account for cases without sales or purchase orders.

8. **Redundant `ON DUPLICATE KEY UPDATE`**

- **Problem:** Multiple queries in the `calculateVendorMetrics` script use `ON DUPLICATE KEY UPDATE` clauses to handle repeated metrics updates. This is useful for ensuring the most up-to-date calculations but can cause inconsistencies if multiple calculations happen for the same product or vendor simultaneously.
- **Effect:** This approach can lead to an inaccurate update of brand-specific data when insertion and update overlap. Each time you add a new batch, an existing entry could be overwritten if not handled correctly.
- **Example:** Vendor country, category, or sales-related metrics could unintentionally update during processing.
- **Fix:** Make the matching of existing rows more robust to avoid unnecessary updates. Ensure that the key used for `ON DUPLICATE KEY` aligns with any foreign key relationships that might indicate an already processed entry.

9. **SQL Query Performance with Multiple Nested `WITH` Clauses**

- **Problem:** Heavily nested queries (especially **WITH** clauses) may lead to slow performance depending on the size of the dataset.
- **Effect:** Computational burden could be high when the database is large, e.g., querying **purchase orders**, **vendor sales**, and **product info** simultaneously. Even with proper indexes, the deployment might struggle in production environments.
- **Example:** Multiple `WITH` clauses in the vendor and brand metrics calculation scripts might work fine in small datasets but degrade performance in production.
- **Fix:** Combine some subqueries and reduce the layers of computation needed for calculating final metrics. Test performance on a production-sized dataset to see how nested queries are handled.

10. **Missing Updates for Reorder Metrics (Vendor/Brand)**

- **Previously Identified Issue:** Inconsistent updates for **reorder_point** and **safety_stock** across earlier scripts.
- **Current Impact on This Script:** The vendor and brand metrics do not have explicit updates for reorder point or safety stock, which are essential for inventory evaluation.
- **Effect:** The correct thresholds and reorder logic for vendor product inventory aren't fully accounted for in these scripts.
- **Fix:** Integrate relevant logic to update **reorder_point** or **safety_stock** within the vendor and brand metrics calculations. Ensure that it's consistently computed and stored.

11. **Data Integrity and Consistency When Tracking Sales Growth or Performance**

- **Problem:** Brand metrics include a sales growth clause where results can be skewed severely if period data varies considerably.
- **Effect:** If period boundaries are incorrect or records are missing, this can create drastic growth rate calculations.
- **Example:** If the "previous" period has no sales but the "current" period has a substantial increase, the growth rate will show as **100%**.
- **Fix:** Implement checks that ensure both periods are valid so that growth is calculated accurately rather than driven by outliers. Where gaps are consistent, report a no-growth rate or a meaningful zero instead.

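The guard logic can be made explicit in a small helper. A sketch, assuming the convention described above (a zero or missing previous period maps to a 100% sentinel when current sales exist and 0 otherwise; `growthRatePercent` is illustrative):

```javascript
// Growth rate with guards: a zero/missing previous period yields a sentinel
// instead of a division by zero or a wildly skewed percentage.
function growthRatePercent(currentRevenue, previousRevenue) {
  if (!previousRevenue || previousRevenue <= 0) {
    return currentRevenue > 0 ? 100 : 0;
  }
  return ((currentRevenue - previousRevenue) / previousRevenue) * 100;
}
```
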
12. **Exclusion of Vendors With No Sales**

The vendor metrics query is driven by the `vendor_sales` CTE, which aggregates data only for vendors that have orders in the past 12 months.

- **Impact:** Vendors that have purchase activity (or simply exist in vendor_details) but no recent sales won't show up in vendor_metrics. This could cause the frontend to miss metrics for vendors that might still be important.
- **Fix:** Consider adding a UNION or changing the driving set so that all vendors (for example, from vendor_details) are included, even if they have zero sales.

13. **Identical Formulas for On-Time Delivery and Order Fill Rates**

Both metrics are calculated as `(received_orders / total_orders) * 100`.

- **Impact:** If the business expects these to be distinct (for example, one might factor in on-time receipt versus mere receipt), then showing identical values on the frontend could be misleading.
- **Fix:** Verify and adjust the formulas if on-time delivery and order fill rates should be computed differently.

14. **Handling Nulls and Defaults in Aggregations**

The query uses COALESCE in most places, but be sure that every aggregated value (like average lead time) correctly defaults when no data is present.

- **Impact:** Incorrect defaults might cause odd or missing numbers on the production interface.
- **Fix:** Double-check that all numeric aggregates reliably default to 0 where needed.

15. **Inconsistent Stock Filtering Conditions**

In the main brand metrics query the CTE filters products with the condition `p.stock_quantity <= 5000 AND p.stock_quantity >= 0`, whereas in the brand time-based metrics query the condition is only `p.stock_quantity <= 5000`.

- **Impact:** This discrepancy may lead to inconsistent numbers (for example, if any products have negative stock, which might be due to data issues) between overall brand metrics and time-based metrics on the frontend.
- **Fix:** Standardize the filtering criteria so that both queries treat out-of-range stock values in the same way.

16. **Growth Rate Calculation Periods**

The growth rate is computed by comparing revenue from the last 3 months ("current") against a period from 15 to 12 months ago ("previous").

- **Impact:** This narrow window may not reflect typical year-over-year performance and could lead to volatile or unexpected growth percentages on the frontend.
- **Fix:** Revisit the business logic for growth; if a longer or different comparison period is preferred, adjust the date intervals accordingly.

17. **Potential NULLs in Aggregated Time-Based Metrics**

In the brand time-based metrics query, aggregate expressions such as `SUM(o.quantity * o.price)` aren't wrapped with COALESCE.

- **Impact:** If there are no orders for a given brand/month, these sums might return NULL rather than 0, which could propagate into the frontend display.
- **Fix:** Wrap such aggregates in COALESCE (e.g. `COALESCE(SUM(o.quantity * o.price), 0)`) to ensure a default numeric value.

18. **Grouping by Category Status in Base Metrics Insert**

- **Problem:** The INSERT for base category metrics groups by both `c.cat_id` and `c.status` even though the table's primary key is just `category_id`.
- **Effect:** If a category's status changes over time, the grouping may produce unexpected updates (or even multiple groups before the duplicate key update kicks in), possibly causing the wrong status or aggregated figures to be stored.
- **Example:** A category that toggles between "active" and "inactive" might have its metrics calculated differently on different runs.
- **Fix:** Ensure that the grouping keys match the primary key (or that the status update logic is exactly as intended) so that a single row per category is maintained.

19. **Potential Null Handling in Margin Calculations**

- **Problem:** In the query for category time metrics, the calculation of average margin uses expressions such as `SUM(o.quantity * (o.price - GREATEST(p.cost_price, 0)))` without using `COALESCE` on `p.cost_price`.
- **Effect:** If any product's `cost_price` is `NULL`, then `GREATEST(p.cost_price, 0)` returns `NULL` and the resulting sum (and thus the margin) could become `NULL` rather than defaulting to 0. This might lead to missing or misleading margin figures on the frontend.
- **Example:** A product with a missing cost price would make the entire margin expression evaluate to `NULL` even when sales exist.
- **Fix:** Replace `GREATEST(p.cost_price, 0)` with `GREATEST(COALESCE(p.cost_price, 0), 0)` (or simply use `COALESCE(p.cost_price, 0)`) to ensure that missing values are handled.

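The same null-poisoning behavior is easy to reproduce outside SQL. A sketch of the fixed logic in JavaScript (the `totalMargin` helper and its line-item shape are illustrative):

```javascript
// JS analogue of the SQL fix: treat a missing cost_price as 0 so that one
// missing cost does not null out the whole margin sum.
function totalMargin(lines) {
  return lines.reduce((sum, l) => {
    // Equivalent of GREATEST(COALESCE(cost_price, 0), 0)
    const cost = Math.max(l.costPrice ?? 0, 0);
    return sum + l.quantity * (l.price - cost);
  }, 0);
}
```
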
20. **Data Coverage in Growth Rate Calculation**

- **Problem:** The growth rate update depends on multiple CTEs (current period, previous period, and trend analysis) that require a minimum amount of data (for instance, `HAVING COUNT(*) >= 6` in the trend_stats CTE).
- **Effect:** Categories with insufficient historical data will fall into the "ELSE" branch (or may even be skipped if no revenue is present), which might result in a growth rate of 0.0 or an unexpected value.
- **Example:** A newly created category that has only two months of data won't have trend analysis, so its growth rate will be calculated solely by the simple difference, which might not reflect true performance.
- **Fix:** Confirm that this fallback behavior is acceptable for production; if not, adjust the logic so that every category receives a consistent growth rate even with sparse data.

21. **Omission of Forecasts for Zero-Sales Categories**

- **Observation:** The category sales metrics query uses a `HAVING AVG(cs.daily_quantity) > 0` clause.
- **Effect:** Categories without any average daily sales will not receive a forecast record in `category_sales_metrics`. If the frontend expects a row (even with zeros) for every category, this will lead to missing data.
- **Fix:** Verify that it's acceptable for categories with no sales to have no forecast entry. If not, adjust the query so that a default forecast (with zeros) is inserted.

22. **Randomness in Category-Level Forecast Revenue Calculation**

- **Problem:** In the category-level forecasts query, the forecast revenue is multiplied by a factor of `(0.95 + (RAND() * 0.1))`.
- **Effect:** This introduces randomness into the forecast figures so that repeated runs could yield slightly different values. If deterministic forecasts are expected on the production frontend, this could lead to inconsistent displays.
- **Example:** The same category might show a 5% higher forecast on one run and 3% on another because of the random multiplier.
- **Fix:** Confirm that this randomness is intentional for your forecasting model; if forecasts are meant to be reproducible, remove or replace the `RAND()` factor with a fixed multiplier.

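Since `0.95 + RAND() * 0.1` spans 0.95 to 1.05 and centers on 1.0, the natural deterministic replacement is a fixed factor of 1.0. A sketch contrasting the two modes (`forecastRevenue` is an illustrative stand-in, not the query itself):

```javascript
// Deterministic by default; the optional jitter reproduces the RAND()-based
// multiplier from the SQL, which ranges over [0.95, 1.05).
function forecastRevenue(baseRevenue, { jitter = false } = {}) {
  const factor = jitter ? 0.95 + Math.random() * 0.1 : 1.0;
  return baseRevenue * factor;
}
```
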
23. **Multi-Statement Cleanup of Temporary Tables**

- **Problem:** The cleanup query drops multiple temporary tables in one call (separated by semicolons).
- **Effect:** If your Node.js MySQL driver isn't configured to allow multi-statement execution, this query may fail, leaving temporary tables behind. Leftover temporary tables might eventually cause conflicts or resource issues.
- **Example:** Running the cleanup query could produce an error like "multi-statement queries not enabled," preventing proper cleanup.
- **Fix:** Either configure your database connection to allow multi-statements or issue separate queries for each temporary table drop to ensure that the cleanup runs successfully.

24. **Handling Products with No Sales Data**

- **Problem:** In the product-level forecast calculation, the CTE `daily_stats` includes a `HAVING AVG(ds.daily_quantity) > 0` clause.
- **Effect:** Products that have no sales (or a zero average daily quantity) will be excluded from the forecasts. This means the frontend won't show forecasts for non-selling products, which might be acceptable but could also be a completeness issue.
- **Example:** A product that has never sold will not appear in the `sales_forecasts` table.
- **Fix:** Confirm that it is intended for forecasts to be generated only for products with some sales activity. If forecasts are required for all products, adjust the query to insert default forecast records for products with zero sales.

25. **Complexity of the Forecast Formula Involving the Seasonality Factor**

- **Issue:** The sales forecast calculations incorporate an adjustment factor using `COALESCE(sf.seasonality_factor, 0)` to modify forecast units and revenue, so if the seasonality data is missing (or not populated), the factor defaults to 0.
- **Potential Problem:** A default value of 0 can drastically alter the forecast calculations, often producing a forecast of 0 or an overly dampened one, when the intended behavior might be a neutral multiplier (typically 1.0). This could result in forecasts that do not reflect the actual seasonal impact, skewing the figures that reach the frontend.
- **Fix:** Review your data source for seasonality (the `sales_seasonality` table) and ensure it's consistently populated. Alternatively, if missing seasonality data is possible, consider using a more neutral default (such as 1.0) in your COALESCE. This change would prevent the forecast formulas from over-simplifying (or even nullifying) the forecast output due to missing seasonality factors.

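Which default is neutral depends on how the factor enters the formula, which should be checked against the actual query. The `sales_seasonality` schema below stores factors between -1.0 and 1.0 with 0 labeled "neutral", suggesting a `(1 + factor)` form, in which case 0 is already the right default; 1.0 is only correct if the forecast multiplies by the factor directly. A sketch of both conventions (`adjustedForecast` is hypothetical):

```javascript
// Neutral default depends on the formula shape:
//   (1 + factor) form  -> missing factor should default to 0
//   direct-multiply form -> missing factor should default to 1.0
function adjustedForecast(baseUnits, seasonality, { additive = true } = {}) {
  if (additive) {
    const factor = seasonality ?? 0; // neutral for base * (1 + factor)
    return baseUnits * (1 + factor);
  }
  const factor = seasonality ?? 1.0; // neutral for base * factor
  return baseUnits * factor;
}
```
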
26. **Group By with Seasonality Factor Variability**

- **Observation:** In the forecast insertion query, the GROUP BY clause includes `sf.seasonality_factor` along with other fields.
- **Effect:** If the seasonality factor differs (or is `NULL` versus a value) for different forecast dates, this might result in multiple rows for the same product and forecast date. The `ON DUPLICATE KEY UPDATE` clause will merge them, but only if the primary key (pid, forecast_date) is truly unique.
- **Fix:** Verify that the grouping produces exactly one row per product per forecast date. If there's potential for multiple rows due to seasonality variability, consider applying a COALESCE or an aggregation on the seasonality factor so that it does not affect grouping.

27. **Memory Management for Temporary Tables** [RESOLVED - calculate-metrics.js]

- **Problem:** In metrics calculations, temporary tables aren't always properly cleaned up if the process fails between creation and the DROP statement.
- **Effect:** If a process fails after creating temporary tables but before dropping them, these tables remain in memory until the connection is closed. In a production environment with multiple calculation runs, this could lead to memory leaks or table name conflicts.
- **Example:** The `temp_revenue_ranks` table creation in ABC classification could remain if the process fails before reaching the DROP statement.
- **Fix:** Implement proper cleanup in a finally block or use transaction management that ensures temporary tables are always cleaned up, even in failure scenarios.

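The finally-block pattern can be sketched as a small wrapper. This is illustrative (`withTempTableCleanup` is a hypothetical helper; only `temp_revenue_ranks` is dropped here for brevity, where the real cleanup would iterate over all temporary table names):

```javascript
// Cleanup in a finally block guarantees the temporary table is dropped even
// when the calculation throws mid-way; IF EXISTS keeps the drop idempotent.
async function withTempTableCleanup(connection, work) {
  try {
    return await work(connection);
  } finally {
    // Runs on both success and failure paths.
    await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_revenue_ranks');
  }
}
```

Callers wrap the whole calculation, e.g. `await withTempTableCleanup(conn, doAbcClassification)`, so no failure path can skip the drop.
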
```diff
@@ -102,17 +102,19 @@ CREATE TABLE IF NOT EXISTS product_time_aggregates (
     INDEX idx_date (year, month)
 );
 
--- Create vendor_details table
-CREATE TABLE vendor_details (
-    vendor VARCHAR(100) PRIMARY KEY,
+-- Create vendor details table
+CREATE TABLE IF NOT EXISTS vendor_details (
+    vendor VARCHAR(100) NOT NULL,
     contact_name VARCHAR(100),
-    email VARCHAR(255),
-    phone VARCHAR(50),
+    email VARCHAR(100),
+    phone VARCHAR(20),
     status VARCHAR(20) DEFAULT 'active',
-    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
-    INDEX idx_status (status)
-) ENGINE=InnoDB;
+    notes TEXT,
+    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
+    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+    PRIMARY KEY (vendor),
+    INDEX idx_vendor_status (status)
+);
 
 -- New table for vendor metrics
 CREATE TABLE IF NOT EXISTS vendor_metrics (
```
```diff
@@ -124,13 +126,13 @@ CREATE TABLE IF NOT EXISTS vendor_metrics (
     order_fill_rate DECIMAL(5,2),
     total_orders INT DEFAULT 0,
     total_late_orders INT DEFAULT 0,
-    total_purchase_value DECIMAL(10,3) DEFAULT 0,
-    avg_order_value DECIMAL(10,3),
+    total_purchase_value DECIMAL(15,3) DEFAULT 0,
+    avg_order_value DECIMAL(15,3),
     -- Product metrics
     active_products INT DEFAULT 0,
     total_products INT DEFAULT 0,
     -- Financial metrics
-    total_revenue DECIMAL(10,3) DEFAULT 0,
+    total_revenue DECIMAL(15,3) DEFAULT 0,
     avg_margin_percent DECIMAL(5,2),
     -- Status
     status VARCHAR(20) DEFAULT 'active',
```
```diff
@@ -408,4 +410,21 @@ LEFT JOIN
     category_metrics cm ON c.cat_id = cm.category_id;
 
 -- Re-enable foreign key checks
 SET FOREIGN_KEY_CHECKS = 1;
+
+-- Create table for sales seasonality factors
+CREATE TABLE IF NOT EXISTS sales_seasonality (
+    month INT NOT NULL,
+    seasonality_factor DECIMAL(5,3) DEFAULT 0,
+    last_updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
+    PRIMARY KEY (month),
+    CHECK (month BETWEEN 1 AND 12),
+    CHECK (seasonality_factor BETWEEN -1.0 AND 1.0)
+);
+
+-- Insert default seasonality factors (neutral)
+INSERT INTO sales_seasonality (month, seasonality_factor)
+VALUES
+    (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 0),
+    (7, 0), (8, 0), (9, 0), (10, 0), (11, 0), (12, 0)
+ON DUPLICATE KEY UPDATE last_updated = CURRENT_TIMESTAMP;
```
```diff
@@ -79,6 +79,18 @@ CREATE TABLE categories (
     INDEX idx_name_type (name, type)
 ) ENGINE=InnoDB;
 
+-- Create vendor_details table
+CREATE TABLE vendor_details (
+    vendor VARCHAR(100) PRIMARY KEY,
+    contact_name VARCHAR(100),
+    email VARCHAR(255),
+    phone VARCHAR(50),
+    status VARCHAR(20) DEFAULT 'active',
+    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+    INDEX idx_status (status)
+) ENGINE=InnoDB;
+
 -- Create product_categories junction table
 CREATE TABLE product_categories (
     cat_id BIGINT NOT NULL,
```
```diff
@@ -44,34 +44,6 @@ global.clearProgress = progress.clearProgress;
 global.getProgress = progress.getProgress;
 global.logError = progress.logError;
 
-// List of temporary tables used in the calculation process
-const TEMP_TABLES = [
-    'temp_revenue_ranks',
-    'temp_sales_metrics',
-    'temp_purchase_metrics',
-    'temp_product_metrics',
-    'temp_vendor_metrics',
-    'temp_category_metrics',
-    'temp_brand_metrics',
-    'temp_forecast_dates',
-    'temp_daily_sales',
-    'temp_product_stats',
-    'temp_category_sales',
-    'temp_category_stats'
-];
-
-// Add cleanup function for temporary tables
-async function cleanupTemporaryTables(connection) {
-    try {
-        for (const table of TEMP_TABLES) {
-            await connection.query(`DROP TEMPORARY TABLE IF EXISTS ${table}`);
-        }
-    } catch (error) {
-        logError(error, 'Error cleaning up temporary tables');
-        throw error; // Re-throw to be handled by the caller
-    }
-}
-
 const { getConnection, closePool } = require('./metrics/utils/db');
 const calculateProductMetrics = require('./metrics/product-metrics');
 const calculateTimeAggregates = require('./metrics/time-aggregates');
```
```diff
@@ -111,6 +83,7 @@ process.on('SIGTERM', cancelCalculation);
 async function calculateMetrics() {
     let connection;
     const startTime = Date.now();
+    // Initialize all counts to 0
     let processedProducts = 0;
     let processedOrders = 0;
     let processedPurchaseOrders = 0;
```
```diff
@@ -118,7 +91,7 @@ async function calculateMetrics() {
     let totalOrders = 0;
     let totalPurchaseOrders = 0;
     let calculateHistoryId;
 
     try {
         // Clean up any previously running calculations
         connection = await getConnection();
```
@@ -132,18 +105,57 @@ async function calculateMetrics() {
|
|||||||
WHERE status = 'running'
|
WHERE status = 'running'
|
||||||
`);
|
`);
|
||||||
|
|
||||||
// Get counts from all relevant tables
|
// Get counts of records that need updating based on last calculation time
|
||||||
const [[productCount], [orderCount], [poCount]] = await Promise.all([
|
const [[productCount], [orderCount], [poCount]] = await Promise.all([
|
||||||
connection.query('SELECT COUNT(*) as total FROM products'),
|
connection.query(`
|
||||||
connection.query('SELECT COUNT(*) as total FROM orders'),
|
SELECT COUNT(DISTINCT p.pid) as total
|
||||||
connection.query('SELECT COUNT(*) as total FROM purchase_orders')
|
FROM products p
|
||||||
|
FORCE INDEX (PRIMARY)
|
||||||
|
LEFT JOIN calculate_status cs ON cs.module_name = 'product_metrics'
|
||||||
|
LEFT JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid
|
||||||
|
AND o.updated > COALESCE(cs.last_calculation_timestamp, '1970-01-01')
|
||||||
|
AND o.canceled = false
|
||||||
|
LEFT JOIN purchase_orders po FORCE INDEX (idx_purchase_orders_metrics) ON p.pid = po.pid
|
||||||
|
AND po.updated > COALESCE(cs.last_calculation_timestamp, '1970-01-01')
|
||||||
|
WHERE p.updated > COALESCE(cs.last_calculation_timestamp, '1970-01-01')
|
||||||
|
OR o.pid IS NOT NULL
|
||||||
|
OR po.pid IS NOT NULL
|
||||||
|
`),
|
||||||
|
connection.query(`
|
||||||
|
SELECT COUNT(DISTINCT o.id) as total
|
||||||
|
FROM orders o
|
||||||
|
FORCE INDEX (idx_orders_metrics)
|
||||||
|
LEFT JOIN calculate_status cs ON cs.module_name = 'product_metrics'
|
||||||
|
WHERE o.updated > COALESCE(cs.last_calculation_timestamp, '1970-01-01')
|
||||||
|
AND o.canceled = false
|
||||||
|
`),
|
||||||
|
connection.query(`
|
||||||
|
SELECT COUNT(DISTINCT po.id) as total
|
||||||
|
FROM purchase_orders po
|
||||||
|
FORCE INDEX (idx_purchase_orders_metrics)
|
||||||
|
LEFT JOIN calculate_status cs ON cs.module_name = 'product_metrics'
|
||||||
|
WHERE po.updated > COALESCE(cs.last_calculation_timestamp, '1970-01-01')
|
||||||
|
`)
|
||||||
]);
|
]);
|
||||||
|
|
||||||
totalProducts = productCount.total;
|
totalProducts = productCount.total;
|
||||||
totalOrders = orderCount.total;
|
totalOrders = orderCount.total;
|
||||||
totalPurchaseOrders = poCount.total;
|
totalPurchaseOrders = poCount.total;
|
||||||
|
connection.release();
|
||||||
|
|
||||||
|
// If nothing needs updating, we can exit early
|
||||||
|
if (totalProducts === 0 && totalOrders === 0 && totalPurchaseOrders === 0) {
|
||||||
|
console.log('No records need updating');
|
||||||
|
return {
|
||||||
|
processedProducts: 0,
|
||||||
|
processedOrders: 0,
|
||||||
|
processedPurchaseOrders: 0,
|
||||||
|
success: true
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
// Create history record for this calculation
|
// Create history record for this calculation
|
||||||
|
connection = await getConnection(); // Re-establish connection
|
||||||
const [historyResult] = await connection.query(`
|
const [historyResult] = await connection.query(`
|
||||||
INSERT INTO calculate_history (
|
INSERT INTO calculate_history (
|
||||||
start_time,
|
start_time,
|
||||||
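The hunk above replaces blanket `COUNT(*)` totals with counts of only the rows whose `updated` timestamp is newer than the last recorded calculation, defaulting to the epoch when no status row exists. A minimal plain-JS sketch of that staleness filter (the real work happens in SQL; the function and field names here are hypothetical):

```javascript
// Illustrative sketch of the "updated since last calculation" filter used by
// the SQL above. COALESCE(last_calculation_timestamp, '1970-01-01') becomes a
// fallback to the epoch, so a first run treats every record as stale.
function countNeedingUpdate(records, lastCalculationTimestamp) {
  const since = new Date(lastCalculationTimestamp || '1970-01-01');
  return records.filter(r => new Date(r.updated) > since).length;
}

console.log(countNeedingUpdate(
  [{ updated: '2024-01-02' }, { updated: '2023-12-01' }],
  '2024-01-01'
)); // 1
```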
@@ -174,7 +186,7 @@ async function calculateMetrics() {
         totalPurchaseOrders,
         SKIP_PRODUCT_METRICS,
         SKIP_TIME_AGGREGATES,
         SKIP_FINANCIAL_METRICS,
         SKIP_VENDOR_METRICS,
         SKIP_CATEGORY_METRICS,
         SKIP_BRAND_METRICS,
@@ -200,7 +212,7 @@ async function calculateMetrics() {
     }

     isCancelled = false;
-    connection = await getConnection();
+    connection = await getConnection(); // Get a new connection for the main processing

     try {
       global.outputProgress({
@@ -219,14 +231,12 @@ async function calculateMetrics() {
         }
       });

-      // Update progress periodically
+      // Update progress periodically - REFACTORED
       const updateProgress = async (products = null, orders = null, purchaseOrders = null) => {
-        // Ensure all values are valid numbers or default to previous value
         if (products !== null) processedProducts = Number(products) || processedProducts || 0;
         if (orders !== null) processedOrders = Number(orders) || processedOrders || 0;
         if (purchaseOrders !== null) processedPurchaseOrders = Number(purchaseOrders) || processedPurchaseOrders || 0;

-        // Ensure we never send NaN to the database
         const safeProducts = Number(processedProducts) || 0;
         const safeOrders = Number(processedOrders) || 0;
         const safePurchaseOrders = Number(processedPurchaseOrders) || 0;
@@ -241,14 +251,14 @@ async function calculateMetrics() {
         `, [safeProducts, safeOrders, safePurchaseOrders, calculateHistoryId]);
       };

-      // Helper function to ensure valid progress numbers
+      // Helper function to ensure valid progress numbers - this is fine
       const ensureValidProgress = (current, total) => ({
         current: Number(current) || 0,
         total: Number(total) || 1, // Default to 1 to avoid division by zero
         percentage: (((Number(current) || 0) / (Number(total) || 1)) * 100).toFixed(1)
       });

-      // Initial progress
+      // Initial progress - this is fine
       const initialProgress = ensureValidProgress(0, totalProducts);
       global.outputProgress({
         status: 'running',
@@ -266,37 +276,28 @@ async function calculateMetrics() {
         }
       });

+      // --- Call each module, passing totals and accumulating processed counts ---

       if (!SKIP_PRODUCT_METRICS) {
-        const result = await calculateProductMetrics(startTime, totalProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateProductMetrics(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders); // Update with accumulated values
         if (!result.success) {
           throw new Error('Product metrics calculation failed');
         }
       } else {
         console.log('Skipping product metrics calculation...');
-        processedProducts = Math.floor(totalProducts * 0.6);
-        await updateProgress(processedProducts);
-        global.outputProgress({
-          status: 'running',
-          operation: 'Skipping product metrics calculation',
-          current: processedProducts,
-          total: totalProducts,
-          elapsed: global.formatElapsedTime(startTime),
-          remaining: global.estimateRemaining(startTime, processedProducts, totalProducts),
-          rate: global.calculateRate(startTime, processedProducts),
-          percentage: '60',
-          timing: {
-            start_time: new Date(startTime).toISOString(),
-            end_time: new Date().toISOString(),
-            elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-          }
-        });
+        // Don't artificially inflate processedProducts if skipping
       }

-      // Calculate time-based aggregates
       if (!SKIP_TIME_AGGREGATES) {
-        const result = await calculateTimeAggregates(startTime, totalProducts, processedProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateTimeAggregates(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
         if (!result.success) {
           throw new Error('Time aggregates calculation failed');
         }
@@ -304,21 +305,25 @@ async function calculateMetrics() {
         console.log('Skipping time aggregates calculation');
       }

-      // Calculate financial metrics
       if (!SKIP_FINANCIAL_METRICS) {
-        const result = await calculateFinancialMetrics(startTime, totalProducts, processedProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateFinancialMetrics(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
         if (!result.success) {
           throw new Error('Financial metrics calculation failed');
         }
       } else {
         console.log('Skipping financial metrics calculation');
       }

-      // Calculate vendor metrics
       if (!SKIP_VENDOR_METRICS) {
-        const result = await calculateVendorMetrics(startTime, totalProducts, processedProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateVendorMetrics(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
         if (!result.success) {
           throw new Error('Vendor metrics calculation failed');
         }
@@ -326,10 +331,12 @@ async function calculateMetrics() {
         console.log('Skipping vendor metrics calculation');
       }

-      // Calculate category metrics
       if (!SKIP_CATEGORY_METRICS) {
-        const result = await calculateCategoryMetrics(startTime, totalProducts, processedProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateCategoryMetrics(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
         if (!result.success) {
           throw new Error('Category metrics calculation failed');
         }
@@ -337,10 +344,12 @@ async function calculateMetrics() {
         console.log('Skipping category metrics calculation');
       }

-      // Calculate brand metrics
       if (!SKIP_BRAND_METRICS) {
-        const result = await calculateBrandMetrics(startTime, totalProducts, processedProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateBrandMetrics(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
         if (!result.success) {
           throw new Error('Brand metrics calculation failed');
         }
@@ -348,10 +357,12 @@ async function calculateMetrics() {
         console.log('Skipping brand metrics calculation');
       }

-      // Calculate sales forecasts
       if (!SKIP_SALES_FORECASTS) {
-        const result = await calculateSalesForecasts(startTime, totalProducts, processedProducts);
-        await updateProgress(result.processedProducts, result.processedOrders, result.processedPurchaseOrders);
+        const result = await calculateSalesForecasts(startTime, totalProducts, processedProducts, isCancelled); // Pass totals
+        processedProducts += result.processedProducts; // Accumulate
+        processedOrders += result.processedOrders;
+        processedPurchaseOrders += result.processedPurchaseOrders;
+        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
         if (!result.success) {
           throw new Error('Sales forecasts calculation failed');
         }
@@ -359,23 +370,7 @@ async function calculateMetrics() {
         console.log('Skipping sales forecasts calculation');
       }

-      // Calculate ABC classification
-      outputProgress({
-        status: 'running',
-        operation: 'Starting ABC classification',
-        current: processedProducts || 0,
-        total: totalProducts || 0,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedProducts || 0, totalProducts || 0),
-        rate: calculateRate(startTime, processedProducts || 0),
-        percentage: (((processedProducts || 0) / (totalProducts || 1)) * 100).toFixed(1),
-        timing: {
-          start_time: new Date(startTime).toISOString(),
-          end_time: new Date().toISOString(),
-          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-        }
-      });
+      // --- ABC Classification (Refactored) ---

       if (isCancelled) return {
         processedProducts: processedProducts || 0,
         processedOrders: processedOrders || 0,
@@ -393,21 +388,26 @@ async function calculateMetrics() {
           pid BIGINT NOT NULL,
           total_revenue DECIMAL(10,3),
           rank_num INT,
+          dense_rank_num INT,
+          percentile DECIMAL(5,2),
           total_count INT,
           PRIMARY KEY (pid),
-          INDEX (rank_num)
+          INDEX (rank_num),
+          INDEX (dense_rank_num),
+          INDEX (percentile)
         ) ENGINE=MEMORY
       `);

-      outputProgress({
+      let processedCount = processedProducts;
+      global.outputProgress({
         status: 'running',
         operation: 'Creating revenue rankings',
-        current: processedProducts || 0,
-        total: totalProducts || 0,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedProducts || 0, totalProducts || 0),
-        rate: calculateRate(startTime, processedProducts || 0),
-        percentage: (((processedProducts || 0) / (totalProducts || 1)) * 100).toFixed(1),
+        current: processedCount,
+        total: totalProducts,
+        elapsed: global.formatElapsedTime(startTime),
+        remaining: global.estimateRemaining(startTime, processedCount, totalProducts),
+        rate: global.calculateRate(startTime, processedCount),
+        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
         timing: {
           start_time: new Date(startTime).toISOString(),
           end_time: new Date().toISOString(),
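The temp table now stores three related but distinct measures per product: `rank_num`, `dense_rank_num`, and `percentile`. A plain-JS sketch of how the corresponding SQL window functions behave on ties (simplified, assuming descending revenue order as in `ORDER BY total_revenue DESC`; names here are illustrative):

```javascript
// Sketch of RANK(), DENSE_RANK(), and PERCENT_RANK() semantics used by the
// new ranking query. Not the real implementation — the database computes these.
function rankings(revenues) {
  const sorted = [...revenues].sort((a, b) => b - a);
  const n = sorted.length;
  return sorted.map(value => {
    const rank = sorted.findIndex(v => v === value) + 1;          // RANK(): ties share, gaps follow
    const denseRank = new Set(sorted.filter(v => v > value)).size + 1; // DENSE_RANK(): no gaps
    const percentile = n > 1 ? ((rank - 1) / (n - 1)) * 100 : 0;  // PERCENT_RANK() * 100
    return { value, rank, denseRank, percentile };
  });
}

console.log(rankings([100, 50, 100]).map(x => x.rank)); // [ 1, 1, 3 ]
```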
@@ -422,139 +422,75 @@ async function calculateMetrics() {
         success: false
       };

+      // Calculate rankings with proper tie handling and get total count in one go.
       await connection.query(`
         INSERT INTO temp_revenue_ranks
+        WITH revenue_data AS (
+          SELECT
+            pid,
+            total_revenue,
+            COUNT(*) OVER () as total_count,
+            PERCENT_RANK() OVER (ORDER BY total_revenue DESC) * 100 as percentile,
+            RANK() OVER (ORDER BY total_revenue DESC) as rank_num,
+            DENSE_RANK() OVER (ORDER BY total_revenue DESC) as dense_rank_num
+          FROM product_metrics
+          WHERE total_revenue > 0
+        )
         SELECT
           pid,
           total_revenue,
-          @rank := @rank + 1 as rank_num,
-          @total_count := @rank as total_count
-        FROM (
-          SELECT pid, total_revenue
-          FROM product_metrics
-          WHERE total_revenue > 0
-          ORDER BY total_revenue DESC
-        ) ranked,
-        (SELECT @rank := 0) r
+          rank_num,
+          dense_rank_num,
+          percentile,
+          total_count
+        FROM revenue_data
       `);

-      // Get total count for percentage calculation
-      const [rankingCount] = await connection.query('SELECT MAX(rank_num) as total_count FROM temp_revenue_ranks');
-      const totalCount = rankingCount[0].total_count || 1;
-      const max_rank = totalCount; // Store max_rank for use in classification
+      // Perform ABC classification in a single UPDATE statement. This is MUCH faster.
+      await connection.query(`
+        UPDATE product_metrics pm
+        LEFT JOIN temp_revenue_ranks tr ON pm.pid = tr.pid
+        SET pm.abc_class =
+          CASE
+            WHEN tr.pid IS NULL THEN 'C'
+            WHEN tr.percentile <= ? THEN 'A'
+            WHEN tr.percentile <= ? THEN 'B'
+            ELSE 'C'
+          END,
+          pm.last_calculated_at = NOW()
+      `, [abcThresholds.a_threshold, abcThresholds.b_threshold]);

-      outputProgress({
-        status: 'running',
-        operation: 'Updating ABC classifications',
-        current: processedProducts || 0,
-        total: totalProducts || 0,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedProducts || 0, totalProducts || 0),
-        rate: calculateRate(startTime, processedProducts || 0),
-        percentage: (((processedProducts || 0) / (totalProducts || 1)) * 100).toFixed(1),
-        timing: {
-          start_time: new Date(startTime).toISOString(),
-          end_time: new Date().toISOString(),
-          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-        }
-      });
-
-      if (isCancelled) return {
-        processedProducts: processedProducts || 0,
-        processedOrders: processedOrders || 0,
-        processedPurchaseOrders: 0,
-        success: false
-      };
-
-      // ABC classification progress tracking
-      let abcProcessedCount = 0;
-      const batchSize = 5000;
-      let lastProgressUpdate = Date.now();
-      const progressUpdateInterval = 1000; // Update every second
-
-      while (true) {
-        if (isCancelled) return {
-          processedProducts: Number(processedProducts) || 0,
-          processedOrders: Number(processedOrders) || 0,
-          processedPurchaseOrders: 0,
-          success: false
-        };
-
-        // First get a batch of PIDs that need updating
-        const [pids] = await connection.query(`
-          SELECT pm.pid
-          FROM product_metrics pm
-          LEFT JOIN temp_revenue_ranks tr ON pm.pid = tr.pid
-          WHERE pm.abc_class IS NULL
-            OR pm.abc_class !=
-            CASE
-              WHEN tr.rank_num IS NULL THEN 'C'
-              WHEN (tr.rank_num / ?) * 100 <= ? THEN 'A'
-              WHEN (tr.rank_num / ?) * 100 <= ? THEN 'B'
-              ELSE 'C'
-            END
-          LIMIT ?
-        `, [max_rank, abcThresholds.a_threshold,
-            max_rank, abcThresholds.b_threshold,
-            batchSize]);
-
-        if (pids.length === 0) {
-          break;
-        }
-
-        // Then update just those PIDs
-        const [result] = await connection.query(`
-          UPDATE product_metrics pm
-          LEFT JOIN temp_revenue_ranks tr ON pm.pid = tr.pid
-          SET pm.abc_class =
-            CASE
-              WHEN tr.rank_num IS NULL THEN 'C'
-              WHEN (tr.rank_num / ?) * 100 <= ? THEN 'A'
-              WHEN (tr.rank_num / ?) * 100 <= ? THEN 'B'
-              ELSE 'C'
-            END,
-            pm.last_calculated_at = NOW()
-          WHERE pm.pid IN (?)
-        `, [max_rank, abcThresholds.a_threshold,
-            max_rank, abcThresholds.b_threshold,
-            pids.map(row => row.pid)]);
-
-        abcProcessedCount += result.affectedRows;
-
-        // Calculate progress ensuring valid numbers
-        const currentProgress = Math.floor(totalProducts * (0.99 + (abcProcessedCount / (totalCount || 1)) * 0.01));
-        processedProducts = Number(currentProgress) || processedProducts || 0;
-
-        // Only update progress at most once per second
-        const now = Date.now();
-        if (now - lastProgressUpdate >= progressUpdateInterval) {
-          const progress = ensureValidProgress(processedProducts, totalProducts);
-
-          outputProgress({
-            status: 'running',
-            operation: 'ABC classification progress',
-            current: progress.current,
-            total: progress.total,
-            elapsed: formatElapsedTime(startTime),
-            remaining: estimateRemaining(startTime, progress.current, progress.total),
-            rate: calculateRate(startTime, progress.current),
-            percentage: progress.percentage,
-            timing: {
-              start_time: new Date(startTime).toISOString(),
-              end_time: new Date().toISOString(),
-              elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-            }
-          });
-
-          lastProgressUpdate = now;
-        }
-
-        // Update database progress
-        await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);
-
-        // Small delay between batches to allow other transactions
-        await new Promise(resolve => setTimeout(resolve, 100));
-      }
+      // Now update turnover rate
+      await connection.query(`
+        UPDATE product_metrics pm
+        JOIN (
+          SELECT
+            o.pid,
+            SUM(o.quantity) as total_sold,
+            COUNT(DISTINCT DATE(o.date)) as active_days,
+            AVG(CASE
+              WHEN p.stock_quantity > 0 THEN p.stock_quantity
+              ELSE NULL
+            END) as avg_nonzero_stock
+          FROM orders o
+          JOIN products p ON o.pid = p.pid
+          WHERE o.canceled = false
+            AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY)
+          GROUP BY o.pid
+        ) sales ON pm.pid = sales.pid
+        SET
+          pm.turnover_rate = CASE
+            WHEN sales.avg_nonzero_stock > 0 AND sales.active_days > 0
+            THEN LEAST(
+              (sales.total_sold / sales.avg_nonzero_stock) * (365.0 / sales.active_days),
+              999.99
+            )
+            ELSE 0
+          END,
+          pm.last_calculated_at = NOW()
+      `);
+      processedProducts = totalProducts;
+      await updateProgress(processedProducts, processedOrders, processedPurchaseOrders);

       // Clean up
       await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_revenue_ranks');
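The refactored hunk collapses the batched per-PID loop into two set-based UPDATEs: one classifying products by revenue percentile, one computing an annualized, capped turnover rate. A plain-JS sketch of both CASE expressions; the threshold values 20/50 are hypothetical stand-ins for `abcThresholds.a_threshold` and `abcThresholds.b_threshold`:

```javascript
// Sketch of the abc_class CASE: NULL percentile means the product had no
// revenue row in temp_revenue_ranks, so it falls through to 'C'.
function abcClass(percentile, aThreshold = 20, bThreshold = 50) {
  if (percentile === null) return 'C';
  if (percentile <= aThreshold) return 'A';
  if (percentile <= bThreshold) return 'B';
  return 'C';
}

// Sketch of the turnover_rate CASE: units sold per average unit of stock,
// scaled from active selling days to a full year, capped at 999.99.
function turnoverRate(totalSold, avgNonzeroStock, activeDays) {
  if (!(avgNonzeroStock > 0 && activeDays > 0)) return 0;
  return Math.min((totalSold / avgNonzeroStock) * (365.0 / activeDays), 999.99);
}

console.log(abcClass(10), turnoverRate(100, 10, 365)); // A 10
```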
@@ -573,14 +509,14 @@ async function calculateMetrics() {
       const finalProgress = ensureValidProgress(totalProducts, totalProducts);

       // Final success message
-      outputProgress({
+      global.outputProgress({
         status: 'complete',
         operation: 'Metrics calculation complete',
         current: finalProgress.current,
         total: finalProgress.total,
-        elapsed: formatElapsedTime(startTime),
+        elapsed: global.formatElapsedTime(startTime),
         remaining: '0s',
-        rate: calculateRate(startTime, finalProgress.current),
+        rate: global.calculateRate(startTime, finalProgress.current),
         percentage: '100',
         timing: {
           start_time: new Date(startTime).toISOString(),
@@ -678,26 +614,19 @@ async function calculateMetrics() {
     throw error;
   } finally {
     if (connection) {
-      // Ensure temporary tables are cleaned up
-      await cleanupTemporaryTables(connection);
       connection.release();
     }
-    // Close the connection pool when we're done
-    await closePool();
   }
-  } catch (error) {
-    success = false;
-    logError(error, 'Error in metrics calculation');
-    throw error;
+  } finally {
+    // Close the connection pool when we're done
+    await closePool();
   }
 }

-// Export as a module with all necessary functions
-module.exports = {
-  calculateMetrics,
-  cancelCalculation,
-  getProgress: global.getProgress
-};
+// Export both functions and progress checker
+module.exports = calculateMetrics;
+module.exports.cancelCalculation = cancelCalculation;
+module.exports.getProgress = global.getProgress;

 // Run directly if called from command line
 if (require.main === module) {
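The export change above addresses the overwritten-exports problem: when scripts are concatenated into one module file, a later `module.exports = …` assignment replaces the entire exports object, discarding everything exported before it. Attaching the helpers as properties of one callable default avoids that. A self-contained sketch using a plain object in place of a real Node module (all names here are illustrative):

```javascript
// Simulate Node's module object with a plain object.
const fakeModule = { exports: {} };

// Original style: export an object literal.
fakeModule.exports = {
  calculateMetrics: () => 'metrics',
  cancelCalculation: () => 'cancelled'
};

// A second concatenated script silently overwrites everything above:
fakeModule.exports = function fullReset() { return 'reset'; };
// fakeModule.exports.calculateMetrics is now undefined.

// New style: one callable default, with helpers attached as properties.
const calculateMetrics = () => 'metrics';
fakeModule.exports = calculateMetrics;
fakeModule.exports.cancelCalculation = () => 'cancelled';

console.log(fakeModule.exports(), fakeModule.exports.cancelCalculation()); // metrics cancelled
```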
@@ -1,107 +0,0 @@
-const path = require('path');
-const { spawn } = require('child_process');
-
-function outputProgress(data) {
-  if (!data.status) {
-    data = {
-      status: 'running',
-      ...data
-    };
-  }
-  console.log(JSON.stringify(data));
-}
-
-function runScript(scriptPath) {
-  return new Promise((resolve, reject) => {
-    const child = spawn('node', [scriptPath], {
-      stdio: ['inherit', 'pipe', 'pipe']
-    });
-
-    let output = '';
-
-    child.stdout.on('data', (data) => {
-      const lines = data.toString().split('\n');
-      lines.filter(line => line.trim()).forEach(line => {
-        try {
-          console.log(line); // Pass through the JSON output
-          output += line + '\n';
-        } catch (e) {
-          console.log(line); // If not JSON, just log it directly
-        }
-      });
-    });
-
-    child.stderr.on('data', (data) => {
-      console.error(data.toString());
-    });
-
-    child.on('close', (code) => {
-      if (code !== 0) {
-        reject(new Error(`Script ${scriptPath} exited with code ${code}`));
-      } else {
-        resolve(output);
-      }
-    });
-
-    child.on('error', (err) => {
-      reject(err);
-    });
-  });
-}
-
-async function fullReset() {
-  try {
-    // Step 1: Reset Database
-    outputProgress({
-      operation: 'Starting full reset',
-      message: 'Step 1/3: Resetting database...'
-    });
-    await runScript(path.join(__dirname, 'reset-db.js'));
-    outputProgress({
-      status: 'complete',
-      operation: 'Database reset step complete',
-      message: 'Database reset finished, moving to import...'
-    });
-
-    // Step 2: Import from Production
-    outputProgress({
-      operation: 'Starting import',
-      message: 'Step 2/3: Importing from production...'
-    });
-    await runScript(path.join(__dirname, 'import-from-prod.js'));
-    outputProgress({
-      status: 'complete',
-      operation: 'Import step complete',
-      message: 'Import finished, moving to metrics calculation...'
-    });
-
-    // Step 3: Calculate Metrics
-    outputProgress({
-      operation: 'Starting metrics calculation',
-      message: 'Step 3/3: Calculating metrics...'
-    });
-    await runScript(path.join(__dirname, 'calculate-metrics.js'));
-
-    // Final completion message
-    outputProgress({
-      status: 'complete',
-      operation: 'Full reset complete',
-      message: 'Successfully completed all steps: database reset, import, and metrics calculation'
-    });
-  } catch (error) {
-    outputProgress({
-      status: 'error',
-      operation: 'Full reset failed',
-      error: error.message,
-      stack: error.stack
-    });
-    process.exit(1);
-  }
-}
-
-// Run if called directly
-if (require.main === module) {
-  fullReset();
-}
-
-module.exports = fullReset;
@@ -1,100 +0,0 @@
|
|||||||
const path = require('path');
|
|
||||||
const { spawn } = require('child_process');
|
|
const { spawn } = require('child_process'); // needed by runScript below
const path = require('path'); // needed for path.join below

function outputProgress(data) {
  if (!data.status) {
    data = {
      status: 'running',
      ...data
    };
  }
  console.log(JSON.stringify(data));
}

function runScript(scriptPath) {
  return new Promise((resolve, reject) => {
    const child = spawn('node', [scriptPath], {
      stdio: ['inherit', 'pipe', 'pipe']
    });

    let output = '';

    child.stdout.on('data', (data) => {
      const lines = data.toString().split('\n');
      lines.filter(line => line.trim()).forEach(line => {
        try {
          console.log(line); // Pass through the JSON output
          output += line + '\n';
        } catch (e) {
          console.log(line); // If not JSON, just log it directly
        }
      });
    });

    child.stderr.on('data', (data) => {
      console.error(data.toString());
    });

    child.on('close', (code) => {
      if (code !== 0) {
        reject(new Error(`Script ${scriptPath} exited with code ${code}`));
      } else {
        resolve(output);
      }
    });

    child.on('error', (err) => {
      reject(err);
    });
  });
}

async function fullUpdate() {
  try {
    // Step 1: Import from Production
    outputProgress({
      operation: 'Starting full update',
      message: 'Step 1/2: Importing from production...'
    });
    await runScript(path.join(__dirname, 'import-from-prod.js'));
    outputProgress({
      status: 'complete',
      operation: 'Import step complete',
      message: 'Import finished, moving to metrics calculation...'
    });

    // Step 2: Calculate Metrics
    outputProgress({
      operation: 'Starting metrics calculation',
      message: 'Step 2/2: Calculating metrics...'
    });
    await runScript(path.join(__dirname, 'calculate-metrics.js'));
    outputProgress({
      status: 'complete',
      operation: 'Metrics step complete',
      message: 'Metrics calculation finished'
    });

    // Final completion message
    outputProgress({
      status: 'complete',
      operation: 'Full update complete',
      message: 'Successfully completed all steps: import and metrics calculation'
    });
  } catch (error) {
    outputProgress({
      status: 'error',
      operation: 'Full update failed',
      error: error.message,
      stack: error.stack
    });
    process.exit(1);
  }
}

// Run if called directly
if (require.main === module) {
  fullUpdate();
}

module.exports = fullUpdate;
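The scripts above coordinate through newline-delimited JSON on stdout: `outputProgress()` emits one JSON object per line and defaults a missing `status` field to `'running'`. As a minimal sketch of the consumer side (the function name `parseProgressLine` is illustrative, not part of the repo):

```javascript
// Parse one line of the NDJSON progress stream produced by outputProgress().
// JSON lines get the same 'running' default; non-JSON lines become plain logs.
function parseProgressLine(line) {
  try {
    const msg = JSON.parse(line);
    // Spread after the default so an explicit status wins, as in outputProgress().
    return { status: 'running', ...msg };
  } catch (e) {
    // Not JSON: treat it as a pass-through log line.
    return { status: 'log', message: line };
  }
}

const sample = [
  '{"operation":"Starting full update","message":"Step 1/2"}',
  '{"status":"complete","operation":"Full update complete"}',
  'plain text log line'
];
const parsed = sample.map(parseProgressLine);
```

A wrapper UI can switch on `parsed[i].status` to distinguish progress updates, completions, and raw log output.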
@@ -4,19 +4,51 @@ const { getConnection } = require('./utils/db');
 async function calculateBrandMetrics(startTime, totalProducts, processedCount = 0, isCancelled = false) {
   const connection = await getConnection();
   let success = false;
-  let processedOrders = 0;
+  const BATCH_SIZE = 5000;
+  let myProcessedProducts = 0; // Not *directly* processing products, tracking brands

   try {
+    // Get last calculation timestamp
+    const [lastCalc] = await connection.query(`
+      SELECT last_calculation_timestamp
+      FROM calculate_status
+      WHERE module_name = 'brand_metrics'
+    `);
+    const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
+
+    // Get total count of brands needing updates
+    const [brandCount] = await connection.query(`
+      SELECT COUNT(DISTINCT p.brand) as count
+      FROM products p
+      LEFT JOIN orders o ON p.pid = o.pid AND o.updated > ?
+      WHERE p.brand IS NOT NULL
+      AND (
+        p.updated > ?
+        OR o.id IS NOT NULL
+      )
+    `, [lastCalculationTime, lastCalculationTime]);
+    const totalBrands = brandCount[0].count; // Track total *brands*
+
+    if (totalBrands === 0) {
+      console.log('No brands need metric updates');
+      return {
+        processedProducts: 0, // Not directly processing products
+        processedOrders: 0,
+        processedPurchaseOrders: 0,
+        success: true
+      };
+    }
+
     if (isCancelled) {
       outputProgress({
         status: 'cancelled',
         operation: 'Brand metrics calculation cancelled',
-        current: processedCount,
-        total: totalProducts,
+        current: processedCount, // Use passed-in value
+        total: totalBrands, // Report total *brands*
         elapsed: formatElapsedTime(startTime),
         remaining: null,
         rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
+        percentage: ((processedCount / totalBrands) * 100).toFixed(1), // Base on brands
         timing: {
           start_time: new Date(startTime).toISOString(),
           end_time: new Date().toISOString(),
@@ -24,30 +56,22 @@ async function calculateBrandMetrics(startTime, totalProducts, processedCount =
         }
       });
       return {
-        processedProducts: processedCount,
+        processedProducts: 0, // Not directly processing products
         processedOrders: 0,
         processedPurchaseOrders: 0,
         success
       };
     }

-    // Get order count that will be processed
-    const [orderCount] = await connection.query(`
-      SELECT COUNT(*) as count
-      FROM orders o
-      WHERE o.canceled = false
-    `);
-    processedOrders = orderCount[0].count;
-
     outputProgress({
       status: 'running',
       operation: 'Starting brand metrics calculation',
-      current: processedCount,
-      total: totalProducts,
+      current: processedCount, // Use passed-in value
+      total: totalBrands, // Report total *brands*
       elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
+      remaining: estimateRemaining(startTime, processedCount, totalBrands),
       rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
+      percentage: ((processedCount / totalBrands) * 100).toFixed(1), // Base on brands
       timing: {
         start_time: new Date(startTime).toISOString(),
         end_time: new Date().toISOString(),
@@ -55,241 +79,194 @@ async function calculateBrandMetrics(startTime, totalProducts, processedCount =
       }
     });

-    // Calculate brand metrics with optimized queries
-    await connection.query(`
-      INSERT INTO brand_metrics (
-        brand,
-        product_count,
-        active_products,
-        total_stock_units,
-        total_stock_cost,
-        total_stock_retail,
-        total_revenue,
-        avg_margin,
-        growth_rate
-      )
-      WITH filtered_products AS (
-        SELECT
-          p.*,
-          CASE
-            WHEN p.stock_quantity <= 5000 AND p.stock_quantity >= 0
-            THEN p.pid
-          END as valid_pid,
-          CASE
-            WHEN p.visible = true
-            AND p.stock_quantity <= 5000
-            AND p.stock_quantity >= 0
-            THEN p.pid
-          END as active_pid,
-          CASE
-            WHEN p.stock_quantity IS NULL
-            OR p.stock_quantity < 0
-            OR p.stock_quantity > 5000
-            THEN 0
-            ELSE p.stock_quantity
-          END as valid_stock
+    // Process in batches
+    let lastBrand = '';
+    let processedBrands = 0; // Track processed brands
+    while (true) {
+      if (isCancelled) break;
+
+      const [batch] = await connection.query(`
+        SELECT DISTINCT p.brand
         FROM products p
+        FORCE INDEX (idx_brand)
+        LEFT JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid AND o.updated > ?
         WHERE p.brand IS NOT NULL
-      ),
-      sales_periods AS (
+        AND p.brand > ?
+        AND (
+          p.updated > ?
+          OR o.id IS NOT NULL
+        )
+        ORDER BY p.brand
+        LIMIT ?
+      `, [lastCalculationTime, lastBrand, lastCalculationTime, BATCH_SIZE]);
+
+      if (batch.length === 0) break;
+
+      // Create temporary tables for better performance
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_product_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_sales_stats');
+
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_product_stats (
+          brand VARCHAR(100) NOT NULL,
+          product_count INT,
+          active_products INT,
+          total_stock_units INT,
+          total_stock_cost DECIMAL(15,2),
+          total_stock_retail DECIMAL(15,2),
+          total_revenue DECIMAL(15,2),
+          avg_margin DECIMAL(5,2),
+          PRIMARY KEY (brand),
+          INDEX (total_revenue),
+          INDEX (product_count)
+        ) ENGINE=MEMORY
+      `);
+
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_sales_stats (
+          brand VARCHAR(100) NOT NULL,
+          current_period_sales DECIMAL(15,2),
+          previous_period_sales DECIMAL(15,2),
+          PRIMARY KEY (brand),
+          INDEX (current_period_sales),
+          INDEX (previous_period_sales)
+        ) ENGINE=MEMORY
+      `);
+
+      // Populate product stats with optimized index usage
+      await connection.query(`
+        INSERT INTO temp_product_stats
         SELECT
           p.brand,
-          SUM(o.quantity * (o.price - COALESCE(o.discount, 0))) as period_revenue,
-          SUM(o.quantity * (o.price - COALESCE(o.discount, 0) - p.cost_price)) as period_margin,
-          COUNT(DISTINCT DATE(o.date)) as period_days,
-          CASE
-            WHEN o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 3 MONTH) THEN 'current'
-            WHEN o.date BETWEEN DATE_SUB(CURRENT_DATE, INTERVAL 15 MONTH)
-              AND DATE_SUB(CURRENT_DATE, INTERVAL 12 MONTH) THEN 'previous'
-          END as period_type
-        FROM filtered_products p
-        JOIN orders o ON p.pid = o.pid
-        WHERE o.canceled = false
-        AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 15 MONTH)
-        GROUP BY p.brand, period_type
-      ),
-      brand_data AS (
-        SELECT
-          p.brand,
-          COUNT(DISTINCT p.valid_pid) as product_count,
-          COUNT(DISTINCT p.active_pid) as active_products,
-          SUM(p.valid_stock) as total_stock_units,
-          SUM(p.valid_stock * p.cost_price) as total_stock_cost,
-          SUM(p.valid_stock * p.price) as total_stock_retail,
-          COALESCE(SUM(o.quantity * (o.price - COALESCE(o.discount, 0))), 0) as total_revenue,
-          CASE
-            WHEN SUM(o.quantity * o.price) > 0
-            THEN GREATEST(
-              -100.0,
-              LEAST(
-                100.0,
-                (
-                  SUM(o.quantity * o.price) - -- Use gross revenue (before discounts)
-                  SUM(o.quantity * COALESCE(p.cost_price, 0)) -- Total costs
-                ) * 100.0 /
-                NULLIF(SUM(o.quantity * o.price), 0) -- Divide by gross revenue
-              )
-            )
-            ELSE 0
-          END as avg_margin
-        FROM filtered_products p
-        LEFT JOIN orders o ON p.pid = o.pid AND o.canceled = false
-        GROUP BY p.brand
-      )
-      SELECT
-        bd.brand,
-        bd.product_count,
-        bd.active_products,
-        bd.total_stock_units,
-        bd.total_stock_cost,
-        bd.total_stock_retail,
-        bd.total_revenue,
-        bd.avg_margin,
-        CASE
-          WHEN MAX(CASE WHEN sp.period_type = 'previous' THEN sp.period_revenue END) = 0
-          AND MAX(CASE WHEN sp.period_type = 'current' THEN sp.period_revenue END) > 0
-          THEN 100.0
-          WHEN MAX(CASE WHEN sp.period_type = 'previous' THEN sp.period_revenue END) = 0
-          THEN 0.0
-          ELSE GREATEST(
-            -100.0,
-            LEAST(
-              ((MAX(CASE WHEN sp.period_type = 'current' THEN sp.period_revenue END) -
-                MAX(CASE WHEN sp.period_type = 'previous' THEN sp.period_revenue END)) /
-                NULLIF(ABS(MAX(CASE WHEN sp.period_type = 'previous' THEN sp.period_revenue END)), 0)) * 100.0,
-              999.99
+          COUNT(DISTINCT p.pid) as product_count,
+          COUNT(DISTINCT CASE WHEN p.visible = true THEN p.pid END) as active_products,
+          COALESCE(SUM(p.stock_quantity), 0) as total_stock_units,
+          COALESCE(SUM(p.stock_quantity * p.cost_price), 0) as total_stock_cost,
+          COALESCE(SUM(p.stock_quantity * p.price), 0) as total_stock_retail,
+          COALESCE(SUM(pm.total_revenue), 0) as total_revenue,
+          COALESCE(AVG(NULLIF(pm.avg_margin_percent, 0)), 0) as avg_margin
+        FROM products p
+        FORCE INDEX (idx_brand)
+        LEFT JOIN product_metrics pm FORCE INDEX (PRIMARY) ON p.pid = pm.pid
+        WHERE p.brand IN (?)
+        AND (
+          p.updated > ?
+          OR EXISTS (
+            SELECT 1 FROM orders o FORCE INDEX (idx_orders_metrics)
+            WHERE o.pid = p.pid
+            AND o.updated > ?
           )
         )
-        END as growth_rate
-      FROM brand_data bd
-      LEFT JOIN sales_periods sp ON bd.brand = sp.brand
-      GROUP BY bd.brand, bd.product_count, bd.active_products, bd.total_stock_units,
-        bd.total_stock_cost, bd.total_stock_retail, bd.total_revenue, bd.avg_margin
-      ON DUPLICATE KEY UPDATE
-        product_count = VALUES(product_count),
-        active_products = VALUES(active_products),
-        total_stock_units = VALUES(total_stock_units),
-        total_stock_cost = VALUES(total_stock_cost),
-        total_stock_retail = VALUES(total_stock_retail),
-        total_revenue = VALUES(total_revenue),
-        avg_margin = VALUES(avg_margin),
-        growth_rate = VALUES(growth_rate),
-        last_calculated_at = CURRENT_TIMESTAMP
-    `);
-
-    processedCount = Math.floor(totalProducts * 0.97);
-    outputProgress({
-      status: 'running',
-      operation: 'Brand metrics calculated, starting time-based metrics',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
-
-    if (isCancelled) return {
-      processedProducts: processedCount,
-      processedOrders,
-      processedPurchaseOrders: 0,
-      success
-    };
-
-    // Calculate brand time-based metrics with optimized query
-    await connection.query(`
-      INSERT INTO brand_time_metrics (
-        brand,
-        year,
-        month,
-        product_count,
-        active_products,
-        total_stock_units,
-        total_stock_cost,
-        total_stock_retail,
-        total_revenue,
-        avg_margin
-      )
-      WITH filtered_products AS (
-        SELECT
-          p.*,
-          CASE WHEN p.stock_quantity <= 5000 THEN p.pid END as valid_pid,
-          CASE WHEN p.visible = true AND p.stock_quantity <= 5000 THEN p.pid END as active_pid,
-          CASE
-            WHEN p.stock_quantity IS NULL OR p.stock_quantity < 0 OR p.stock_quantity > 5000 THEN 0
-            ELSE p.stock_quantity
-          END as valid_stock
-        FROM products p
-        WHERE p.brand IS NOT NULL
-      ),
-      monthly_metrics AS (
+        GROUP BY p.brand
+      `, [batch.map(row => row.brand), lastCalculationTime, lastCalculationTime]);
+
+      // Populate sales stats with optimized date handling
+      await connection.query(`
+        INSERT INTO temp_sales_stats
+        WITH date_ranges AS (
+          SELECT
+            DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY) as current_start,
+            CURRENT_DATE as current_end,
+            DATE_SUB(CURRENT_DATE, INTERVAL 60 DAY) as previous_start,
+            DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY) as previous_end
+        )
         SELECT
           p.brand,
-          YEAR(o.date) as year,
-          MONTH(o.date) as month,
-          COUNT(DISTINCT p.valid_pid) as product_count,
-          COUNT(DISTINCT p.active_pid) as active_products,
-          SUM(p.valid_stock) as total_stock_units,
-          SUM(p.valid_stock * p.cost_price) as total_stock_cost,
-          SUM(p.valid_stock * p.price) as total_stock_retail,
-          SUM(o.quantity * o.price) as total_revenue,
-          CASE
-            WHEN SUM(o.quantity * o.price) > 0
-            THEN GREATEST(
-              -100.0,
-              LEAST(
-                100.0,
-                (
-                  SUM(o.quantity * o.price) - -- Use gross revenue (before discounts)
-                  SUM(o.quantity * COALESCE(p.cost_price, 0)) -- Total costs
-                ) * 100.0 /
-                NULLIF(SUM(o.quantity * o.price), 0) -- Divide by gross revenue
-              )
-            )
+          COALESCE(SUM(
+            CASE WHEN o.date >= dr.current_start
+              THEN o.quantity * o.price
             ELSE 0
-          END as avg_margin
-        FROM filtered_products p
-        LEFT JOIN orders o ON p.pid = o.pid AND o.canceled = false
-        WHERE o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 12 MONTH)
-        GROUP BY p.brand, YEAR(o.date), MONTH(o.date)
-      )
-      SELECT *
-      FROM monthly_metrics
-      ON DUPLICATE KEY UPDATE
-        product_count = VALUES(product_count),
-        active_products = VALUES(active_products),
-        total_stock_units = VALUES(total_stock_units),
-        total_stock_cost = VALUES(total_stock_cost),
-        total_stock_retail = VALUES(total_stock_retail),
-        total_revenue = VALUES(total_revenue),
-        avg_margin = VALUES(avg_margin)
-    `);
-
-    processedCount = Math.floor(totalProducts * 0.99);
-    outputProgress({
-      status: 'running',
-      operation: 'Brand time-based metrics calculated',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
+            END
+          ), 0) as current_period_sales,
+          COALESCE(SUM(
+            CASE WHEN o.date >= dr.previous_start AND o.date < dr.current_start
+              THEN o.quantity * o.price
+            ELSE 0
+            END
+          ), 0) as previous_period_sales
+        FROM products p
+        FORCE INDEX (idx_brand)
+        INNER JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid
+        CROSS JOIN date_ranges dr
+        WHERE p.brand IN (?)
+        AND o.canceled = false
+        AND o.date >= dr.previous_start
+        AND o.updated > ?
+        GROUP BY p.brand
+      `, [batch.map(row => row.brand), lastCalculationTime]);
+
+      // Update metrics using temp tables with optimized calculations
+      await connection.query(`
+        INSERT INTO brand_metrics (
+          brand,
+          product_count,
+          active_products,
+          total_stock_units,
+          total_stock_cost,
+          total_stock_retail,
+          total_revenue,
+          avg_margin,
+          growth_rate,
+          last_calculated_at
+        )
+        SELECT
+          ps.brand,
+          ps.product_count,
+          ps.active_products,
+          ps.total_stock_units,
+          ps.total_stock_cost,
+          ps.total_stock_retail,
+          ps.total_revenue,
+          ps.avg_margin,
+          CASE
+            WHEN COALESCE(ss.previous_period_sales, 0) = 0 AND COALESCE(ss.current_period_sales, 0) > 0 THEN 100
+            WHEN COALESCE(ss.previous_period_sales, 0) = 0 THEN 0
+            ELSE ROUND(LEAST(999.99, GREATEST(-100,
+              ((ss.current_period_sales / NULLIF(ss.previous_period_sales, 0)) - 1) * 100
+            )), 2)
+          END as growth_rate,
+          NOW() as last_calculated_at
+        FROM temp_product_stats ps
+        LEFT JOIN temp_sales_stats ss ON ps.brand = ss.brand
+        ON DUPLICATE KEY UPDATE
+          product_count = VALUES(product_count),
+          active_products = VALUES(active_products),
+          total_stock_units = VALUES(total_stock_units),
+          total_stock_cost = VALUES(total_stock_cost),
+          total_stock_retail = VALUES(total_stock_retail),
+          total_revenue = VALUES(total_revenue),
+          avg_margin = VALUES(avg_margin),
+          growth_rate = VALUES(growth_rate),
+          last_calculated_at = NOW()
+      `);
+
+      // Clean up temp tables
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_product_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_sales_stats');
+
+      lastBrand = batch[batch.length - 1].brand;
+      processedBrands += batch.length; // Increment processed *brands*
+
+      outputProgress({
+        status: 'running',
+        operation: 'Processing brand metrics batch',
+        current: processedCount + processedBrands, // Use cumulative brand count
+        total: totalBrands, // Report total *brands*
+        elapsed: formatElapsedTime(startTime),
+        remaining: estimateRemaining(startTime, processedCount + processedBrands, totalBrands),
+        rate: calculateRate(startTime, processedCount + processedBrands),
+        percentage: (((processedCount + processedBrands) / totalBrands) * 100).toFixed(1), // Base on brands
+        timing: {
+          start_time: new Date(startTime).toISOString(),
+          end_time: new Date().toISOString(),
+          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
+        }
+      });
+    }

     // If we get here, everything completed successfully
     success = true;

     // Update calculate_status
     await connection.query(`
       INSERT INTO calculate_status (module_name, last_calculation_timestamp)
@@ -298,8 +275,8 @@ async function calculateBrandMetrics(startTime, totalProducts, processedCount =
     `);

     return {
-      processedProducts: processedCount,
-      processedOrders,
+      processedProducts: 0, // Not directly processing products
+      processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
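The rewritten calculator pages through its key set with keyset pagination: remember the last brand seen, select `WHERE p.brand > ? ORDER BY p.brand LIMIT ?`, and stop on an empty batch, so every query is an index range scan instead of an ever-growing OFFSET. A toy sketch of that loop (a plain array stands in for the table; all names here are illustrative, not from the repo):

```javascript
// Stand-in for: SELECT DISTINCT brand ... WHERE brand > ? ORDER BY brand LIMIT ?
function fetchBatch(allBrands, lastBrand, batchSize) {
  return allBrands.filter(b => b > lastBrand).sort().slice(0, batchSize);
}

// Mirrors the while (true) / if (batch.length === 0) break; shape of the diff.
function processAllBrands(allBrands, batchSize) {
  const processed = [];
  let lastBrand = '';
  while (true) {
    const batch = fetchBatch(allBrands, lastBrand, batchSize);
    if (batch.length === 0) break;        // same exit condition as the new code
    processed.push(...batch);             // stand-in for the per-batch queries
    lastBrand = batch[batch.length - 1];  // advance the keyset cursor
  }
  return processed;
}

const result = processAllBrands(['Zeta', 'Acme', 'Mid'], 2);
```

The cursor column must be the leading column of an index (here `idx_brand`) for the range scan to pay off.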
@@ -4,19 +4,53 @@ const { getConnection } = require('./utils/db');
|
|||||||
async function calculateCategoryMetrics(startTime, totalProducts, processedCount = 0, isCancelled = false) {
|
async function calculateCategoryMetrics(startTime, totalProducts, processedCount = 0, isCancelled = false) {
|
||||||
const connection = await getConnection();
|
const connection = await getConnection();
|
||||||
let success = false;
|
let success = false;
|
||||||
let processedOrders = 0;
|
const BATCH_SIZE = 5000;
|
||||||
|
let myProcessedProducts = 0; // Not *directly* processing products, but tracking categories
|
||||||
|
|
||||||
try {
|
try {
|
||||||
|
// Get last calculation timestamp
|
||||||
|
const [lastCalc] = await connection.query(`
|
||||||
|
SELECT last_calculation_timestamp
|
||||||
|
FROM calculate_status
|
||||||
|
WHERE module_name = 'category_metrics'
|
||||||
|
`);
|
||||||
|
const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
|
||||||
|
|
||||||
|
// Get total count of categories needing updates
|
||||||
|
const [categoryCount] = await connection.query(`
|
||||||
|
SELECT COUNT(DISTINCT c.cat_id) as count
|
||||||
|
FROM categories c
|
||||||
|
JOIN product_categories pc ON c.cat_id = pc.cat_id
|
||||||
|
LEFT JOIN products p ON pc.pid = p.pid AND p.updated > ?
|
||||||
|
LEFT JOIN orders o ON p.pid = o.pid AND o.updated > ?
|
||||||
|
WHERE c.status = 'active'
|
||||||
|
AND (
|
||||||
|
p.pid IS NOT NULL
|
||||||
|
OR o.id IS NOT NULL
|
||||||
|
)
|
||||||
|
`, [lastCalculationTime, lastCalculationTime]);
|
||||||
|
const totalCategories = categoryCount[0].count; // Track total *categories*
|
||||||
|
|
||||||
|
if (totalCategories === 0) {
|
||||||
|
console.log('No categories need metric updates');
|
||||||
|
return {
|
||||||
|
processedProducts: 0, // Not directly processing products
|
||||||
|
processedOrders: 0,
|
||||||
|
processedPurchaseOrders: 0,
|
||||||
|
success: true
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
if (isCancelled) {
|
if (isCancelled) {
|
||||||
outputProgress({
|
outputProgress({
|
||||||
status: 'cancelled',
|
status: 'cancelled',
|
||||||
operation: 'Category metrics calculation cancelled',
|
operation: 'Category metrics calculation cancelled',
|
||||||
current: processedCount,
|
current: processedCount, // Use passed-in value
|
||||||
total: totalProducts,
|
total: totalCategories, // Report total *categories*
|
||||||
elapsed: formatElapsedTime(startTime),
|
elapsed: formatElapsedTime(startTime),
|
||||||
remaining: null,
|
remaining: null,
|
||||||
rate: calculateRate(startTime, processedCount),
|
rate: calculateRate(startTime, processedCount),
|
||||||
percentage: ((processedCount / totalProducts) * 100).toFixed(1),
|
percentage: ((processedCount / totalCategories) * 100).toFixed(1), // Base on categories
|
||||||
timing: {
|
timing: {
|
||||||
start_time: new Date(startTime).toISOString(),
|
start_time: new Date(startTime).toISOString(),
|
||||||
end_time: new Date().toISOString(),
|
end_time: new Date().toISOString(),
|
||||||
@@ -24,76 +58,22 @@ async function calculateCategoryMetrics(startTime, totalProducts, processedCount
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
return {
|
return {
|
||||||
processedProducts: processedCount,
|
processedProducts: 0, // Not directly processing products
|
||||||
processedOrders: 0,
|
processedOrders: 0,
|
||||||
processedPurchaseOrders: 0,
|
processedPurchaseOrders: 0,
|
||||||
success
|
success
|
||||||
};
|
};
|
||||||
}
|
}
|
||||||
|
|
||||||
// Get order count that will be processed
|
|
||||||
const [orderCount] = await connection.query(`
|
|
||||||
SELECT COUNT(*) as count
|
|
||||||
FROM orders o
|
|
||||||
WHERE o.canceled = false
|
|
||||||
`);
|
|
||||||
processedOrders = orderCount[0].count;
|
|
||||||
|
|
||||||
outputProgress({
|
outputProgress({
|
||||||
status: 'running',
|
status: 'running',
|
||||||
operation: 'Starting category metrics calculation',
|
operation: 'Starting category metrics calculation',
|
||||||
current: processedCount,
|
current: processedCount, // Use passed-in value
|
||||||
total: totalProducts,
|
total: totalCategories, // Report total *categories*
|
||||||
elapsed: formatElapsedTime(startTime),
|
elapsed: formatElapsedTime(startTime),
|
||||||
remaining: estimateRemaining(startTime, processedCount, totalProducts),
|
remaining: estimateRemaining(startTime, processedCount, totalCategories),
|
||||||
rate: calculateRate(startTime, processedCount),
|
rate: calculateRate(startTime, processedCount),
|
||||||
percentage: ((processedCount / totalProducts) * 100).toFixed(1),
|
percentage: ((processedCount / totalCategories) * 100).toFixed(1), // Base on categories
|
||||||
timing: {
|
|
||||||
start_time: new Date(startTime).toISOString(),
|
|
||||||
end_time: new Date().toISOString(),
|
|
||||||
elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
// First, calculate base category metrics
|
|
||||||
await connection.query(`
|
|
||||||
INSERT INTO category_metrics (
|
|
||||||
category_id,
|
|
||||||
product_count,
|
|
||||||
active_products,
|
|
||||||
total_value,
|
|
||||||
status,
|
|
||||||
last_calculated_at
|
|
||||||
)
|
|
||||||
SELECT
|
|
||||||
c.cat_id,
|
|
||||||
COUNT(DISTINCT p.pid) as product_count,
|
|
||||||
COUNT(DISTINCT CASE WHEN p.visible = true THEN p.pid END) as active_products,
|
|
||||||
COALESCE(SUM(p.stock_quantity * p.cost_price), 0) as total_value,
|
|
||||||
c.status,
|
|
||||||
NOW() as last_calculated_at
|
|
||||||
FROM categories c
|
|
||||||
LEFT JOIN product_categories pc ON c.cat_id = pc.cat_id
|
|
||||||
LEFT JOIN products p ON pc.pid = p.pid
|
|
||||||
GROUP BY c.cat_id, c.status
|
|
||||||
ON DUPLICATE KEY UPDATE
|
|
||||||
product_count = VALUES(product_count),
|
|
||||||
active_products = VALUES(active_products),
|
|
||||||
total_value = VALUES(total_value),
|
|
||||||
status = VALUES(status),
|
|
||||||
last_calculated_at = VALUES(last_calculated_at)
|
|
||||||
`);
|
|
||||||
|
|
||||||
processedCount = Math.floor(totalProducts * 0.90);
|
|
||||||
outputProgress({
|
|
||||||
status: 'running',
|
|
||||||
operation: 'Base category metrics calculated, updating with margin data',
|
|
||||||
current: processedCount,
|
|
||||||
total: totalProducts,
|
|
||||||
elapsed: formatElapsedTime(startTime),
|
|
||||||
remaining: estimateRemaining(startTime, processedCount, totalProducts),
|
|
||||||
rate: calculateRate(startTime, processedCount),
|
|
||||||
percentage: ((processedCount / totalProducts) * 100).toFixed(1),
|
|
||||||
timing: {
|
timing: {
|
||||||
start_time: new Date(startTime).toISOString(),
|
start_time: new Date(startTime).toISOString(),
|
||||||
end_time: new Date().toISOString(),
|
end_time: new Date().toISOString(),
|
||||||
@@ -101,399 +81,196 @@ async function calculateCategoryMetrics(startTime, totalProducts, processedCount
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
if (isCancelled) return {
|
// Process in batches
|
||||||
processedProducts: processedCount,
|
let lastCatId = 0;
|
||||||
processedOrders,
|
let processedCategories = 0; // Track processed categories
|
||||||
processedPurchaseOrders: 0,
|
while (true) {
|
||||||
success
|
if (isCancelled) break;
|
||||||
};
|
|
||||||
|
|
||||||
// Then update with margin and turnover data
|
const [batch] = await connection.query(`
|
||||||
await connection.query(`
|
SELECT DISTINCT c.cat_id
|
||||||
WITH category_sales AS (
|
FROM categories c
|
||||||
|
FORCE INDEX (PRIMARY)
|
||||||
|
JOIN product_categories pc FORCE INDEX (idx_category) ON c.cat_id = pc.cat_id
|
||||||
|
LEFT JOIN products p FORCE INDEX (PRIMARY) ON pc.pid = p.pid AND p.updated > ?
|
||||||
|
LEFT JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid AND o.updated > ?
|
||||||
|
WHERE c.status = 'active'
|
||||||
|
AND c.cat_id > ?
|
||||||
|
AND (
|
||||||
|
p.pid IS NOT NULL
|
||||||
|
OR o.id IS NOT NULL
|
||||||
|
)
|
||||||
|
ORDER BY c.cat_id
|
||||||
|
LIMIT ?
|
||||||
|
`, [lastCalculationTime, lastCalculationTime, lastCatId, BATCH_SIZE]);
|
||||||
|
|
||||||
|
if (batch.length === 0) break;
|
||||||
|
|
||||||
|
// Create temporary tables for better performance
|
||||||
|
await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_product_stats');
|
||||||
|
await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_sales_stats');
|
||||||
|
|
||||||
|
await connection.query(`
|
||||||
|
CREATE TEMPORARY TABLE temp_product_stats (
|
||||||
|
cat_id BIGINT NOT NULL,
|
||||||
|
product_count INT,
|
||||||
|
active_products INT,
|
||||||
|
total_value DECIMAL(15,2),
|
||||||
|
avg_margin DECIMAL(5,2),
|
||||||
|
turnover_rate DECIMAL(10,2),
|
||||||
|
PRIMARY KEY (cat_id),
|
||||||
|
INDEX (product_count),
|
||||||
|
INDEX (total_value)
|
||||||
|
) ENGINE=MEMORY
|
||||||
|
`);
|
||||||
|
|
||||||
|
await connection.query(`
|
||||||
|
CREATE TEMPORARY TABLE temp_sales_stats (
|
||||||
|
cat_id BIGINT NOT NULL,
|
||||||
|
recent_revenue DECIMAL(15,2),
|
||||||
|
previous_revenue DECIMAL(15,2),
|
||||||
|
PRIMARY KEY (cat_id),
|
||||||
|
INDEX (recent_revenue),
|
||||||
|
INDEX (previous_revenue)
|
||||||
|
) ENGINE=MEMORY
|
||||||
|
`);
|
||||||
|
|
||||||
|
// Populate product stats with optimized index usage
|
||||||
|
await connection.query(`
|
||||||
|
INSERT INTO temp_product_stats
|
||||||
-          SELECT
-            pc.cat_id,
-            SUM(o.quantity * o.price) as total_sales,
-            SUM(o.quantity * (o.price - p.cost_price)) as total_margin,
-            SUM(o.quantity) as units_sold,
-            AVG(GREATEST(p.stock_quantity, 0)) as avg_stock,
-            COUNT(DISTINCT DATE(o.date)) as active_days
-          FROM product_categories pc
-          JOIN products p ON pc.pid = p.pid
-          JOIN orders o ON p.pid = o.pid
-          LEFT JOIN turnover_config tc ON
-            (tc.category_id = pc.cat_id AND tc.vendor = p.vendor) OR
-            (tc.category_id = pc.cat_id AND tc.vendor IS NULL) OR
-            (tc.category_id IS NULL AND tc.vendor = p.vendor) OR
-            (tc.category_id IS NULL AND tc.vendor IS NULL)
-          WHERE o.canceled = false
-            AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL COALESCE(tc.calculation_period_days, 30) DAY)
-          GROUP BY pc.cat_id
-        )
-        UPDATE category_metrics cm
-        JOIN category_sales cs ON cm.category_id = cs.cat_id
-        LEFT JOIN turnover_config tc ON
-          (tc.category_id = cm.category_id AND tc.vendor IS NULL) OR
-          (tc.category_id IS NULL AND tc.vendor IS NULL)
-        SET
-          cm.avg_margin = COALESCE(cs.total_margin * 100.0 / NULLIF(cs.total_sales, 0), 0),
-          cm.turnover_rate = CASE
-            WHEN cs.avg_stock > 0 AND cs.active_days > 0
-            THEN LEAST(
-              (cs.units_sold / cs.avg_stock) * (365.0 / cs.active_days),
-              999.99
-            )
-            ELSE 0
-          END,
-          cm.last_calculated_at = NOW()
-      `);
+        SELECT
+          c.cat_id,
+          COUNT(DISTINCT p.pid) as product_count,
+          COUNT(DISTINCT CASE WHEN p.visible = true THEN p.pid END) as active_products,
+          COALESCE(SUM(p.stock_quantity * p.cost_price), 0) as total_value,
+          COALESCE(AVG(NULLIF(pm.avg_margin_percent, 0)), 0) as avg_margin,
+          COALESCE(AVG(NULLIF(pm.turnover_rate, 0)), 0) as turnover_rate
+        FROM categories c
+        FORCE INDEX (PRIMARY)
+        INNER JOIN product_categories pc FORCE INDEX (idx_category) ON c.cat_id = pc.cat_id
+        LEFT JOIN products p FORCE INDEX (PRIMARY) ON pc.pid = p.pid
+        LEFT JOIN product_metrics pm FORCE INDEX (PRIMARY) ON p.pid = pm.pid
+        WHERE c.cat_id IN (?)
+          AND (
+            p.updated > ?
+            OR EXISTS (
+              SELECT 1 FROM orders o FORCE INDEX (idx_orders_metrics)
+              WHERE o.pid = p.pid
+                AND o.updated > ?
+            )
+          )
+        GROUP BY c.cat_id
+      `, [batch.map(row => row.cat_id), lastCalculationTime, lastCalculationTime]);
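The removed turnover calculation annualizes units sold over the observed active days, relative to average stock on hand, and caps the result at 999.99 via `LEAST(...)`. A minimal JavaScript sketch of that formula (function and parameter names are illustrative, not from the codebase):

```javascript
// Sketch of the SQL turnover-rate formula (illustrative names):
// annualize unit sales against average stock, cap at 999.99.
function turnoverRate({ unitsSold, avgStock, activeDays }) {
  if (avgStock <= 0 || activeDays <= 0) return 0; // mirrors the CASE ... ELSE 0 branch
  const annualized = (unitsSold / avgStock) * (365.0 / activeDays);
  return Math.min(annualized, 999.99);
}
```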
-      processedCount = Math.floor(totalProducts * 0.95);
-      outputProgress({
-        status: 'running',
-        operation: 'Margin data updated, calculating growth rates',
-        current: processedCount,
-        total: totalProducts,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedCount, totalProducts),
-        rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-        timing: {
-          start_time: new Date(startTime).toISOString(),
-          end_time: new Date().toISOString(),
-          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-        }
-      });
-
-      if (isCancelled) return {
-        processedProducts: processedCount,
-        processedOrders,
-        processedPurchaseOrders: 0,
-        success
-      };
-
-      // Finally update growth rates
-      await connection.query(`
-        WITH current_period AS (
-          SELECT
-            pc.cat_id,
-            SUM(o.quantity * (o.price - COALESCE(o.discount, 0)) /
-              (1 + COALESCE(ss.seasonality_factor, 0))) as revenue,
-            SUM(o.quantity * (o.price - COALESCE(o.discount, 0) - p.cost_price)) as gross_profit,
-            COUNT(DISTINCT DATE(o.date)) as days
-          FROM product_categories pc
-          JOIN products p ON pc.pid = p.pid
-          JOIN orders o ON p.pid = o.pid
-          LEFT JOIN sales_seasonality ss ON MONTH(o.date) = ss.month
-          WHERE o.canceled = false
-            AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 3 MONTH)
-          GROUP BY pc.cat_id
-        ),
-        previous_period AS (
-          SELECT
-            pc.cat_id,
-            SUM(o.quantity * (o.price - COALESCE(o.discount, 0)) /
-              (1 + COALESCE(ss.seasonality_factor, 0))) as revenue,
+      // Populate sales stats with optimized date handling
+      await connection.query(`
+        INSERT INTO temp_sales_stats
+        WITH date_ranges AS (
+          SELECT
+            DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY) as current_start,
+            CURRENT_DATE as current_end,
+            DATE_SUB(CURRENT_DATE, INTERVAL 60 DAY) as previous_start,
+            DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY) as previous_end
+        )
+        SELECT
+          c.cat_id,
+          COALESCE(SUM(
+            CASE WHEN o.date >= dr.current_start
+              THEN o.quantity * o.price
+              ELSE 0
+            END
+          ), 0) as recent_revenue,
+          COALESCE(SUM(
+            CASE WHEN o.date >= dr.previous_start AND o.date < dr.current_start
+              THEN o.quantity * o.price
+              ELSE 0
+            END
+          ), 0) as previous_revenue
+        FROM categories c
+        FORCE INDEX (PRIMARY)
+        INNER JOIN product_categories pc FORCE INDEX (idx_category) ON c.cat_id = pc.cat_id
+        INNER JOIN products p FORCE INDEX (PRIMARY) ON pc.pid = p.pid
+        INNER JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid
-            COUNT(DISTINCT DATE(o.date)) as days
-          FROM product_categories pc
-          JOIN products p ON pc.pid = p.pid
-          JOIN orders o ON p.pid = o.pid
-          LEFT JOIN sales_seasonality ss ON MONTH(o.date) = ss.month
-          WHERE o.canceled = false
-            AND o.date BETWEEN DATE_SUB(CURRENT_DATE, INTERVAL 15 MONTH)
-            AND DATE_SUB(CURRENT_DATE, INTERVAL 12 MONTH)
-          GROUP BY pc.cat_id
-        ),
-        trend_data AS (
-          SELECT
-            pc.cat_id,
-            MONTH(o.date) as month,
-            SUM(o.quantity * (o.price - COALESCE(o.discount, 0)) /
-              (1 + COALESCE(ss.seasonality_factor, 0))) as revenue,
-            COUNT(DISTINCT DATE(o.date)) as days_in_month
-          FROM product_categories pc
-          JOIN products p ON pc.pid = p.pid
-          JOIN orders o ON p.pid = o.pid
-          LEFT JOIN sales_seasonality ss ON MONTH(o.date) = ss.month
-          WHERE o.canceled = false
-            AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 15 MONTH)
-          GROUP BY pc.cat_id, MONTH(o.date)
-        ),
-        trend_stats AS (
-          SELECT
-            cat_id,
-            COUNT(*) as n,
-            AVG(month) as avg_x,
-            AVG(revenue / NULLIF(days_in_month, 0)) as avg_y,
-            SUM(month * (revenue / NULLIF(days_in_month, 0))) as sum_xy,
-            SUM(month * month) as sum_xx
-          FROM trend_data
-          GROUP BY cat_id
-          HAVING COUNT(*) >= 6
-        ),
-        trend_analysis AS (
-          SELECT
-            cat_id,
-            ((n * sum_xy) - (avg_x * n * avg_y)) /
-              NULLIF((n * sum_xx) - (n * avg_x * avg_x), 0) as trend_slope,
-            avg_y as avg_daily_revenue
-          FROM trend_stats
-        ),
-        margin_calc AS (
-          SELECT
-            pc.cat_id,
-            CASE
-              WHEN SUM(o.quantity * o.price) > 0 THEN
-                GREATEST(
-                  -100.0,
-                  LEAST(
-                    100.0,
-                    (
-                      SUM(o.quantity * o.price) - -- Use gross revenue (before discounts)
-                      SUM(o.quantity * COALESCE(p.cost_price, 0)) -- Total costs
-                    ) * 100.0 /
-                    NULLIF(SUM(o.quantity * o.price), 0) -- Divide by gross revenue
-                  )
-                )
-              ELSE NULL
-            END as avg_margin
-          FROM product_categories pc
-          JOIN products p ON pc.pid = p.pid
-          JOIN orders o ON p.pid = o.pid
-          WHERE o.canceled = false
-            AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 3 MONTH)
-          GROUP BY pc.cat_id
-        )
-        UPDATE category_metrics cm
-        LEFT JOIN current_period cp ON cm.category_id = cp.cat_id
-        LEFT JOIN previous_period pp ON cm.category_id = pp.cat_id
-        LEFT JOIN trend_analysis ta ON cm.category_id = ta.cat_id
-        LEFT JOIN margin_calc mc ON cm.category_id = mc.cat_id
-        SET
-          cm.growth_rate = CASE
-            WHEN pp.revenue = 0 AND COALESCE(cp.revenue, 0) > 0 THEN 100.0
-            WHEN pp.revenue = 0 OR cp.revenue IS NULL THEN 0.0
-            WHEN ta.trend_slope IS NOT NULL THEN
-              GREATEST(
-                -100.0,
-                LEAST(
-                  (ta.trend_slope / NULLIF(ta.avg_daily_revenue, 0)) * 365 * 100,
-                  999.99
-                )
-              )
-            ELSE
-              GREATEST(
-                -100.0,
-                LEAST(
-                  ((COALESCE(cp.revenue, 0) - pp.revenue) /
-                    NULLIF(ABS(pp.revenue), 0)) * 100.0,
-                  999.99
-                )
-              )
-          END,
-          cm.avg_margin = COALESCE(mc.avg_margin, cm.avg_margin),
-          cm.last_calculated_at = NOW()
-        WHERE cp.cat_id IS NOT NULL OR pp.cat_id IS NOT NULL
-      `);
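Both the removed growth-rate query and its replacement clamp period-over-period growth into the range [-100, 999.99] percent, with special cases when the previous period had no revenue. A small JavaScript sketch of the simple-ratio branch (names are illustrative, not from the codebase):

```javascript
// Sketch of the growth-rate clamping used in the SQL above
// (mirrors GREATEST(-100, LEAST(..., 999.99)) plus the zero-revenue special cases).
function growthRatePercent(recentRevenue, previousRevenue) {
  if (previousRevenue === 0 && recentRevenue > 0) return 100; // new revenue from nothing
  if (previousRevenue === 0) return 0;                        // still no revenue
  const raw = ((recentRevenue / previousRevenue) - 1) * 100;
  return Math.max(-100, Math.min(raw, 999.99));
}
```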
-      processedCount = Math.floor(totalProducts * 0.97);
-      outputProgress({
-        status: 'running',
-        operation: 'Growth rates calculated, updating time-based metrics',
-        current: processedCount,
-        total: totalProducts,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedCount, totalProducts),
-        rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-        timing: {
-          start_time: new Date(startTime).toISOString(),
-          end_time: new Date().toISOString(),
-          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-        }
-      });
-
-      if (isCancelled) return {
-        processedProducts: processedCount,
-        processedOrders,
-        processedPurchaseOrders: 0,
-        success
-      };
-
-      // Calculate time-based metrics
-      await connection.query(`
-        INSERT INTO category_time_metrics (
-          category_id,
-          year,
-          month,
-          product_count,
-          active_products,
-          total_value,
-          total_revenue,
-          avg_margin,
-          turnover_rate
-        )
-        SELECT
-          pc.cat_id,
-          YEAR(o.date) as year,
-          MONTH(o.date) as month,
-          COUNT(DISTINCT p.pid) as product_count,
-          COUNT(DISTINCT CASE WHEN p.visible = true THEN p.pid END) as active_products,
-          SUM(p.stock_quantity * p.cost_price) as total_value,
-          SUM(o.quantity * o.price) as total_revenue,
-          CASE
-            WHEN SUM(o.quantity * o.price) > 0 THEN
-              LEAST(
-                GREATEST(
-                  SUM(o.quantity * (o.price - GREATEST(p.cost_price, 0))) * 100.0 /
-                  SUM(o.quantity * o.price),
-                  -100
-                ),
-                100
-              )
-            ELSE 0
-          END as avg_margin,
-          COALESCE(
-            LEAST(
-              SUM(o.quantity) / NULLIF(AVG(GREATEST(p.stock_quantity, 0)), 0),
-              999.99
-            ),
-            0
-          ) as turnover_rate
-        FROM product_categories pc
-        JOIN products p ON pc.pid = p.pid
-        JOIN orders o ON p.pid = o.pid
-        WHERE o.canceled = false
-          AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 12 MONTH)
-        GROUP BY pc.cat_id, YEAR(o.date), MONTH(o.date)
-        ON DUPLICATE KEY UPDATE
-          product_count = VALUES(product_count),
-          active_products = VALUES(active_products),
-          total_value = VALUES(total_value),
-          total_revenue = VALUES(total_revenue),
-          avg_margin = VALUES(avg_margin),
-          turnover_rate = VALUES(turnover_rate)
-      `);
-
-      processedCount = Math.floor(totalProducts * 0.99);
-      outputProgress({
-        status: 'running',
-        operation: 'Time-based metrics calculated, updating category-sales metrics',
-        current: processedCount,
-        total: totalProducts,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedCount, totalProducts),
-        rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-        timing: {
-          start_time: new Date(startTime).toISOString(),
-          end_time: new Date().toISOString(),
-          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-        }
-      });
-
-      if (isCancelled) return {
-        processedProducts: processedCount,
-        processedOrders,
-        processedPurchaseOrders: 0,
-        success
-      };
-
-      // Calculate category-sales metrics
-      await connection.query(`
-        INSERT INTO category_sales_metrics (
-          category_id,
-          brand,
-          period_start,
-          period_end,
-          avg_daily_sales,
-          total_sold,
-          num_products,
-          avg_price,
-          last_calculated_at
-        )
-        WITH date_ranges AS (
-          SELECT
-            DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY) as period_start,
-            CURRENT_DATE as period_end
-          UNION ALL
-          SELECT
-            DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY),
-            DATE_SUB(CURRENT_DATE, INTERVAL 31 DAY)
-          UNION ALL
-          SELECT
-            DATE_SUB(CURRENT_DATE, INTERVAL 180 DAY),
-            DATE_SUB(CURRENT_DATE, INTERVAL 91 DAY)
-          UNION ALL
-          SELECT
-            DATE_SUB(CURRENT_DATE, INTERVAL 365 DAY),
-            DATE_SUB(CURRENT_DATE, INTERVAL 181 DAY)
-        ),
-        sales_data AS (
-          SELECT
-            pc.cat_id,
-            COALESCE(p.brand, 'Unknown') as brand,
-            dr.period_start,
-            dr.period_end,
-            COUNT(DISTINCT p.pid) as num_products,
-            SUM(o.quantity) as total_sold,
-            SUM(o.quantity * o.price) as total_revenue,
-            COUNT(DISTINCT DATE(o.date)) as num_days
-          FROM products p
-          JOIN product_categories pc ON p.pid = pc.pid
-          JOIN orders o ON p.pid = o.pid
-          CROSS JOIN date_ranges dr
-          WHERE o.canceled = false
-            AND o.date BETWEEN dr.period_start AND dr.period_end
-          GROUP BY pc.cat_id, p.brand, dr.period_start, dr.period_end
-        )
-        SELECT
-          cat_id as category_id,
+        CROSS JOIN date_ranges dr
+        WHERE c.cat_id IN (?)
+          AND o.canceled = false
+          AND o.date >= dr.previous_start
+          AND o.updated > ?
+        GROUP BY c.cat_id
+      `, [batch.map(row => row.cat_id), lastCalculationTime]);
-          brand,
-          period_start,
-          period_end,
-          CASE
-            WHEN num_days > 0
-            THEN total_sold / num_days
-            ELSE 0
-          END as avg_daily_sales,
-          total_sold,
-          num_products,
-          CASE
-            WHEN total_sold > 0
-            THEN total_revenue / total_sold
-            ELSE 0
-          END as avg_price,
-          NOW() as last_calculated_at
-        FROM sales_data
-        ON DUPLICATE KEY UPDATE
-          avg_daily_sales = VALUES(avg_daily_sales),
-          total_sold = VALUES(total_sold),
-          num_products = VALUES(num_products),
-          avg_price = VALUES(avg_price),
-          last_calculated_at = VALUES(last_calculated_at)
-      `);
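The removed `category_sales_metrics` SELECT derives `avg_daily_sales` and `avg_price` with explicit zero-guards so empty periods never divide by zero. Sketched in plain JavaScript (names are illustrative, not from the codebase):

```javascript
// Sketch of the guarded per-period averages above (illustrative names):
// each CASE ... ELSE 0 becomes a ternary with the same zero-guard.
function periodAverages({ totalSold, totalRevenue, numDays }) {
  return {
    avgDailySales: numDays > 0 ? totalSold / numDays : 0,
    avgPrice: totalSold > 0 ? totalRevenue / totalSold : 0
  };
}
```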
-      processedCount = Math.floor(totalProducts * 1.0);
-      outputProgress({
-        status: 'running',
-        operation: 'Category-sales metrics calculated',
-        current: processedCount,
-        total: totalProducts,
-        elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedCount, totalProducts),
-        rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-        timing: {
-          start_time: new Date(startTime).toISOString(),
-          end_time: new Date().toISOString(),
-          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-        }
-      });
+      // Update metrics using temp tables with optimized calculations
+      await connection.query(`
+        INSERT INTO category_metrics (
+          category_id,
+          product_count,
+          active_products,
+          total_value,
+          avg_margin,
+          turnover_rate,
+          growth_rate,
+          status,
+          last_calculated_at
+        )
+        SELECT
+          c.cat_id,
+          ps.product_count,
+          ps.active_products,
+          ps.total_value,
+          ps.avg_margin,
+          ps.turnover_rate,
+          CASE
+            WHEN COALESCE(ss.previous_revenue, 0) = 0 AND COALESCE(ss.recent_revenue, 0) > 0 THEN 100
+            WHEN COALESCE(ss.previous_revenue, 0) = 0 THEN 0
+            ELSE ROUND(LEAST(999.99, GREATEST(-100,
+              ((ss.recent_revenue / NULLIF(ss.previous_revenue, 0)) - 1) * 100
+            )), 2)
+          END as growth_rate,
+          c.status,
+          NOW() as last_calculated_at
+        FROM categories c
+        FORCE INDEX (PRIMARY)
+        LEFT JOIN temp_product_stats ps ON c.cat_id = ps.cat_id
+        LEFT JOIN temp_sales_stats ss ON c.cat_id = ss.cat_id
+        WHERE c.cat_id IN (?)
+        ON DUPLICATE KEY UPDATE
+          product_count = VALUES(product_count),
+          active_products = VALUES(active_products),
+          total_value = VALUES(total_value),
+          avg_margin = VALUES(avg_margin),
+          turnover_rate = VALUES(turnover_rate),
+          growth_rate = VALUES(growth_rate),
+          status = VALUES(status),
+          last_calculated_at = NOW()
+      `, [batch.map(row => row.cat_id)]);
+      // Clean up temp tables
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_product_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_sales_stats');
+
+      lastCatId = batch[batch.length - 1].cat_id;
+      processedCategories += batch.length; // Increment processed *categories*
+
+      outputProgress({
+        status: 'running',
+        operation: 'Processing category metrics batch',
+        current: processedCount + processedCategories, // Use cumulative category count
+        total: totalCategories, // Report total *categories*
+        elapsed: formatElapsedTime(startTime),
+        remaining: estimateRemaining(startTime, processedCount + processedCategories, totalCategories),
+        rate: calculateRate(startTime, processedCount + processedCategories),
+        percentage: (((processedCount + processedCategories) / totalCategories) * 100).toFixed(1), // Base on categories
+        timing: {
+          start_time: new Date(startTime).toISOString(),
+          end_time: new Date().toISOString(),
+          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
+        }
+      });
+    }
     // If we get here, everything completed successfully
     success = true;
 
     // Update calculate_status
     await connection.query(`
       INSERT INTO calculate_status (module_name, last_calculation_timestamp)
@@ -502,8 +279,8 @@ async function calculateCategoryMetrics(startTime, totalProducts, processedCount
     `);
 
     return {
-      processedProducts: processedCount,
-      processedOrders,
+      processedProducts: 0, // Not directly processing products
+      processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
@@ -4,9 +4,40 @@ const { getConnection } = require('./utils/db');
 async function calculateFinancialMetrics(startTime, totalProducts, processedCount = 0, isCancelled = false) {
   const connection = await getConnection();
   let success = false;
-  let processedOrders = 0;
+  const BATCH_SIZE = 5000;
+  let myProcessedProducts = 0; // Track products processed *within this module*
 
   try {
+    // Get last calculation timestamp
+    const [lastCalc] = await connection.query(`
+      SELECT last_calculation_timestamp
+      FROM calculate_status
+      WHERE module_name = 'financial_metrics'
+    `);
+    const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
+
+    // Get total count of products needing updates
+    if (!totalProducts) {
+      const [productCount] = await connection.query(`
+        SELECT COUNT(DISTINCT p.pid) as count
+        FROM products p
+        LEFT JOIN orders o ON p.pid = o.pid AND o.updated > ?
+        WHERE p.updated > ?
+          OR o.pid IS NOT NULL
+      `, [lastCalculationTime, lastCalculationTime]);
+      totalProducts = productCount[0].count;
+    }
+
+    if (totalProducts === 0) {
+      console.log('No products need financial metric updates');
+      return {
+        processedProducts: 0,
+        processedOrders: 0,
+        processedPurchaseOrders: 0,
+        success: true
+      };
+    }
+
     if (isCancelled) {
       outputProgress({
         status: 'cancelled',
@@ -24,22 +55,13 @@ async function calculateFinancialMetrics(startTime, totalProducts, processedCoun
       }
     });
     return {
-      processedProducts: processedCount,
+      processedProducts: myProcessedProducts,
       processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
   }
 
-  // Get order count that will be processed
-  const [orderCount] = await connection.query(`
-    SELECT COUNT(*) as count
-    FROM orders o
-    WHERE o.canceled = false
-      AND DATE(o.date) >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
-  `);
-  processedOrders = orderCount[0].count;
-
   outputProgress({
     status: 'running',
     operation: 'Starting financial metrics calculation',
@@ -56,110 +78,80 @@ async function calculateFinancialMetrics(startTime, totalProducts, processedCoun
     }
   });
 
-    // Calculate financial metrics with optimized query
-    await connection.query(`
-      WITH product_financials AS (
-        SELECT
-          p.pid,
-          p.cost_price * p.stock_quantity as inventory_value,
-          SUM(o.quantity * o.price) as total_revenue,
-          SUM(o.quantity * p.cost_price) as cost_of_goods_sold,
-          SUM(o.quantity * (o.price - p.cost_price)) as gross_profit,
-          MIN(o.date) as first_sale_date,
-          MAX(o.date) as last_sale_date,
-          DATEDIFF(MAX(o.date), MIN(o.date)) + 1 as calculation_period_days,
-          COUNT(DISTINCT DATE(o.date)) as active_days
-        FROM products p
-        LEFT JOIN orders o ON p.pid = o.pid
-        WHERE o.canceled = false
-          AND DATE(o.date) >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
-        GROUP BY p.pid
-      )
-      UPDATE product_metrics pm
-      JOIN product_financials pf ON pm.pid = pf.pid
-      SET
-        pm.inventory_value = COALESCE(pf.inventory_value, 0),
-        pm.total_revenue = COALESCE(pf.total_revenue, 0),
-        pm.cost_of_goods_sold = COALESCE(pf.cost_of_goods_sold, 0),
-        pm.gross_profit = COALESCE(pf.gross_profit, 0),
-        pm.gmroi = CASE
+    // Process in batches
+    let lastPid = 0;
+    while (true) {
+      if (isCancelled) break;
+
+      const [batch] = await connection.query(`
+        SELECT DISTINCT p.pid
+        FROM products p
+        LEFT JOIN orders o ON p.pid = o.pid
+        WHERE p.pid > ?
+          AND (
+            p.updated > ?
+            OR EXISTS (
+              SELECT 1 FROM orders o2
+              WHERE o2.pid = p.pid
+                AND o2.updated > ?
+            )
+          )
+        ORDER BY p.pid
+        LIMIT ?
+      `, [lastPid, lastCalculationTime, lastCalculationTime, BATCH_SIZE]);
-          WHEN COALESCE(pf.inventory_value, 0) > 0 AND pf.active_days > 0 THEN
-            (COALESCE(pf.gross_profit, 0) * (365.0 / pf.active_days)) / COALESCE(pf.inventory_value, 0)
-          ELSE 0
-        END,
-        pm.last_calculated_at = CURRENT_TIMESTAMP
-    `);
-
-    processedCount = Math.floor(totalProducts * 0.65);
-    outputProgress({
-      status: 'running',
-      operation: 'Base financial metrics calculated, updating time aggregates',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
-
-    if (isCancelled) return {
-      processedProducts: processedCount,
-      processedOrders,
-      processedPurchaseOrders: 0,
-      success
-    };
-
-    // Update time-based aggregates with optimized query
-    await connection.query(`
-      WITH monthly_financials AS (
-        SELECT
-          p.pid,
-          YEAR(o.date) as year,
-          MONTH(o.date) as month,
-          p.cost_price * p.stock_quantity as inventory_value,
-          SUM(o.quantity * (o.price - p.cost_price)) as gross_profit,
-          COUNT(DISTINCT DATE(o.date)) as active_days,
-          MIN(o.date) as period_start,
-          MAX(o.date) as period_end
-        FROM products p
-        LEFT JOIN orders o ON p.pid = o.pid
-        WHERE o.canceled = false
-        GROUP BY p.pid, YEAR(o.date), MONTH(o.date)
-      )
-      UPDATE product_time_aggregates pta
-      JOIN monthly_financials mf ON pta.pid = mf.pid
-        AND pta.year = mf.year
-        AND pta.month = mf.month
-      SET
-        pta.inventory_value = COALESCE(mf.inventory_value, 0),
-        pta.gmroi = CASE
-          WHEN COALESCE(mf.inventory_value, 0) > 0 AND mf.active_days > 0 THEN
-            (COALESCE(mf.gross_profit, 0) * (365.0 / mf.active_days)) / COALESCE(mf.inventory_value, 0)
-          ELSE 0
-        END
-    `);
+      if (batch.length === 0) break;
+
+      // Update financial metrics for this batch
+      await connection.query(`
+        UPDATE product_metrics pm
+        JOIN (
+          SELECT
+            p.pid,
+            p.cost_price * p.stock_quantity as inventory_value,
+            SUM(o.quantity * o.price) as total_revenue,
+            SUM(o.quantity * p.cost_price) as cost_of_goods_sold,
+            SUM(o.quantity * (o.price - p.cost_price)) as gross_profit,
+            COUNT(DISTINCT DATE(o.date)) as active_days
+          FROM products p
+          LEFT JOIN orders o ON p.pid = o.pid
+            AND o.canceled = false
+            AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY)
+          WHERE p.pid IN (?)
+          GROUP BY p.pid
+        ) fin ON pm.pid = fin.pid
+        SET
+          pm.inventory_value = COALESCE(fin.inventory_value, 0),
+          pm.total_revenue = COALESCE(fin.total_revenue, 0),
+          pm.cost_of_goods_sold = COALESCE(fin.cost_of_goods_sold, 0),
+          pm.gross_profit = COALESCE(fin.gross_profit, 0),
+          pm.gmroi = CASE
+            WHEN COALESCE(fin.inventory_value, 0) > 0 AND fin.active_days > 0
+            THEN (COALESCE(fin.gross_profit, 0) * (365.0 / fin.active_days)) / COALESCE(fin.inventory_value, 0)
+            ELSE 0
+          END,
+          pm.last_calculated_at = NOW()
+      `, [batch.map(row => row.pid)]);
+
+      lastPid = batch[batch.length - 1].pid;
+      myProcessedProducts += batch.length;
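Both versions of the GMROI expression annualize gross profit over the observed active days before dividing by inventory value, falling back to 0 when either denominator is empty. A hedged JavaScript sketch of that formula (illustrative names, not from the codebase):

```javascript
// Sketch of the SQL GMROI calculation (illustrative names):
// annualized gross profit divided by inventory value, 0 when undefined.
function gmroi({ grossProfit, inventoryValue, activeDays }) {
  if (inventoryValue <= 0 || activeDays <= 0) return 0; // mirrors the CASE ... ELSE 0 branch
  return (grossProfit * (365.0 / activeDays)) / inventoryValue;
}
```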
-    processedCount = Math.floor(totalProducts * 0.70);
-    outputProgress({
-      status: 'running',
-      operation: 'Time-based aggregates updated',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
+      outputProgress({
+        status: 'running',
+        operation: 'Processing financial metrics batch',
+        current: processedCount + myProcessedProducts,
+        total: totalProducts,
+        elapsed: formatElapsedTime(startTime),
+        remaining: estimateRemaining(startTime, processedCount + myProcessedProducts, totalProducts),
+        rate: calculateRate(startTime, processedCount + myProcessedProducts),
+        percentage: (((processedCount + myProcessedProducts) / totalProducts) * 100).toFixed(1),
+        timing: {
+          start_time: new Date(startTime).toISOString(),
+          end_time: new Date().toISOString(),
+          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
+        }
+      });
+    }
 
     // If we get here, everything completed successfully
     success = true;
@@ -172,8 +164,8 @@ async function calculateFinancialMetrics(startTime, totalProducts, processedCoun
     `);
 
     return {
-      processedProducts: processedCount,
-      processedOrders,
+      processedProducts: myProcessedProducts,
+      processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
@@ -13,21 +13,34 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
   const connection = await getConnection();
   let success = false;
   let processedOrders = 0;
+  let myProcessedProducts = 0; // Track products processed *within this module*
   const BATCH_SIZE = 5000;
 
   try {
+    // Get last calculation timestamp
+    const [lastCalc] = await connection.query(`
+      SELECT last_calculation_timestamp
+      FROM calculate_status
+      WHERE module_name = 'product_metrics'
+    `);
+    const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
+
+    if (totalProducts === 0) {
+      console.log('No products need updating');
+      return {
+        processedProducts: myProcessedProducts,
+        processedOrders: 0,
+        processedPurchaseOrders: 0,
+        success: true
+      };
+    }
+
     // Skip flags are inherited from the parent scope
     const SKIP_PRODUCT_BASE_METRICS = 0;
     const SKIP_PRODUCT_TIME_AGGREGATES = 0;
 
-    // Get total product count if not provided
-    if (!totalProducts) {
-      const [productCount] = await connection.query('SELECT COUNT(*) as count FROM products');
-      totalProducts = productCount[0].count;
-    }
-
     if (isCancelled) {
-      outputProgress({
+      global.outputProgress({
         status: 'cancelled',
         operation: 'Product metrics calculation cancelled',
         current: processedCount,
@@ -43,7 +56,7 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
       }
     });
     return {
-      processedProducts: processedCount,
+      processedProducts: myProcessedProducts,
       processedOrders,
       processedPurchaseOrders: 0,
       success
@@ -93,10 +106,39 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     processedOrders = orderCount[0].count;
 
     // Clear temporary tables
-    await connection.query('TRUNCATE TABLE temp_sales_metrics');
-    await connection.query('TRUNCATE TABLE temp_purchase_metrics');
+    await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_sales_metrics');
+    await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_purchase_metrics');
 
-    // Populate temp_sales_metrics with base stats and sales averages
+    // Create optimized temporary tables with indexes
+    await connection.query(`
+      CREATE TEMPORARY TABLE temp_sales_metrics (
+        pid BIGINT NOT NULL,
+        daily_sales_avg DECIMAL(10,3),
+        weekly_sales_avg DECIMAL(10,3),
+        monthly_sales_avg DECIMAL(10,3),
+        total_revenue DECIMAL(10,2),
+        avg_margin_percent DECIMAL(5,2),
+        first_sale_date DATE,
+        last_sale_date DATE,
+        PRIMARY KEY (pid),
+        INDEX (daily_sales_avg),
+        INDEX (total_revenue)
+      ) ENGINE=MEMORY
+    `);
+
+    await connection.query(`
+      CREATE TEMPORARY TABLE temp_purchase_metrics (
+        pid BIGINT NOT NULL,
+        avg_lead_time_days DECIMAL(5,1),
|
||||||
|
last_purchase_date DATE,
|
||||||
|
first_received_date DATE,
|
||||||
|
last_received_date DATE,
|
||||||
|
PRIMARY KEY (pid),
|
||||||
|
INDEX (avg_lead_time_days)
|
||||||
|
) ENGINE=MEMORY
|
||||||
|
`);
|
||||||
|
|
||||||
|
// Populate temp_sales_metrics with base stats and sales averages using FORCE INDEX
|
||||||
await connection.query(`
|
await connection.query(`
|
||||||
INSERT INTO temp_sales_metrics
|
INSERT INTO temp_sales_metrics
|
||||||
SELECT
|
SELECT
|
||||||
@@ -113,13 +155,21 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
|
|||||||
MIN(o.date) as first_sale_date,
|
MIN(o.date) as first_sale_date,
|
||||||
MAX(o.date) as last_sale_date
|
MAX(o.date) as last_sale_date
|
||||||
FROM products p
|
FROM products p
|
||||||
LEFT JOIN orders o ON p.pid = o.pid
|
FORCE INDEX (PRIMARY)
|
||||||
AND o.canceled = false
|
LEFT JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid
|
||||||
AND o.date >= DATE_SUB(CURDATE(), INTERVAL 90 DAY)
|
AND o.canceled = false
|
||||||
|
AND o.date >= DATE_SUB(CURDATE(), INTERVAL 90 DAY)
|
||||||
|
WHERE p.updated > ?
|
||||||
|
OR EXISTS (
|
||||||
|
SELECT 1 FROM orders o2 FORCE INDEX (idx_orders_metrics)
|
||||||
|
WHERE o2.pid = p.pid
|
||||||
|
AND o2.canceled = false
|
||||||
|
AND o2.updated > ?
|
||||||
|
)
|
||||||
GROUP BY p.pid
|
GROUP BY p.pid
|
||||||
`);
|
`, [lastCalculationTime, lastCalculationTime]);
|
||||||
|
|
||||||
// Populate temp_purchase_metrics
|
// Populate temp_purchase_metrics with optimized index usage
|
||||||
await connection.query(`
|
await connection.query(`
|
||||||
INSERT INTO temp_purchase_metrics
|
INSERT INTO temp_purchase_metrics
|
||||||
SELECT
|
SELECT
|
||||||
@@ -129,21 +179,38 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
|
|||||||
MIN(po.received_date) as first_received_date,
|
MIN(po.received_date) as first_received_date,
|
||||||
MAX(po.received_date) as last_received_date
|
MAX(po.received_date) as last_received_date
|
||||||
FROM products p
|
FROM products p
|
||||||
LEFT JOIN purchase_orders po ON p.pid = po.pid
|
FORCE INDEX (PRIMARY)
|
||||||
AND po.received_date IS NOT NULL
|
LEFT JOIN purchase_orders po FORCE INDEX (idx_po_metrics) ON p.pid = po.pid
|
||||||
AND po.date >= DATE_SUB(CURDATE(), INTERVAL 365 DAY)
|
AND po.received_date IS NOT NULL
|
||||||
|
AND po.date >= DATE_SUB(CURDATE(), INTERVAL 365 DAY)
|
||||||
|
WHERE p.updated > ?
|
||||||
|
OR EXISTS (
|
||||||
|
SELECT 1 FROM purchase_orders po2 FORCE INDEX (idx_po_metrics)
|
||||||
|
WHERE po2.pid = p.pid
|
||||||
|
AND po2.updated > ?
|
||||||
|
)
|
||||||
GROUP BY p.pid
|
GROUP BY p.pid
|
||||||
`);
|
`, [lastCalculationTime, lastCalculationTime]);
|
||||||
|
|
||||||
// Process updates in batches
|
// Process updates in batches, but only for affected products
|
||||||
let lastPid = 0;
|
let lastPid = 0;
|
||||||
while (true) {
|
while (true) {
|
||||||
if (isCancelled) break;
|
if (isCancelled) break;
|
||||||
|
|
||||||
const [batch] = await connection.query(
|
const [batch] = await connection.query(`
|
||||||
'SELECT pid FROM products WHERE pid > ? ORDER BY pid LIMIT ?',
|
SELECT DISTINCT p.pid
|
||||||
[lastPid, BATCH_SIZE]
|
FROM products p
|
||||||
);
|
LEFT JOIN orders o ON p.pid = o.pid AND o.updated > ?
|
||||||
|
LEFT JOIN purchase_orders po ON p.pid = po.pid AND po.updated > ?
|
||||||
|
WHERE p.pid > ?
|
||||||
|
AND (
|
||||||
|
p.updated > ?
|
||||||
|
OR o.pid IS NOT NULL
|
||||||
|
OR po.pid IS NOT NULL
|
||||||
|
)
|
||||||
|
ORDER BY p.pid
|
||||||
|
LIMIT ?
|
||||||
|
`, [lastCalculationTime, lastCalculationTime, lastPid, lastCalculationTime, BATCH_SIZE]);
|
||||||
|
|
||||||
if (batch.length === 0) break;
|
if (batch.length === 0) break;
|
||||||
|
|
||||||
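The rewritten batch query above keeps the keyset-pagination shape of the old loop (seek on `pid`, `ORDER BY p.pid LIMIT ?`) while restricting each page to rows changed since the watermark. A minimal sketch of that loop shape, with hypothetical `fetchBatch`/`handleBatch` standing in for the SQL query and the UPDATE:

```javascript
// Keyset-pagination batch loop (sketch; `fetchBatch` and `handleBatch` are
// hypothetical stand-ins for the SELECT and UPDATE statements in the diff).
async function processInBatches(fetchBatch, handleBatch, batchSize = 5000) {
  let lastPid = 0;
  let processed = 0;
  while (true) {
    // fetchBatch must return rows ordered by pid, strictly greater than lastPid
    const batch = await fetchBatch(lastPid, batchSize);
    if (batch.length === 0) break;
    await handleBatch(batch);
    lastPid = batch[batch.length - 1].pid; // advance the cursor; no OFFSET rescans
    processed += batch.length;
  }
  return processed;
}
```

Seeking on the primary key this way costs O(batch) per page, whereas `LIMIT ? OFFSET ?` would rescan all skipped rows on every iteration.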
@@ -152,8 +219,30 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
         JOIN products p ON pm.pid = p.pid
         LEFT JOIN temp_sales_metrics sm ON pm.pid = sm.pid
         LEFT JOIN temp_purchase_metrics lm ON pm.pid = lm.pid
+        LEFT JOIN (
+          SELECT
+            sf.pid,
+            AVG(CASE
+              WHEN o.quantity > 0
+              THEN ABS(sf.forecast_units - o.quantity) / o.quantity * 100
+              ELSE 100
+            END) as avg_forecast_error,
+            AVG(CASE
+              WHEN o.quantity > 0
+              THEN (sf.forecast_units - o.quantity) / o.quantity * 100
+              ELSE 0
+            END) as avg_forecast_bias,
+            MAX(sf.forecast_date) as last_forecast_date
+          FROM sales_forecasts sf
+          JOIN orders o ON sf.pid = o.pid
+            AND DATE(o.date) = sf.forecast_date
+          WHERE o.canceled = false
+            AND sf.forecast_date >= DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY)
+            AND sf.pid IN (?)
+          GROUP BY sf.pid
+        ) fa ON pm.pid = fa.pid
         SET
-          pm.inventory_value = p.stock_quantity * NULLIF(p.cost_price, 0),
+          pm.inventory_value = p.stock_quantity * p.cost_price,
           pm.daily_sales_avg = COALESCE(sm.daily_sales_avg, 0),
           pm.weekly_sales_avg = COALESCE(sm.weekly_sales_avg, 0),
           pm.monthly_sales_avg = COALESCE(sm.monthly_sales_avg, 0),
@@ -162,79 +251,25 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
           pm.first_sale_date = sm.first_sale_date,
           pm.last_sale_date = sm.last_sale_date,
           pm.avg_lead_time_days = COALESCE(lm.avg_lead_time_days, 30),
-          pm.days_of_inventory = CASE
-            WHEN COALESCE(sm.daily_sales_avg, 0) > 0
-            THEN FLOOR(p.stock_quantity / NULLIF(sm.daily_sales_avg, 0))
-            ELSE NULL
-          END,
-          pm.weeks_of_inventory = CASE
-            WHEN COALESCE(sm.weekly_sales_avg, 0) > 0
-            THEN FLOOR(p.stock_quantity / NULLIF(sm.weekly_sales_avg, 0))
-            ELSE NULL
-          END,
-          pm.stock_status = CASE
-            WHEN p.stock_quantity <= 0 THEN 'Out of Stock'
-            WHEN COALESCE(sm.daily_sales_avg, 0) = 0 AND p.stock_quantity <= ? THEN 'Low Stock'
-            WHEN COALESCE(sm.daily_sales_avg, 0) = 0 THEN 'In Stock'
-            WHEN p.stock_quantity / NULLIF(sm.daily_sales_avg, 0) <= ? THEN 'Critical'
-            WHEN p.stock_quantity / NULLIF(sm.daily_sales_avg, 0) <= ? THEN 'Reorder'
-            WHEN p.stock_quantity / NULLIF(sm.daily_sales_avg, 0) > ? THEN 'Overstocked'
-            ELSE 'Healthy'
-          END,
-          pm.safety_stock = CASE
-            WHEN COALESCE(sm.daily_sales_avg, 0) > 0 THEN
-              CEIL(sm.daily_sales_avg * SQRT(COALESCE(lm.avg_lead_time_days, 30)) * 1.96)
-            ELSE ?
-          END,
-          pm.reorder_point = CASE
-            WHEN COALESCE(sm.daily_sales_avg, 0) > 0 THEN
-              CEIL(sm.daily_sales_avg * COALESCE(lm.avg_lead_time_days, 30)) +
-              CEIL(sm.daily_sales_avg * SQRT(COALESCE(lm.avg_lead_time_days, 30)) * 1.96)
-            ELSE ?
-          END,
-          pm.reorder_qty = CASE
-            WHEN COALESCE(sm.daily_sales_avg, 0) > 0 AND NULLIF(p.cost_price, 0) IS NOT NULL THEN
-              GREATEST(
-                CEIL(SQRT((2 * (sm.daily_sales_avg * 365) * 25) / (NULLIF(p.cost_price, 0) * 0.25))),
-                ?
-              )
-            ELSE ?
-          END,
-          pm.overstocked_amt = CASE
-            WHEN p.stock_quantity / NULLIF(sm.daily_sales_avg, 0) > ?
-            THEN GREATEST(0, p.stock_quantity - CEIL(sm.daily_sales_avg * ?))
-            ELSE 0
-          END,
+          pm.forecast_accuracy = GREATEST(0, 100 - LEAST(fa.avg_forecast_error, 100)),
+          pm.forecast_bias = GREATEST(-100, LEAST(fa.avg_forecast_bias, 100)),
+          pm.last_forecast_date = fa.last_forecast_date,
           pm.last_calculated_at = NOW()
-        WHERE p.pid IN (${batch.map(() => '?').join(',')})
-      `,
-        [
-          defaultThresholds.low_stock_threshold,
-          defaultThresholds.critical_days,
-          defaultThresholds.reorder_days,
-          defaultThresholds.overstock_days,
-          defaultThresholds.low_stock_threshold,
-          defaultThresholds.low_stock_threshold,
-          defaultThresholds.low_stock_threshold,
-          defaultThresholds.low_stock_threshold,
-          defaultThresholds.overstock_days,
-          defaultThresholds.overstock_days,
-          ...batch.map(row => row.pid)
-        ]
-      );
+        WHERE p.pid IN (?)
+      `, [batch.map(row => row.pid), batch.map(row => row.pid)]);
 
       lastPid = batch[batch.length - 1].pid;
-      processedCount += batch.length;
+      myProcessedProducts += batch.length; // Increment the *module's* count
 
       outputProgress({
         status: 'running',
         operation: 'Processing base metrics batch',
-        current: processedCount,
+        current: processedCount + myProcessedProducts, // Show cumulative progress
         total: totalProducts,
         elapsed: formatElapsedTime(startTime),
-        remaining: estimateRemaining(startTime, processedCount, totalProducts),
-        rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
+        remaining: estimateRemaining(startTime, processedCount + myProcessedProducts, totalProducts),
+        rate: calculateRate(startTime, processedCount + myProcessedProducts),
+        percentage: (((processedCount + myProcessedProducts) / totalProducts) * 100).toFixed(1),
         timing: {
           start_time: new Date(startTime).toISOString(),
           end_time: new Date().toISOString(),
@@ -296,12 +331,12 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     outputProgress({
       status: 'running',
       operation: 'Starting product time aggregates calculation',
-      current: processedCount || 0,
-      total: totalProducts || 0,
+      current: processedCount,
+      total: totalProducts,
       elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount || 0, totalProducts || 0),
-      rate: calculateRate(startTime, processedCount || 0),
-      percentage: (((processedCount || 0) / (totalProducts || 1)) * 100).toFixed(1),
+      remaining: estimateRemaining(startTime, processedCount, totalProducts),
+      rate: calculateRate(startTime, processedCount),
+      percentage: (((processedCount) / (totalProducts || 1)) * 100).toFixed(1),
       timing: {
         start_time: new Date(startTime).toISOString(),
         end_time: new Date().toISOString(),
@@ -363,12 +398,12 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     outputProgress({
       status: 'running',
      operation: 'Product time aggregates calculated',
-      current: processedCount || 0,
-      total: totalProducts || 0,
+      current: processedCount,
+      total: totalProducts,
       elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount || 0, totalProducts || 0),
-      rate: calculateRate(startTime, processedCount || 0),
-      percentage: (((processedCount || 0) / (totalProducts || 1)) * 100).toFixed(1),
+      remaining: estimateRemaining(startTime, processedCount, totalProducts),
+      rate: calculateRate(startTime, processedCount),
+      percentage: (((processedCount) / (totalProducts || 1)) * 100).toFixed(1),
       timing: {
         start_time: new Date(startTime).toISOString(),
         end_time: new Date().toISOString(),
@@ -380,12 +415,12 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     outputProgress({
       status: 'running',
       operation: 'Skipping product time aggregates calculation',
-      current: processedCount || 0,
-      total: totalProducts || 0,
+      current: processedCount,
+      total: totalProducts,
       elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount || 0, totalProducts || 0),
-      rate: calculateRate(startTime, processedCount || 0),
-      percentage: (((processedCount || 0) / (totalProducts || 1)) * 100).toFixed(1),
+      remaining: estimateRemaining(startTime, processedCount, totalProducts),
+      rate: calculateRate(startTime, processedCount),
+      percentage: (((processedCount) / (totalProducts || 1)) * 100).toFixed(1),
       timing: {
         start_time: new Date(startTime).toISOString(),
         end_time: new Date().toISOString(),
@@ -414,7 +449,7 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     if (isCancelled) return {
       processedProducts: processedCount,
       processedOrders,
-      processedPurchaseOrders: 0, // This module doesn't process POs
+      processedPurchaseOrders: 0,
       success
     };
 
@@ -475,7 +510,7 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     if (isCancelled) return {
       processedProducts: processedCount,
       processedOrders,
-      processedPurchaseOrders: 0, // This module doesn't process POs
+      processedPurchaseOrders: 0,
       success
     };
 
@@ -547,7 +582,7 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     // If we get here, everything completed successfully
     success = true;
 
-    // Update calculate_status
+    // Update calculate_status with current timestamp
     await connection.query(`
       INSERT INTO calculate_status (module_name, last_calculation_timestamp)
      VALUES ('product_metrics', NOW())
@@ -555,9 +590,9 @@ async function calculateProductMetrics(startTime, totalProducts, processedCount
     `);
 
     return {
-      processedProducts: processedCount || 0,
+      processedProducts: processedCount,
       processedOrders: processedOrders || 0,
-      processedPurchaseOrders: 0, // This module doesn't process POs
+      processedPurchaseOrders: 0,
       success
     };
 
@@ -618,9 +653,9 @@ function calculateReorderQuantities(stock, stock_status, daily_sales_avg, avg_le
   if (daily_sales_avg > 0) {
     const annual_demand = daily_sales_avg * 365;
     const order_cost = 25; // Fixed cost per order
-    const holding_cost = config.cost_price * 0.25; // 25% of unit cost as annual holding cost
+    const holding_cost_percent = 0.25; // 25% annual holding cost
 
-    reorder_qty = Math.ceil(Math.sqrt((2 * annual_demand * order_cost) / holding_cost));
+    reorder_qty = Math.ceil(Math.sqrt((2 * annual_demand * order_cost) / holding_cost_percent));
   } else {
     // If no sales data, use a basic calculation
     reorder_qty = Math.max(safety_stock, config.low_stock_threshold);
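The hunks above touch the reorder math twice: the deleted SQL computed safety stock, reorder point, and an EOQ inline, and `calculateReorderQuantities` now divides the EOQ by the bare `holding_cost_percent` rather than the per-unit holding cost (`cost_price * 0.25`) the old code used. For reference, a plain-JS sketch of the formulas as the removed SQL had them (the function name is ours; the constants 25, 0.25, and 1.96 come from the diff):

```javascript
// Safety stock, reorder point, and EOQ as encoded in the removed SQL (sketch).
function reorderNumbers(dailySalesAvg, leadTimeDays, costPrice) {
  // z = 1.96 ~ 95% service level; demand variability approximated by sqrt(lead time)
  const safetyStock = Math.ceil(dailySalesAvg * Math.sqrt(leadTimeDays) * 1.96);
  // cover expected lead-time demand plus the safety buffer
  const reorderPoint = Math.ceil(dailySalesAvg * leadTimeDays) + safetyStock;
  const annualDemand = dailySalesAvg * 365;
  const orderCost = 25;                  // fixed cost per order
  const holdingCost = costPrice * 0.25;  // 25% of unit cost per year, per unit
  const reorderQty = Math.ceil(Math.sqrt((2 * annualDemand * orderCost) / holdingCost));
  return { safetyStock, reorderPoint, reorderQty };
}
```

Note that dividing by the rate alone (0.25), as the revised helper does, makes the order quantity independent of unit cost; the textbook EOQ divides by the per-unit holding cost shown here.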
@@ -4,13 +4,45 @@ const { getConnection } = require('./utils/db');
 async function calculateSalesForecasts(startTime, totalProducts, processedCount = 0, isCancelled = false) {
   const connection = await getConnection();
   let success = false;
-  let processedOrders = 0;
+  let myProcessedProducts = 0; // Track products processed *within this module*
+  const BATCH_SIZE = 5000;
 
   try {
+    // Get last calculation timestamp
+    const [lastCalc] = await connection.query(`
+      SELECT last_calculation_timestamp
+      FROM calculate_status
+      WHERE module_name = 'sales_forecasts'
+    `);
+    const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
+
+    // Get total count of products needing updates
+    const [productCount] = await connection.query(`
+      SELECT COUNT(DISTINCT p.pid) as count
+      FROM products p
+      LEFT JOIN orders o ON p.pid = o.pid AND o.updated > ?
+      WHERE p.visible = true
+        AND (
+          p.updated > ?
+          OR o.id IS NOT NULL
+        )
+    `, [lastCalculationTime, lastCalculationTime]);
+    const totalProductsToUpdate = productCount[0].count;
+
+    if (totalProductsToUpdate === 0) {
+      console.log('No products need forecast updates');
+      return {
+        processedProducts: 0,
+        processedOrders: 0,
+        processedPurchaseOrders: 0,
+        success: true
+      };
+    }
+
     if (isCancelled) {
       outputProgress({
         status: 'cancelled',
-        operation: 'Sales forecasts calculation cancelled',
+        operation: 'Sales forecast calculation cancelled',
         current: processedCount,
         total: totalProducts,
         elapsed: formatElapsedTime(startTime),
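Both modules now bootstrap incrementality the same way: read a watermark from `calculate_status` before scanning, then write `NOW()` back only after a successful run. A self-contained sketch of the read side (assumes a mysql2-style promise `query`; the helper name is ours):

```javascript
// Read the last-run watermark for a module. The epoch fallback means a
// first run (no row yet) processes everything.
async function getLastCalculationTime(connection, moduleName) {
  const [rows] = await connection.query(
    'SELECT last_calculation_timestamp FROM calculate_status WHERE module_name = ?',
    [moduleName]
  );
  return rows[0]?.last_calculation_timestamp || '1970-01-01';
}
```

Because the timestamp is only advanced on success, a failed or cancelled run is retried from the old watermark rather than silently skipping rows.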
@@ -24,31 +56,22 @@ async function calculateSalesForecasts(startTime, totalProducts, processedCount
       }
     });
     return {
-      processedProducts: processedCount,
+      processedProducts: myProcessedProducts,
       processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
   }
 
-    // Get order count that will be processed
-    const [orderCount] = await connection.query(`
-      SELECT COUNT(*) as count
-      FROM orders o
-      WHERE o.canceled = false
-        AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY)
-    `);
-    processedOrders = orderCount[0].count;
-
     outputProgress({
       status: 'running',
-      operation: 'Starting sales forecasts calculation',
+      operation: 'Starting sales forecast calculation',
       current: processedCount,
       total: totalProducts,
       elapsed: formatElapsedTime(startTime),
       remaining: estimateRemaining(startTime, processedCount, totalProducts),
       rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
+      percentage: ((processedCount / totalProductsToUpdate) * 100).toFixed(1),
       timing: {
         start_time: new Date(startTime).toISOString(),
         end_time: new Date().toISOString(),
@@ -56,365 +79,176 @@ async function calculateSalesForecasts(startTime, totalProducts, processedCount
       }
     });
 
-    // First, create a temporary table for forecast dates
-    await connection.query(`
-      CREATE TEMPORARY TABLE IF NOT EXISTS temp_forecast_dates (
-        forecast_date DATE,
-        day_of_week INT,
-        month INT,
-        PRIMARY KEY (forecast_date)
-      )
-    `);
+    // Process in batches
+    let lastPid = '';
+    while (true) {
+      if (isCancelled) break;
 
-    await connection.query(`
-      INSERT INTO temp_forecast_dates
-      SELECT
-        DATE_ADD(CURRENT_DATE, INTERVAL n DAY) as forecast_date,
-        DAYOFWEEK(DATE_ADD(CURRENT_DATE, INTERVAL n DAY)) as day_of_week,
-        MONTH(DATE_ADD(CURRENT_DATE, INTERVAL n DAY)) as month
-      FROM (
-        SELECT a.N + b.N * 10 as n
-        FROM
-          (SELECT 0 as N UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION
-           SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) a,
-          (SELECT 0 as N UNION SELECT 1 UNION SELECT 2) b
-        ORDER BY n
-        LIMIT 31
-      ) numbers
-    `);
+      const [batch] = await connection.query(`
+        SELECT DISTINCT p.pid
+        FROM products p
+        FORCE INDEX (PRIMARY)
+        LEFT JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid AND o.updated > ?
+        WHERE p.visible = true
+          AND p.pid > ?
+          AND (
+            p.updated > ?
+            OR o.id IS NOT NULL
+          )
+        ORDER BY p.pid
+        LIMIT ?
+      `, [lastCalculationTime, lastPid, lastCalculationTime, BATCH_SIZE]);
 
-    processedCount = Math.floor(totalProducts * 0.92);
-    outputProgress({
-      status: 'running',
-      operation: 'Forecast dates prepared, calculating daily sales stats',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
+      if (batch.length === 0) break;
 
-    if (isCancelled) return {
-      processedProducts: processedCount,
-      processedOrders,
-      processedPurchaseOrders: 0,
-      success
-    };
+      // Create optimized temporary tables with indexes
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_historical_sales (
+          pid BIGINT NOT NULL,
+          sale_date DATE NOT NULL,
+          daily_quantity INT,
+          daily_revenue DECIMAL(15,2),
+          PRIMARY KEY (pid, sale_date),
+          INDEX (sale_date)
+        ) ENGINE=MEMORY
+      `);
 
-    // Create temporary table for daily sales stats
-    await connection.query(`
-      CREATE TEMPORARY TABLE IF NOT EXISTS temp_daily_sales AS
-      SELECT
-        o.pid,
-        DAYOFWEEK(o.date) as day_of_week,
-        SUM(o.quantity) as daily_quantity,
-        SUM(o.price * o.quantity) as daily_revenue,
-        COUNT(DISTINCT DATE(o.date)) as day_count
-      FROM orders o
-      WHERE o.canceled = false
-        AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY)
-      GROUP BY o.pid, DAYOFWEEK(o.date)
-    `);
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_sales_stats (
+          pid BIGINT NOT NULL,
+          avg_daily_units DECIMAL(10,2),
+          avg_daily_revenue DECIMAL(15,2),
+          std_daily_units DECIMAL(10,2),
+          days_with_sales INT,
+          first_sale DATE,
+          last_sale DATE,
+          PRIMARY KEY (pid)
+        ) ENGINE=MEMORY
+      `);
 
-    processedCount = Math.floor(totalProducts * 0.94);
-    outputProgress({
-      status: 'running',
-      operation: 'Daily sales stats calculated, preparing product stats',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_recent_stats (
+          pid BIGINT NOT NULL,
+          recent_avg_units DECIMAL(10,2),
+          recent_avg_revenue DECIMAL(15,2),
+          PRIMARY KEY (pid)
+        ) ENGINE=MEMORY
+      `);
 
-    if (isCancelled) return {
-      processedProducts: processedCount,
-      processedOrders,
-      processedPurchaseOrders: 0,
-      success
-    };
-
-    // Create temporary table for product stats
-    await connection.query(`
-      CREATE TEMPORARY TABLE IF NOT EXISTS temp_product_stats AS
-      SELECT
-        pid,
-        AVG(daily_revenue) as overall_avg_revenue,
-        SUM(day_count) as total_days
-      FROM temp_daily_sales
-      GROUP BY pid
-    `);
-
-    processedCount = Math.floor(totalProducts * 0.96);
-    outputProgress({
-      status: 'running',
-      operation: 'Product stats prepared, calculating product-level forecasts',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
-
-    if (isCancelled) return {
-      processedProducts: processedCount,
-      processedOrders,
-      processedPurchaseOrders: 0,
-      success
-    };
-
-    // Calculate product-level forecasts
-    await connection.query(`
-      INSERT INTO sales_forecasts (
-        pid,
-        forecast_date,
-        forecast_units,
-        forecast_revenue,
-        confidence_level,
-        last_calculated_at
-      )
-      WITH daily_stats AS (
+      // Populate historical sales with optimized index usage
+      await connection.query(`
+        INSERT INTO temp_historical_sales
         SELECT
-          ds.pid,
-          AVG(ds.daily_quantity) as avg_daily_qty,
-          STDDEV(ds.daily_quantity) as std_daily_qty,
-          COUNT(DISTINCT ds.day_count) as data_points,
-          SUM(ds.day_count) as total_days,
-          AVG(ds.daily_revenue) as avg_daily_revenue,
-          STDDEV(ds.daily_revenue) as std_daily_revenue,
-          MIN(ds.daily_quantity) as min_daily_qty,
-          MAX(ds.daily_quantity) as max_daily_qty,
-          -- Calculate variance without using LAG
-          COALESCE(
-            STDDEV(ds.daily_quantity) / NULLIF(AVG(ds.daily_quantity), 0),
-            0
-          ) as daily_variance_ratio
-        FROM temp_daily_sales ds
-        GROUP BY ds.pid
-        HAVING AVG(ds.daily_quantity) > 0
-      )
-      SELECT
-        ds.pid,
-        fd.forecast_date,
-        GREATEST(0,
-          ROUND(
-            ds.avg_daily_qty *
-            (1 + COALESCE(sf.seasonality_factor, 0)) *
-            CASE
-              WHEN ds.std_daily_qty / NULLIF(ds.avg_daily_qty, 0) > 1.5 THEN 0.85
-              WHEN ds.std_daily_qty / NULLIF(ds.avg_daily_qty, 0) > 1.0 THEN 0.9
-              WHEN ds.std_daily_qty / NULLIF(ds.avg_daily_qty, 0) > 0.5 THEN 0.95
-              ELSE 1.0
-            END,
-            2
-          )
-        ) as forecast_units,
-        GREATEST(0,
-          ROUND(
-            COALESCE(
-              CASE
-                WHEN ds.data_points >= 4 THEN ds.avg_daily_revenue
-                ELSE ps.overall_avg_revenue
-              END *
-              (1 + COALESCE(sf.seasonality_factor, 0)) *
-              CASE
-                WHEN ds.std_daily_revenue / NULLIF(ds.avg_daily_revenue, 0) > 1.5 THEN 0.85
-                WHEN ds.std_daily_revenue / NULLIF(ds.avg_daily_revenue, 0) > 1.0 THEN 0.9
-                WHEN ds.std_daily_revenue / NULLIF(ds.avg_daily_revenue, 0) > 0.5 THEN 0.95
-                ELSE 1.0
-              END,
-              0
-            ),
-            2
-          )
-        ) as forecast_revenue,
-        CASE
-          WHEN ds.total_days >= 60 AND ds.daily_variance_ratio < 0.5 THEN 90
-          WHEN ds.total_days >= 60 THEN 85
-          WHEN ds.total_days >= 30 AND ds.daily_variance_ratio < 0.5 THEN 80
-          WHEN ds.total_days >= 30 THEN 75
-          WHEN ds.total_days >= 14 AND ds.daily_variance_ratio < 0.5 THEN 70
-          WHEN ds.total_days >= 14 THEN 65
-          ELSE 60
-        END as confidence_level,
-        NOW() as last_calculated_at
-      FROM daily_stats ds
-      JOIN temp_product_stats ps ON ds.pid = ps.pid
-      CROSS JOIN temp_forecast_dates fd
-      LEFT JOIN sales_seasonality sf ON fd.month = sf.month
-      GROUP BY ds.pid, fd.forecast_date, ps.overall_avg_revenue, sf.seasonality_factor
-      ON DUPLICATE KEY UPDATE
-        forecast_units = VALUES(forecast_units),
-        forecast_revenue = VALUES(forecast_revenue),
-        confidence_level = VALUES(confidence_level),
-        last_calculated_at = NOW()
-    `);
+          o.pid,
+          DATE(o.date) as sale_date,
+          SUM(o.quantity) as daily_quantity,
+          SUM(o.quantity * o.price) as daily_revenue
+        FROM orders o
+        FORCE INDEX (idx_orders_metrics)
+        WHERE o.canceled = false
+          AND o.pid IN (?)
+          AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 180 DAY)
+        GROUP BY o.pid, DATE(o.date)
+      `, [batch.map(row => row.pid)]);
 
-    processedCount = Math.floor(totalProducts * 0.98);
-    outputProgress({
-      status: 'running',
-      operation: 'Product forecasts calculated, preparing category stats',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
+      // Combine sales stats and recent trend calculations
+      await connection.query(`
+        INSERT INTO temp_sales_stats
+        SELECT
+          pid,
+          AVG(daily_quantity) as avg_daily_units,
+          AVG(daily_revenue) as avg_daily_revenue,
+          STDDEV(daily_quantity) as std_daily_units,
+          COUNT(*) as days_with_sales,
+          MIN(sale_date) as first_sale,
timing: {
|
MAX(sale_date) as last_sale
|
||||||
start_time: new Date(startTime).toISOString(),
|
FROM temp_historical_sales
|
||||||
end_time: new Date().toISOString(),
|
GROUP BY pid
|
||||||
elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
|
`);
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
if (isCancelled) return {
|
// Calculate recent averages
|
||||||
processedProducts: processedCount,
|
await connection.query(`
|
||||||
processedOrders,
|
INSERT INTO temp_recent_stats
|
||||||
processedPurchaseOrders: 0,
|
SELECT
|
||||||
success
|
pid,
|
||||||
};
|
AVG(daily_quantity) as recent_avg_units,
|
||||||
|
AVG(daily_revenue) as recent_avg_revenue
|
||||||
|
FROM temp_historical_sales
|
||||||
|
WHERE sale_date >= DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)
|
||||||
|
GROUP BY pid
|
||||||
|
`);
|
||||||
|
|
||||||
// Create temporary table for category stats
|
// Generate forecasts using temp tables - optimized version
|
||||||
await connection.query(`
|
await connection.query(`
|
||||||
CREATE TEMPORARY TABLE IF NOT EXISTS temp_category_sales AS
|
REPLACE INTO sales_forecasts
|
||||||
SELECT
|
(pid, forecast_date, forecast_units, forecast_revenue, confidence_level, last_calculated_at)
|
||||||
pc.cat_id,
|
SELECT
|
||||||
DAYOFWEEK(o.date) as day_of_week,
|
s.pid,
|
||||||
SUM(o.quantity) as daily_quantity,
|
DATE_ADD(CURRENT_DATE, INTERVAL n.days DAY),
|
||||||
SUM(o.price * o.quantity) as daily_revenue,
|
GREATEST(0, ROUND(
|
||||||
COUNT(DISTINCT DATE(o.date)) as day_count
|
|
||||||
FROM orders o
|
|
||||||
JOIN product_categories pc ON o.pid = pc.pid
|
|
||||||
WHERE o.canceled = false
|
|
||||||
AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 90 DAY)
|
|
||||||
GROUP BY pc.cat_id, DAYOFWEEK(o.date)
|
|
||||||
`);
|
|
||||||
|
|
||||||
await connection.query(`
|
|
||||||
CREATE TEMPORARY TABLE IF NOT EXISTS temp_category_stats AS
|
|
||||||
SELECT
|
|
||||||
cat_id,
|
|
||||||
AVG(daily_revenue) as overall_avg_revenue,
|
|
||||||
SUM(day_count) as total_days
|
|
||||||
FROM temp_category_sales
|
|
||||||
GROUP BY cat_id
|
|
||||||
`);
|
|
||||||
|
|
||||||
processedCount = Math.floor(totalProducts * 0.99);
|
|
||||||
outputProgress({
|
|
||||||
status: 'running',
|
|
||||||
operation: 'Category stats prepared, calculating category-level forecasts',
|
|
||||||
current: processedCount,
|
|
||||||
total: totalProducts,
|
|
||||||
elapsed: formatElapsedTime(startTime),
|
|
||||||
remaining: estimateRemaining(startTime, processedCount, totalProducts),
|
|
||||||
rate: calculateRate(startTime, processedCount),
|
|
||||||
percentage: ((processedCount / totalProducts) * 100).toFixed(1),
|
|
||||||
timing: {
|
|
||||||
start_time: new Date(startTime).toISOString(),
|
|
||||||
end_time: new Date().toISOString(),
|
|
||||||
elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
if (isCancelled) return {
|
|
||||||
processedProducts: processedCount,
|
|
||||||
processedOrders,
|
|
||||||
processedPurchaseOrders: 0,
|
|
||||||
success
|
|
||||||
};
|
|
||||||
|
|
||||||
// Calculate category-level forecasts
|
|
||||||
await connection.query(`
|
|
||||||
INSERT INTO category_forecasts (
|
|
||||||
category_id,
|
|
||||||
forecast_date,
|
|
||||||
forecast_units,
|
|
||||||
forecast_revenue,
|
|
||||||
confidence_level,
|
|
||||||
last_calculated_at
|
|
||||||
)
|
|
||||||
SELECT
|
|
||||||
cs.cat_id as category_id,
|
|
||||||
fd.forecast_date,
|
|
||||||
GREATEST(0,
|
|
||||||
AVG(cs.daily_quantity) *
|
|
||||||
(1 + COALESCE(sf.seasonality_factor, 0))
|
|
||||||
) as forecast_units,
|
|
||||||
GREATEST(0,
|
|
||||||
COALESCE(
|
|
||||||
CASE
|
CASE
|
||||||
WHEN SUM(cs.day_count) >= 4 THEN AVG(cs.daily_revenue)
|
WHEN s.days_with_sales >= n.days
|
||||||
ELSE ct.overall_avg_revenue
|
THEN COALESCE(r.recent_avg_units, s.avg_daily_units)
|
||||||
END *
|
ELSE s.avg_daily_units * (s.days_with_sales / n.days)
|
||||||
(1 + COALESCE(sf.seasonality_factor, 0)) *
|
END
|
||||||
(0.95 + (RAND() * 0.1)),
|
)),
|
||||||
0
|
GREATEST(0, ROUND(
|
||||||
)
|
CASE
|
||||||
) as forecast_revenue,
|
WHEN s.days_with_sales >= n.days
|
||||||
CASE
|
THEN COALESCE(r.recent_avg_revenue, s.avg_daily_revenue)
|
||||||
WHEN ct.total_days >= 60 THEN 90
|
ELSE s.avg_daily_revenue * (s.days_with_sales / n.days)
|
||||||
WHEN ct.total_days >= 30 THEN 80
|
END,
|
||||||
WHEN ct.total_days >= 14 THEN 70
|
2
|
||||||
ELSE 60
|
)),
|
||||||
END as confidence_level,
|
LEAST(100, GREATEST(0, ROUND(
|
||||||
NOW() as last_calculated_at
|
(s.days_with_sales / 180.0 * 50) + -- Up to 50 points for history length
|
||||||
FROM temp_category_sales cs
|
(CASE
|
||||||
JOIN temp_category_stats ct ON cs.cat_id = ct.cat_id
|
WHEN s.std_daily_units = 0 OR s.avg_daily_units = 0 THEN 0
|
||||||
CROSS JOIN temp_forecast_dates fd
|
WHEN (s.std_daily_units / s.avg_daily_units) <= 0.5 THEN 30
|
||||||
LEFT JOIN sales_seasonality sf ON fd.month = sf.month
|
WHEN (s.std_daily_units / s.avg_daily_units) <= 1.0 THEN 20
|
||||||
GROUP BY cs.cat_id, fd.forecast_date, ct.overall_avg_revenue, ct.total_days, sf.seasonality_factor
|
WHEN (s.std_daily_units / s.avg_daily_units) <= 2.0 THEN 10
|
||||||
HAVING AVG(cs.daily_quantity) > 0
|
ELSE 0
|
||||||
ON DUPLICATE KEY UPDATE
|
END) + -- Up to 30 points for consistency
|
||||||
forecast_units = VALUES(forecast_units),
|
(CASE
|
||||||
forecast_revenue = VALUES(forecast_revenue),
|
WHEN DATEDIFF(CURRENT_DATE, s.last_sale) <= 7 THEN 20
|
||||||
confidence_level = VALUES(confidence_level),
|
WHEN DATEDIFF(CURRENT_DATE, s.last_sale) <= 30 THEN 10
|
||||||
last_calculated_at = NOW()
|
ELSE 0
|
||||||
`);
|
END) -- Up to 20 points for recency
|
||||||
|
))),
|
||||||
|
NOW()
|
||||||
|
FROM temp_sales_stats s
|
||||||
|
LEFT JOIN temp_recent_stats r ON s.pid = r.pid
|
||||||
|
CROSS JOIN (
|
||||||
|
SELECT 30 as days
|
||||||
|
UNION SELECT 60
|
||||||
|
UNION SELECT 90
|
||||||
|
) n
|
||||||
|
`);
|
||||||
|
|
||||||
// Clean up temporary tables
|
// Clean up temp tables
|
||||||
await connection.query(`
|
await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_historical_sales');
|
||||||
DROP TEMPORARY TABLE IF EXISTS temp_forecast_dates;
|
await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_sales_stats');
|
||||||
DROP TEMPORARY TABLE IF EXISTS temp_daily_sales;
|
await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_recent_stats');
|
||||||
DROP TEMPORARY TABLE IF EXISTS temp_product_stats;
|
|
||||||
DROP TEMPORARY TABLE IF EXISTS temp_category_sales;
|
|
||||||
DROP TEMPORARY TABLE IF EXISTS temp_category_stats;
|
|
||||||
`);
|
|
||||||
|
|
||||||
processedCount = Math.floor(totalProducts * 1.0);
|
lastPid = batch[batch.length - 1].pid;
|
||||||
outputProgress({
|
myProcessedProducts += batch.length;
|
||||||
status: 'running',
|
|
||||||
operation: 'Category forecasts calculated and temporary tables cleaned up',
|
outputProgress({
|
||||||
current: processedCount,
|
status: 'running',
|
||||||
total: totalProducts,
|
operation: 'Processing sales forecast batch',
|
||||||
elapsed: formatElapsedTime(startTime),
|
current: processedCount + myProcessedProducts,
|
||||||
remaining: estimateRemaining(startTime, processedCount, totalProducts),
|
total: totalProducts,
|
||||||
rate: calculateRate(startTime, processedCount),
|
elapsed: formatElapsedTime(startTime),
|
||||||
percentage: ((processedCount / totalProducts) * 100).toFixed(1),
|
remaining: estimateRemaining(startTime, processedCount + myProcessedProducts, totalProducts),
|
||||||
timing: {
|
rate: calculateRate(startTime, processedCount + myProcessedProducts),
|
||||||
start_time: new Date(startTime).toISOString(),
|
percentage: (((processedCount + myProcessedProducts) / totalProducts) * 100).toFixed(1),
|
||||||
end_time: new Date().toISOString(),
|
timing: {
|
||||||
elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
|
start_time: new Date(startTime).toISOString(),
|
||||||
}
|
end_time: new Date().toISOString(),
|
||||||
});
|
elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
// If we get here, everything completed successfully
|
// If we get here, everything completed successfully
|
||||||
success = true;
|
success = true;
|
||||||
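The new per-product forecast writes a 0–100 `confidence_level` built from three additive components: up to 50 points for history length over the 180-day window, up to 30 for consistency (coefficient of variation of daily units), and up to 20 for recency of the last sale. A minimal JavaScript mirror of that SQL `CASE` logic, handy for unit-testing the thresholds outside the database (function name and input shape are illustrative, not from the repo):

```javascript
// Mirrors the LEAST(100, GREATEST(0, ROUND(...))) confidence expression above.
function confidenceScore({ daysWithSales, avgDailyUnits, stdDailyUnits, daysSinceLastSale }) {
  // Up to 50 points for history length (180-day lookback window).
  const history = (daysWithSales / 180.0) * 50;

  // Up to 30 points for consistency; note the SQL gives 0 when either
  // std or avg is exactly 0, and we replicate that faithfully.
  let consistency = 0;
  if (stdDailyUnits !== 0 && avgDailyUnits !== 0) {
    const cv = stdDailyUnits / avgDailyUnits;
    if (cv <= 0.5) consistency = 30;
    else if (cv <= 1.0) consistency = 20;
    else if (cv <= 2.0) consistency = 10;
  }

  // Up to 20 points for recency of the last sale.
  let recency = 0;
  if (daysSinceLastSale <= 7) recency = 20;
  else if (daysSinceLastSale <= 30) recency = 10;

  return Math.min(100, Math.max(0, Math.round(history + consistency + recency)));
}
```

A product selling steadily every day for the full window with a recent sale maxes out at 100; a sparse, erratic seller bottoms out near its history points alone.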
@@ -427,8 +261,8 @@ async function calculateSalesForecasts(startTime, totalProducts, processedCount
     `);
 
     return {
-      processedProducts: processedCount,
-      processedOrders,
+      processedProducts: myProcessedProducts,
+      processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
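With this change each module returns only the work it performed itself (`myProcessedProducts`) instead of echoing the cumulative `processedCount`, so a coordinator can sum module results without double counting. A sketch of that aggregation (the `results` array is hypothetical):

```javascript
// Two hypothetical per-module results in the shape the modules now return.
const results = [
  { processedProducts: 120, processedOrders: 0, processedPurchaseOrders: 0, success: true },
  { processedProducts: 80, processedOrders: 450, processedPurchaseOrders: 0, success: true },
];

// Summing is safe precisely because each count covers only that module's work.
const totals = results.reduce(
  (acc, r) => ({
    processedProducts: acc.processedProducts + r.processedProducts,
    processedOrders: acc.processedOrders + r.processedOrders,
    processedPurchaseOrders: acc.processedPurchaseOrders + r.processedPurchaseOrders,
    success: acc.success && r.success,
  }),
  { processedProducts: 0, processedOrders: 0, processedPurchaseOrders: 0, success: true }
);
```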
@@ -4,14 +4,35 @@ const { getConnection } = require('./utils/db');
 async function calculateTimeAggregates(startTime, totalProducts, processedCount = 0, isCancelled = false) {
   const connection = await getConnection();
   let success = false;
-  let processedOrders = 0;
+  const BATCH_SIZE = 5000;
+  let myProcessedProducts = 0; // Track products processed *within this module*
 
   try {
+    // Get last calculation timestamp
+    const [lastCalc] = await connection.query(`
+      SELECT last_calculation_timestamp
+      FROM calculate_status
+      WHERE module_name = 'time_aggregates'
+    `);
+    const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
+
+    // We now receive totalProducts as an argument, so we don't need to query for it here.
+
+    if (totalProducts === 0) {
+      console.log('No products need time aggregate updates');
+      return {
+        processedProducts: 0,
+        processedOrders: 0,
+        processedPurchaseOrders: 0,
+        success: true
+      };
+    }
+
     if (isCancelled) {
       outputProgress({
         status: 'cancelled',
         operation: 'Time aggregates calculation cancelled',
-        current: processedCount,
+        current: processedCount, // Use passed-in value
         total: totalProducts,
         elapsed: formatElapsedTime(startTime),
         remaining: null,
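The hunk above makes the module incremental: it reads `last_calculation_timestamp` from `calculate_status` and the batch query later selects only products whose own `updated` stamp, or one of whose orders' `updated` stamps, is newer. That selection rule can be expressed as a small predicate (object shapes assumed, timestamps as numbers for the sketch):

```javascript
// A product is stale when its own row changed, or any of its orders did,
// since the module's last recorded run — the same condition the SQL
// expresses with `p.updated > ?` OR EXISTS(... o2.updated > ?).
function needsRecalculation(product, orders, lastCalculationTime) {
  if (product.updated > lastCalculationTime) return true;
  return orders.some(o => o.pid === product.pid && o.updated > lastCalculationTime);
}
```

Defaulting the stored timestamp to `'1970-01-01'` on first run makes every row look stale, so the initial pass is a full recalculation.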
@@ -24,25 +45,17 @@ async function calculateTimeAggregates(startTime, totalProducts, processedCount
         }
       });
       return {
-        processedProducts: processedCount,
+        processedProducts: myProcessedProducts, // Return only what *this* module processed
         processedOrders: 0,
         processedPurchaseOrders: 0,
         success
       };
     }
 
-    // Get order count that will be processed
-    const [orderCount] = await connection.query(`
-      SELECT COUNT(*) as count
-      FROM orders o
-      WHERE o.canceled = false
-    `);
-    processedOrders = orderCount[0].count;
-
     outputProgress({
       status: 'running',
       operation: 'Starting time aggregates calculation',
-      current: processedCount,
+      current: processedCount, // Use passed-in value
       total: totalProducts,
       elapsed: formatElapsedTime(startTime),
       remaining: estimateRemaining(startTime, processedCount, totalProducts),
@@ -55,227 +68,204 @@ async function calculateTimeAggregates(startTime, totalProducts, processedCount
       }
     });
 
-    // Initial insert of time-based aggregates
-    await connection.query(`
-      INSERT INTO product_time_aggregates (
-        pid,
-        year,
-        month,
-        total_quantity_sold,
-        total_revenue,
-        total_cost,
-        order_count,
-        stock_received,
-        stock_ordered,
-        avg_price,
-        profit_margin,
-        inventory_value,
-        gmroi
-      )
-      WITH monthly_sales AS (
-        SELECT
-          o.pid,
-          YEAR(o.date) as year,
-          MONTH(o.date) as month,
-          SUM(o.quantity) as total_quantity_sold,
-          SUM((o.price - COALESCE(o.discount, 0)) * o.quantity) as total_revenue,
-          SUM(COALESCE(p.cost_price, 0) * o.quantity) as total_cost,
-          COUNT(DISTINCT o.order_number) as order_count,
-          AVG(o.price - COALESCE(o.discount, 0)) as avg_price,
-          CASE
-            WHEN SUM((o.price - COALESCE(o.discount, 0)) * o.quantity) > 0
-            THEN ((SUM((o.price - COALESCE(o.discount, 0)) * o.quantity) - SUM(COALESCE(p.cost_price, 0) * o.quantity))
-              / SUM((o.price - COALESCE(o.discount, 0)) * o.quantity)) * 100
-            ELSE 0
-          END as profit_margin,
-          p.cost_price * p.stock_quantity as inventory_value,
-          COUNT(DISTINCT DATE(o.date)) as active_days
-        FROM orders o
-        JOIN products p ON o.pid = p.pid
-        WHERE o.canceled = false
-        GROUP BY o.pid, YEAR(o.date), MONTH(o.date)
-      ),
-      monthly_stock AS (
-        SELECT
-          pid,
-          YEAR(date) as year,
-          MONTH(date) as month,
-          SUM(received) as stock_received,
-          SUM(ordered) as stock_ordered
-        FROM purchase_orders
-        GROUP BY pid, YEAR(date), MONTH(date)
-      ),
-      base_products AS (
-        SELECT
-          p.pid,
-          p.cost_price * p.stock_quantity as inventory_value
-        FROM products p
-      )
-      SELECT
-        COALESCE(s.pid, ms.pid) as pid,
-        COALESCE(s.year, ms.year) as year,
-        COALESCE(s.month, ms.month) as month,
-        COALESCE(s.total_quantity_sold, 0) as total_quantity_sold,
-        COALESCE(s.total_revenue, 0) as total_revenue,
-        COALESCE(s.total_cost, 0) as total_cost,
-        COALESCE(s.order_count, 0) as order_count,
-        COALESCE(ms.stock_received, 0) as stock_received,
-        COALESCE(ms.stock_ordered, 0) as stock_ordered,
-        COALESCE(s.avg_price, 0) as avg_price,
-        COALESCE(s.profit_margin, 0) as profit_margin,
-        COALESCE(s.inventory_value, bp.inventory_value, 0) as inventory_value,
-        CASE
-          WHEN COALESCE(s.inventory_value, bp.inventory_value, 0) > 0
-            AND COALESCE(s.active_days, 0) > 0
-          THEN (COALESCE(s.total_revenue - s.total_cost, 0) * (365.0 / s.active_days))
-            / COALESCE(s.inventory_value, bp.inventory_value)
-          ELSE 0
-        END as gmroi
-      FROM (
-        SELECT * FROM monthly_sales s
-        UNION ALL
-        SELECT
-          ms.pid,
-          ms.year,
-          ms.month,
-          0 as total_quantity_sold,
-          0 as total_revenue,
-          0 as total_cost,
-          0 as order_count,
-          NULL as avg_price,
-          0 as profit_margin,
-          NULL as inventory_value,
-          0 as active_days
-        FROM monthly_stock ms
-        WHERE NOT EXISTS (
-          SELECT 1 FROM monthly_sales s2
-          WHERE s2.pid = ms.pid
-            AND s2.year = ms.year
-            AND s2.month = ms.month
-        )
-      ) s
-      LEFT JOIN monthly_stock ms
-        ON s.pid = ms.pid
-        AND s.year = ms.year
-        AND s.month = ms.month
-      JOIN base_products bp ON COALESCE(s.pid, ms.pid) = bp.pid
-      UNION
-      SELECT
-        ms.pid,
-        ms.year,
-        ms.month,
-        0 as total_quantity_sold,
-        0 as total_revenue,
-        0 as total_cost,
-        0 as order_count,
-        ms.stock_received,
-        ms.stock_ordered,
-        0 as avg_price,
-        0 as profit_margin,
-        bp.inventory_value,
-        0 as gmroi
-      FROM monthly_stock ms
-      JOIN base_products bp ON ms.pid = bp.pid
-      WHERE NOT EXISTS (
-        SELECT 1 FROM (
-          SELECT * FROM monthly_sales
-          UNION ALL
-          SELECT
-            ms2.pid,
-            ms2.year,
-            ms2.month,
-            0, 0, 0, 0, NULL, 0, NULL, 0
-          FROM monthly_stock ms2
-          WHERE NOT EXISTS (
-            SELECT 1 FROM monthly_sales s2
-            WHERE s2.pid = ms2.pid
-              AND s2.year = ms2.year
-              AND s2.month = ms2.month
-          )
-        ) s
-        WHERE s.pid = ms.pid
-          AND s.year = ms.year
-          AND s.month = ms.month
-      )
-      ON DUPLICATE KEY UPDATE
-        total_quantity_sold = VALUES(total_quantity_sold),
-        total_revenue = VALUES(total_revenue),
-        total_cost = VALUES(total_cost),
-        order_count = VALUES(order_count),
-        stock_received = VALUES(stock_received),
-        stock_ordered = VALUES(stock_ordered),
-        avg_price = VALUES(avg_price),
-        profit_margin = VALUES(profit_margin),
-        inventory_value = VALUES(inventory_value),
-        gmroi = VALUES(gmroi)
-    `);
-
-    processedCount = Math.floor(totalProducts * 0.60);
-    outputProgress({
-      status: 'running',
-      operation: 'Base time aggregates calculated, updating financial metrics',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
-
-    if (isCancelled) return {
-      processedProducts: processedCount,
-      processedOrders,
-      processedPurchaseOrders: 0,
-      success
-    };
-
-    // Update with financial metrics
-    await connection.query(`
-      UPDATE product_time_aggregates pta
-      JOIN (
-        SELECT
-          p.pid,
-          YEAR(o.date) as year,
-          MONTH(o.date) as month,
-          p.cost_price * p.stock_quantity as inventory_value,
-          SUM(o.quantity * (o.price - p.cost_price)) as gross_profit,
-          COUNT(DISTINCT DATE(o.date)) as active_days
-        FROM products p
-        LEFT JOIN orders o ON p.pid = o.pid
-        WHERE o.canceled = false
-        GROUP BY p.pid, YEAR(o.date), MONTH(o.date)
-      ) fin ON pta.pid = fin.pid
-        AND pta.year = fin.year
-        AND pta.month = fin.month
-      SET
-        pta.inventory_value = COALESCE(fin.inventory_value, 0)
-    `);
-
-    processedCount = Math.floor(totalProducts * 0.65);
-    outputProgress({
-      status: 'running',
-      operation: 'Financial metrics updated',
-      current: processedCount,
-      total: totalProducts,
-      elapsed: formatElapsedTime(startTime),
-      remaining: estimateRemaining(startTime, processedCount, totalProducts),
-      rate: calculateRate(startTime, processedCount),
-      percentage: ((processedCount / totalProducts) * 100).toFixed(1),
-      timing: {
-        start_time: new Date(startTime).toISOString(),
-        end_time: new Date().toISOString(),
-        elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
-      }
-    });
+    // Process in batches
+    let lastPid = 0;
+    while (true) {
+      if (isCancelled) break;
+
+      const [batch] = await connection.query(`
+        SELECT DISTINCT p.pid
+        FROM products p
+        FORCE INDEX (PRIMARY)
+        LEFT JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid
+        WHERE p.pid > ?
+          AND (
+            p.updated > ?
+            OR EXISTS (
+              SELECT 1
+              FROM orders o2 FORCE INDEX (idx_orders_metrics)
+              WHERE o2.pid = p.pid
+                AND o2.updated > ?
+            )
+          )
+        ORDER BY p.pid
+        LIMIT ?
+      `, [lastPid, lastCalculationTime, lastCalculationTime, BATCH_SIZE]);
+
+      if (batch.length === 0) break;
+
+      // Create temporary tables for better performance
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_order_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_purchase_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_time_aggregates');
+
+      // Create optimized temporary tables
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_order_stats (
+          pid BIGINT NOT NULL,
+          year INT NOT NULL,
+          month INT NOT NULL,
+          total_quantity_sold INT DEFAULT 0,
+          total_revenue DECIMAL(10,3) DEFAULT 0,
+          total_cost DECIMAL(10,3) DEFAULT 0,
+          order_count INT DEFAULT 0,
+          avg_price DECIMAL(10,3),
+          PRIMARY KEY (pid, year, month),
+          INDEX (pid)
+        ) ENGINE=MEMORY
+      `);
+
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_purchase_stats (
+          pid BIGINT NOT NULL,
+          year INT NOT NULL,
+          month INT NOT NULL,
+          stock_received INT DEFAULT 0,
+          stock_ordered INT DEFAULT 0,
+          PRIMARY KEY (pid, year, month),
+          INDEX (pid)
+        ) ENGINE=MEMORY
+      `);
+
+      await connection.query(`
+        CREATE TEMPORARY TABLE temp_time_aggregates (
+          pid BIGINT NOT NULL,
+          year INT NOT NULL,
+          month INT NOT NULL,
+          total_quantity_sold INT DEFAULT 0,
+          total_revenue DECIMAL(10,3) DEFAULT 0,
+          total_cost DECIMAL(10,3) DEFAULT 0,
+          order_count INT DEFAULT 0,
+          stock_received INT DEFAULT 0,
+          stock_ordered INT DEFAULT 0,
+          avg_price DECIMAL(10,3),
+          profit_margin DECIMAL(10,3),
+          inventory_value DECIMAL(10,3),
+          gmroi DECIMAL(10,3),
+          PRIMARY KEY (pid, year, month),
+          INDEX (pid)
+        ) ENGINE=MEMORY
+      `);
+
+      // Populate order stats
+      await connection.query(`
+        INSERT INTO temp_order_stats
+        SELECT
+          p.pid,
+          YEAR(o.date) as year,
+          MONTH(o.date) as month,
+          SUM(o.quantity) as total_quantity_sold,
+          SUM(o.quantity * o.price) as total_revenue,
+          SUM(o.quantity * p.cost_price) as total_cost,
+          COUNT(DISTINCT o.order_number) as order_count,
+          AVG(o.price) as avg_price
+        FROM products p
+        FORCE INDEX (PRIMARY)
+        INNER JOIN orders o FORCE INDEX (idx_orders_metrics) ON p.pid = o.pid
+        WHERE p.pid IN (?)
+          AND o.canceled = false
+          AND o.date >= DATE_SUB(CURRENT_DATE, INTERVAL 12 MONTH)
+        GROUP BY p.pid, YEAR(o.date), MONTH(o.date)
+      `, [batch.map(row => row.pid)]);
+
+      // Populate purchase stats
+      await connection.query(`
+        INSERT INTO temp_purchase_stats
+        SELECT
+          p.pid,
+          YEAR(po.date) as year,
+          MONTH(po.date) as month,
+          COALESCE(SUM(CASE WHEN po.received_date IS NOT NULL THEN po.received ELSE 0 END), 0) as stock_received,
+          COALESCE(SUM(po.ordered), 0) as stock_ordered
+        FROM products p
+        FORCE INDEX (PRIMARY)
+        INNER JOIN purchase_orders po FORCE INDEX (idx_po_metrics) ON p.pid = po.pid
+        WHERE p.pid IN (?)
+          AND po.date >= DATE_SUB(CURRENT_DATE, INTERVAL 12 MONTH)
+        GROUP BY p.pid, YEAR(po.date), MONTH(po.date)
+      `, [batch.map(row => row.pid)]);
+
+      // Combine stats and calculate metrics
+      await connection.query(`
+        INSERT INTO temp_time_aggregates
+        SELECT
+          o.pid,
+          o.year,
+          o.month,
+          o.total_quantity_sold,
+          o.total_revenue,
+          o.total_cost,
+          o.order_count,
+          COALESCE(ps.stock_received, 0) as stock_received,
+          COALESCE(ps.stock_ordered, 0) as stock_ordered,
+          o.avg_price,
+          CASE
+            WHEN o.total_revenue > 0
+            THEN ((o.total_revenue - o.total_cost) / o.total_revenue) * 100
+            ELSE 0
+          END as profit_margin,
+          p.cost_price * p.stock_quantity as inventory_value,
+          CASE
+            WHEN (p.cost_price * p.stock_quantity) > 0
+            THEN (o.total_revenue - o.total_cost) / (p.cost_price * p.stock_quantity)
+            ELSE 0
+          END as gmroi
+        FROM temp_order_stats o
+        LEFT JOIN temp_purchase_stats ps ON o.pid = ps.pid AND o.year = ps.year AND o.month = ps.month
+        JOIN products p FORCE INDEX (PRIMARY) ON o.pid = p.pid
+      `);
+
+      // Update final table with optimized batch update
+      await connection.query(`
+        INSERT INTO product_time_aggregates (
+          pid, year, month,
+          total_quantity_sold, total_revenue, total_cost,
+          order_count, stock_received, stock_ordered,
+          avg_price, profit_margin, inventory_value, gmroi
+        )
+        SELECT *
+        FROM temp_time_aggregates
+        ON DUPLICATE KEY UPDATE
+          total_quantity_sold = VALUES(total_quantity_sold),
+          total_revenue = VALUES(total_revenue),
+          total_cost = VALUES(total_cost),
+          order_count = VALUES(order_count),
+          stock_received = VALUES(stock_received),
+          stock_ordered = VALUES(stock_ordered),
+          avg_price = VALUES(avg_price),
+          profit_margin = VALUES(profit_margin),
+          inventory_value = VALUES(inventory_value),
+          gmroi = VALUES(gmroi)
+      `);
+
+      // Clean up temp tables
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_order_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_purchase_stats');
+      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_time_aggregates');
+
+      lastPid = batch[batch.length - 1].pid;
+      myProcessedProducts += batch.length; // Increment *this module's* count
+
+      outputProgress({
+        status: 'running',
+        operation: 'Processing time aggregates batch',
+        current: processedCount + myProcessedProducts, // Show cumulative progress
+        total: totalProducts,
+        elapsed: formatElapsedTime(startTime),
+        remaining: estimateRemaining(startTime, processedCount + myProcessedProducts, totalProducts),
+        rate: calculateRate(startTime, processedCount + myProcessedProducts),
+        percentage: (((processedCount + myProcessedProducts) / totalProducts) * 100).toFixed(1),
+        timing: {
+          start_time: new Date(startTime).toISOString(),
+          end_time: new Date().toISOString(),
+          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
+        }
+      });
+    }
 
     // If we get here, everything completed successfully
     success = true;
 
     // Update calculate_status
     await connection.query(`
       INSERT INTO calculate_status (module_name, last_calculation_timestamp)
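The batch loop above pages through products with keyset (seek) pagination — `WHERE p.pid > ? ORDER BY p.pid LIMIT ?` — resuming from the last `pid` seen rather than using an ever-growing `OFFSET`. An in-memory sketch of the same pattern (illustrative only, not the repo's code):

```javascript
// Yields batches of ids in ascending order, resuming after the last id of
// the previous batch — the in-memory analogue of `WHERE pid > ? LIMIT ?`.
function* keysetBatches(pids, batchSize) {
  const sorted = [...pids].sort((a, b) => a - b);
  let lastPid = 0; // same sentinel the loop above starts from
  while (true) {
    const batch = sorted.filter(pid => pid > lastPid).slice(0, batchSize);
    if (batch.length === 0) return; // mirrors `if (batch.length === 0) break;`
    lastPid = batch[batch.length - 1];
    yield batch;
  }
}
```

Because each query seeks directly to `lastPid` via the primary key, the cost per batch stays flat as the table grows, which is the point of preferring this over `LIMIT ... OFFSET ...`.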
@@ -284,8 +274,8 @@ async function calculateTimeAggregates(startTime, totalProducts, processedCount
     `);
 
     return {
-      processedProducts: processedCount,
-      processedOrders,
+      processedProducts: myProcessedProducts, // Return only what *this* module processed
+      processedOrders: 0,
       processedPurchaseOrders: 0,
       success
     };
@@ -4,20 +4,58 @@ const { getConnection } = require('./utils/db');
 async function calculateVendorMetrics(startTime, totalProducts, processedCount = 0, isCancelled = false) {
   const connection = await getConnection();
   let success = false;
-  let processedOrders = 0;
-  let processedPurchaseOrders = 0;
+  const BATCH_SIZE = 5000;
+  let myProcessedProducts = 0; // Not directly processing products, but we'll track vendors
 
   try {
+    // Get last calculation timestamp
+    const [lastCalc] = await connection.query(`
+      SELECT last_calculation_timestamp
+      FROM calculate_status
+      WHERE module_name = 'vendor_metrics'
+    `);
+    const lastCalculationTime = lastCalc[0]?.last_calculation_timestamp || '1970-01-01';
+
+    // Get total count of vendors needing updates using EXISTS for better performance
+    const [vendorCount] = await connection.query(`
+      SELECT COUNT(DISTINCT v.vendor) as count
+      FROM vendor_details v
+      WHERE v.status = 'active'
+        AND (
+          EXISTS (
+            SELECT 1 FROM products p
+            WHERE p.vendor = v.vendor
+              AND p.updated > ?
+          )
+          OR EXISTS (
+            SELECT 1 FROM purchase_orders po
+            WHERE po.vendor = v.vendor
+              AND po.updated > ?
+          )
+        )
+    `, [lastCalculationTime, lastCalculationTime]);
+    const totalVendors = vendorCount[0].count; // Track total *vendors*
+
+    if (totalVendors === 0) {
+      console.log('No vendors need metric updates');
+      return {
+        processedProducts: 0, // No products directly processed
+        processedOrders: 0,
+        processedPurchaseOrders: 0,
+        success: true
+      };
+    }
+
     if (isCancelled) {
       outputProgress({
         status: 'cancelled',
         operation: 'Vendor metrics calculation cancelled',
-        current: processedCount,
-        total: totalProducts,
+        current: processedCount, // Use passed-in value (for consistency)
+        total: totalVendors, // Report total *vendors*
         elapsed: formatElapsedTime(startTime),
         remaining: null,
         rate: calculateRate(startTime, processedCount),
-        percentage: ((processedCount / totalProducts) * 100).toFixed(1),
+        percentage: ((processedCount / totalVendors) * 100).toFixed(1), // Base on vendors
         timing: {
           start_time: new Date(startTime).toISOString(),
           end_time: new Date().toISOString(),
@@ -25,38 +63,22 @@ async function calculateVendorMetrics(startTime, totalProducts, processedCount =
|
|||||||
        }
      });
      return {
        processedProducts: 0, // No products directly processed
        processedOrders: 0,
        processedPurchaseOrders: 0,
        success
      };
    }

    outputProgress({
      status: 'running',
      operation: 'Starting vendor metrics calculation',
      current: processedCount, // Use passed-in value
      total: totalVendors, // Report total *vendors*
      elapsed: formatElapsedTime(startTime),
      remaining: estimateRemaining(startTime, processedCount, totalVendors),
      rate: calculateRate(startTime, processedCount),
      percentage: ((processedCount / totalVendors) * 100).toFixed(1), // Base on vendors
      timing: {
        start_time: new Date(startTime).toISOString(),
        end_time: new Date().toISOString(),
@@ -64,282 +86,197 @@ async function calculateVendorMetrics(startTime, totalProducts, processedCount =
      }
    });

    // Process in batches
    let lastVendor = '';
    let processedVendors = 0; // Track processed vendors
    while (true) {
      if (isCancelled) break;

      // Get batch of vendors using EXISTS for better performance
      const [batch] = await connection.query(`
        SELECT DISTINCT v.vendor
        FROM vendor_details v
        WHERE v.status = 'active'
          AND v.vendor > ?
          AND (
            EXISTS (
              SELECT 1
              FROM products p
              WHERE p.vendor = v.vendor
                AND p.updated > ?
            )
            OR EXISTS (
              SELECT 1
              FROM purchase_orders po
              WHERE po.vendor = v.vendor
                AND po.updated > ?
            )
          )
        ORDER BY v.vendor
        LIMIT ?
      `, [lastVendor, lastCalculationTime, lastCalculationTime, BATCH_SIZE]);

      if (batch.length === 0) break;

      // Create temporary tables with optimized structure and indexes
      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_purchase_stats');
      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_product_stats');

      await connection.query(`
        CREATE TEMPORARY TABLE temp_purchase_stats (
          vendor VARCHAR(100) NOT NULL,
          avg_lead_time_days DECIMAL(10,2),
          total_orders INT,
          total_late_orders INT,
          total_purchase_value DECIMAL(15,2),
          avg_order_value DECIMAL(15,2),
          on_time_delivery_rate DECIMAL(5,2),
          order_fill_rate DECIMAL(5,2),
          PRIMARY KEY (vendor),
          INDEX (total_orders),
          INDEX (total_purchase_value)
        ) ENGINE=MEMORY
      `);

      await connection.query(`
        CREATE TEMPORARY TABLE temp_product_stats (
          vendor VARCHAR(100) NOT NULL,
          total_products INT,
          active_products INT,
          avg_margin_percent DECIMAL(5,2),
          total_revenue DECIMAL(15,2),
          PRIMARY KEY (vendor),
          INDEX (total_products),
          INDEX (total_revenue)
        ) ENGINE=MEMORY
      `);

      // Populate purchase_stats temp table with optimized index usage
      await connection.query(`
        INSERT INTO temp_purchase_stats
        SELECT
          po.vendor,
          AVG(DATEDIFF(po.received_date, po.date)) as avg_lead_time_days,
          COUNT(DISTINCT po.po_id) as total_orders,
          COUNT(CASE WHEN DATEDIFF(po.received_date, po.date) > 30 THEN 1 END) as total_late_orders,
          SUM(po.ordered * po.po_cost_price) as total_purchase_value,
          AVG(po.ordered * po.po_cost_price) as avg_order_value,
          (COUNT(CASE WHEN DATEDIFF(po.received_date, po.date) <= 30 THEN 1 END) / COUNT(*)) * 100 as on_time_delivery_rate,
          (SUM(LEAST(po.received, po.ordered)) / NULLIF(SUM(po.ordered), 0)) * 100 as order_fill_rate
        FROM purchase_orders po
        FORCE INDEX (idx_vendor)
        WHERE po.vendor IN (?)
          AND po.received_date IS NOT NULL
          AND po.date >= DATE_SUB(CURRENT_DATE, INTERVAL 365 DAY)
          AND po.updated > ?
        GROUP BY po.vendor
      `, [batch.map(row => row.vendor), lastCalculationTime]);

      // Populate product stats with optimized index usage
      await connection.query(`
        INSERT INTO temp_product_stats
        SELECT
          p.vendor,
          COUNT(DISTINCT p.pid) as product_count,
          COUNT(DISTINCT CASE WHEN p.visible = true THEN p.pid END) as active_products,
          AVG(pm.avg_margin_percent) as avg_margin,
          SUM(pm.total_revenue) as total_revenue
        FROM products p
        FORCE INDEX (idx_vendor)
        LEFT JOIN product_metrics pm FORCE INDEX (PRIMARY) ON p.pid = pm.pid
        WHERE p.vendor IN (?)
          AND (
            p.updated > ?
            OR EXISTS (
              SELECT 1 FROM orders o FORCE INDEX (idx_orders_metrics)
              WHERE o.pid = p.pid
                AND o.updated > ?
            )
          )
        GROUP BY p.vendor
      `, [batch.map(row => row.vendor), lastCalculationTime, lastCalculationTime]);

      // Update metrics using temp tables with optimized join order
      await connection.query(`
        INSERT INTO vendor_metrics (
          vendor,
          avg_lead_time_days,
          on_time_delivery_rate,
          order_fill_rate,
          total_orders,
          total_late_orders,
          total_purchase_value,
          avg_order_value,
          active_products,
          total_products,
          total_revenue,
          avg_margin_percent,
          status,
          last_calculated_at
        )
        SELECT
          v.vendor,
          COALESCE(ps.avg_lead_time_days, 0) as avg_lead_time_days,
          COALESCE(ps.on_time_delivery_rate, 0) as on_time_delivery_rate,
          COALESCE(ps.order_fill_rate, 0) as order_fill_rate,
          COALESCE(ps.total_orders, 0) as total_orders,
          COALESCE(ps.total_late_orders, 0) as total_late_orders,
          COALESCE(ps.total_purchase_value, 0) as total_purchase_value,
          COALESCE(ps.avg_order_value, 0) as avg_order_value,
          COALESCE(prs.active_products, 0) as active_products,
          COALESCE(prs.total_products, 0) as total_products,
          COALESCE(prs.total_revenue, 0) as total_revenue,
          COALESCE(prs.avg_margin_percent, 0) as avg_margin_percent,
          v.status,
          NOW() as last_calculated_at
        FROM vendor_details v
        FORCE INDEX (PRIMARY)
        LEFT JOIN temp_purchase_stats ps ON v.vendor = ps.vendor
        LEFT JOIN temp_product_stats prs ON v.vendor = prs.vendor
        WHERE v.vendor IN (?)
        ON DUPLICATE KEY UPDATE
          avg_lead_time_days = VALUES(avg_lead_time_days),
          on_time_delivery_rate = VALUES(on_time_delivery_rate),
          order_fill_rate = VALUES(order_fill_rate),
          total_orders = VALUES(total_orders),
          total_late_orders = VALUES(total_late_orders),
          total_purchase_value = VALUES(total_purchase_value),
          avg_order_value = VALUES(avg_order_value),
          active_products = VALUES(active_products),
          total_products = VALUES(total_products),
          total_revenue = VALUES(total_revenue),
          avg_margin_percent = VALUES(avg_margin_percent),
          status = VALUES(status),
          last_calculated_at = NOW()
      `, [batch.map(row => row.vendor)]);

      // Clean up temp tables
      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_purchase_stats');
      await connection.query('DROP TEMPORARY TABLE IF EXISTS temp_product_stats');

      lastVendor = batch[batch.length - 1].vendor;
      processedVendors += batch.length; // Increment processed *vendors*

      outputProgress({
        status: 'running',
        operation: 'Processing vendor metrics batch',
        current: processedCount + processedVendors, // Use cumulative vendor count
        total: totalVendors, // Report total *vendors*
        elapsed: formatElapsedTime(startTime),
        remaining: estimateRemaining(startTime, processedCount + processedVendors, totalVendors),
        rate: calculateRate(startTime, processedCount + processedVendors),
        percentage: (((processedCount + processedVendors) / totalVendors) * 100).toFixed(1), // Base on vendors
        timing: {
          start_time: new Date(startTime).toISOString(),
          end_time: new Date().toISOString(),
          elapsed_seconds: Math.round((Date.now() - startTime) / 1000)
        }
      });
    }

    // If we get here, everything completed successfully
    success = true;

    // Update calculate_status
    await connection.query(`
      INSERT INTO calculate_status (module_name, last_calculation_timestamp)
@@ -348,9 +285,9 @@ async function calculateVendorMetrics(startTime, totalProducts, processedCount =
    `);

    return {
      processedProducts: 0, // No products directly processed
      processedOrders: 0,
      processedPurchaseOrders: 0,
      success
    };
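The batching loop above pages through vendors by keyset (`WHERE v.vendor > ? ORDER BY v.vendor LIMIT ?`, then advancing the cursor to the last row of each batch) rather than by OFFSET. A minimal sketch of that pattern, with an in-memory array standing in for the SQL data source (the names `fetchBatch` and `processAll` are illustrative, not from the diff):

```javascript
// Keyset-pagination sketch: fetch rows strictly greater than the last seen
// key, in key order, limited to a batch size; advance the cursor to the
// last row of each batch; stop on an empty batch.
const vendors = ['acme', 'bolt', 'cargo', 'delta', 'echo'];

function fetchBatch(lastVendor, limit) {
  // Stand-in for: SELECT vendor ... WHERE vendor > ? ORDER BY vendor LIMIT ?
  return vendors.filter(v => v > lastVendor).sort().slice(0, limit);
}

function processAll(batchSize) {
  const processed = [];
  let lastVendor = '';
  while (true) {
    const batch = fetchBatch(lastVendor, batchSize);
    if (batch.length === 0) break;       // no more rows: done
    processed.push(...batch);            // stand-in for the per-batch upserts
    lastVendor = batch[batch.length - 1]; // advance the keyset cursor
  }
  return processed;
}

console.log(processAll(2)); // visits every vendor exactly once, in order
```

Because each query filters on the key rather than skipping rows, the cost per batch stays bounded as the loop progresses, and a vendor inserted behind the cursor mid-run is simply skipped rather than causing rows to be re-read.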
@@ -156,7 +156,7 @@ async function resetDatabase() {
      SELECT GROUP_CONCAT(table_name) as tables
      FROM information_schema.tables
      WHERE table_schema = DATABASE()
        AND table_name NOT IN ('users', 'import_history')
    `);

    if (!tables[0].tables) {
@@ -175,7 +175,7 @@ async function resetDatabase() {
      DROP TABLE IF EXISTS
      ${tables[0].tables
        .split(',')
        .filter(table => table !== 'users')
        .map(table => '`' + table + '`')
        .join(', ')}
    `;
@@ -543,15 +543,5 @@ async function resetDatabase() {
  }
}

// Run the reset
resetDatabase();
@@ -2,7 +2,6 @@ const express = require('express');
const router = express.Router();
const { spawn } = require('child_process');
const path = require('path');

// Debug middleware MUST be first
router.use((req, res, next) => {
@@ -10,11 +9,9 @@ router.use((req, res, next) => {
  next();
});

// Store active import process and its progress
let activeImport = null;
let importProgress = null;

// SSE clients for progress updates
const updateClients = new Set();
@@ -22,16 +19,17 @@ const importClients = new Set();
const resetClients = new Set();
const resetMetricsClients = new Set();
const calculateMetricsClients = new Set();

// Helper to send progress to specific clients
function sendProgressToClients(clients, progress) {
  const data = typeof progress === 'string' ? { progress } : progress;

  // Ensure we have a status field
  if (!data.status) {
    data.status = 'running';
  }

  const message = `data: ${JSON.stringify(data)}\n\n`;

  clients.forEach(client => {
    try {
@@ -47,128 +45,8 @@ function sendProgressToClients(clients, data) {
  });
}

// Progress endpoints
router.get('/update/progress', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
@@ -177,19 +55,105 @@ router.get('/:type/progress', (req, res) => {
    'Access-Control-Allow-Credentials': 'true'
  });

  // Send an initial message to test the connection
  res.write('data: {"status":"running","operation":"Initializing connection..."}\n\n');

  // Add this client to the update set
  updateClients.add(res);

  // Remove client when connection closes
  req.on('close', () => {
    updateClients.delete(res);
  });
});

router.get('/import/progress', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Access-Control-Allow-Origin': req.headers.origin || '*',
    'Access-Control-Allow-Credentials': 'true'
  });

  // Send an initial message to test the connection
  res.write('data: {"status":"running","operation":"Initializing connection..."}\n\n');

  // Add this client to the import set
  importClients.add(res);

  // Remove client when connection closes
  req.on('close', () => {
    importClients.delete(res);
  });
});

router.get('/reset/progress', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Access-Control-Allow-Origin': req.headers.origin || '*',
    'Access-Control-Allow-Credentials': 'true'
  });

  // Send an initial message to test the connection
  res.write('data: {"status":"running","operation":"Initializing connection..."}\n\n');

  // Add this client to the reset set
  resetClients.add(res);

  // Remove client when connection closes
  req.on('close', () => {
    resetClients.delete(res);
  });
});

// Add reset-metrics progress endpoint
router.get('/reset-metrics/progress', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Access-Control-Allow-Origin': req.headers.origin || '*',
    'Access-Control-Allow-Credentials': 'true'
  });

  // Send an initial message to test the connection
  res.write('data: {"status":"running","operation":"Initializing connection..."}\n\n');

  // Add this client to the reset-metrics set
  resetMetricsClients.add(res);

  // Remove client when connection closes
  req.on('close', () => {
    resetMetricsClients.delete(res);
  });
});

// Add calculate-metrics progress endpoint
router.get('/calculate-metrics/progress', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Access-Control-Allow-Origin': req.headers.origin || '*',
    'Access-Control-Allow-Credentials': 'true'
  });

  // Send current progress if it exists
  if (importProgress) {
    res.write(`data: ${JSON.stringify(importProgress)}\n\n`);
  } else {
    res.write('data: {"status":"running","operation":"Initializing connection..."}\n\n');
  }

  // Add this client to the calculate-metrics set
  calculateMetricsClients.add(res);

  // Remove client when connection closes
  req.on('close', () => {
    calculateMetricsClients.delete(res);
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
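The progress endpoints above all speak Server-Sent Events: each message is a single `data: <json>` line followed by a blank line. A minimal sketch of decoding one such frame on the consuming side (the `parseSseFrame` helper is hypothetical, not part of the diff):

```javascript
// Hypothetical helper: extract the JSON payload from one SSE frame as
// emitted by these /progress endpoints ("data: {...}\n\n").
function parseSseFrame(frame) {
    const dataLine = frame
        .split('\n')
        .find((line) => line.startsWith('data: '));
    return dataLine ? JSON.parse(dataLine.slice('data: '.length)) : null;
}

const frame = 'data: {"status":"running","operation":"Initializing connection..."}\n\n';
console.log(parseSseFrame(frame).status); // "running"
```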
@@ -210,6 +174,7 @@ router.get('/status', (req, res) => {

 // Add calculate-metrics status endpoint
 router.get('/calculate-metrics/status', (req, res) => {
+    console.log('Calculate metrics status endpoint hit');
     const calculateMetrics = require('../../scripts/calculate-metrics');
     const progress = calculateMetrics.getProgress();
@@ -406,35 +371,49 @@ router.post('/import', async (req, res) => {

 // Route to cancel active process
 router.post('/cancel', (req, res) => {
-    let killed = false;
-
-    // Get the operation type from the request
-    const { type } = req.query;
-    const clients = type === 'update' ? fullUpdateClients : fullResetClients;
-    const activeProcess = type === 'update' ? activeFullUpdate : activeFullReset;
-
-    if (activeProcess) {
-        try {
-            activeProcess.kill('SIGTERM');
-            if (type === 'update') {
-                activeFullUpdate = null;
-            } else {
-                activeFullReset = null;
-            }
-            killed = true;
-            sendProgressToClients(clients, JSON.stringify({
-                status: 'cancelled',
-                operation: 'Operation cancelled'
-            }));
-        } catch (err) {
-            console.error(`Error killing ${type} process:`, err);
-        }
+    if (!activeImport) {
+        return res.status(404).json({ error: 'No active process to cancel' });
     }

-    if (killed) {
+    try {
+        // If it's the prod import module, call its cancel function
+        if (typeof activeImport.cancelImport === 'function') {
+            activeImport.cancelImport();
+        } else {
+            // Otherwise it's a child process
+            activeImport.kill('SIGTERM');
+        }
+
+        // Get the operation type from the request
+        const { operation } = req.query;
+
+        // Send cancel message only to the appropriate client set
+        const cancelMessage = {
+            status: 'cancelled',
+            operation: 'Operation cancelled'
+        };
+
+        switch (operation) {
+            case 'update':
+                sendProgressToClients(updateClients, cancelMessage);
+                break;
+            case 'import':
+                sendProgressToClients(importClients, cancelMessage);
+                break;
+            case 'reset':
+                sendProgressToClients(resetClients, cancelMessage);
+                break;
+            case 'calculate-metrics':
+                sendProgressToClients(calculateMetricsClients, cancelMessage);
+                break;
+        }
+
         res.json({ success: true });
-    } else {
-        res.status(404).json({ error: 'No active process to cancel' });
+    } catch (error) {
+        // Even if there's an error, try to clean up
+        activeImport = null;
+        importProgress = null;
+        res.status(500).json({ error: 'Failed to cancel process' });
     }
 });
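The `switch` in the new cancel handler maps an `operation` name to the client set that should be notified. The same dispatch can be written table-driven; a sketch (the plain-object registry and helper are illustrations, not code from the commit):

```javascript
// Illustrative registry mapping operation names to their SSE client sets,
// mirroring the switch in the /cancel route.
const clientSets = {
    'update': new Set(),
    'import': new Set(),
    'reset': new Set(),
    'calculate-metrics': new Set(),
};

// Hypothetical helper: pick the set to notify, or null for unknown operations.
function clientsForOperation(operation) {
    return Object.prototype.hasOwnProperty.call(clientSets, operation)
        ? clientSets[operation]
        : null;
}

console.log(clientsForOperation('import') instanceof Set); // true
console.log(clientsForOperation('unknown'));               // null
```

A lookup table keeps the route body short and makes adding a new operation a one-line change, at the cost of hiding the dispatch from readers scanning the handler.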
@@ -573,6 +552,20 @@ router.post('/reset-metrics', async (req, res) => {
     }
 });

+// Add calculate-metrics status endpoint
+router.get('/calculate-metrics/status', (req, res) => {
+    const calculateMetrics = require('../../scripts/calculate-metrics');
+    const progress = calculateMetrics.getProgress();
+
+    // Only consider it active if both the process is running and we have progress
+    const isActive = !!activeImport && !!progress;
+
+    res.json({
+        active: isActive,
+        progress: isActive ? progress : null
+    });
+});
+
 // Add calculate-metrics endpoint
 router.post('/calculate-metrics', async (req, res) => {
     if (activeImport) {
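The status route above reports `active` only when both a process handle and a progress object exist. Factored as a pure function for clarity (the helper name is an assumption, not from the diff):

```javascript
// Hypothetical helper mirroring the /calculate-metrics/status logic:
// active only when a process is running AND it has reported progress.
function statusPayload(activeImport, progress) {
    const isActive = !!activeImport && !!progress;
    return {
        active: isActive,
        progress: isActive ? progress : null,
    };
}

console.log(statusPayload({ pid: 123 }, { step: 2 }).active); // true
console.log(statusPayload(null, { step: 2 }).active);         // false
```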
@@ -718,96 +711,4 @@ router.post('/import-from-prod', async (req, res) => {
     }
 });

-// POST /csv/full-update - Run full update script
-router.post('/full-update', async (req, res) => {
-    try {
-        const scriptPath = path.join(__dirname, '../../scripts/full-update.js');
-        runScript(scriptPath, 'update', fullUpdateClients)
-            .catch(error => {
-                console.error('Update failed:', error);
-            });
-        res.status(202).json({ message: 'Update started' });
-    } catch (error) {
-        res.status(500).json({ error: error.message });
-    }
-});
-
-// POST /csv/full-reset - Run full reset script
-router.post('/full-reset', async (req, res) => {
-    try {
-        const scriptPath = path.join(__dirname, '../../scripts/full-reset.js');
-        runScript(scriptPath, 'reset', fullResetClients)
-            .catch(error => {
-                console.error('Reset failed:', error);
-            });
-        res.status(202).json({ message: 'Reset started' });
-    } catch (error) {
-        res.status(500).json({ error: error.message });
-    }
-});
-
-// GET /history/import - Get recent import history
-router.get('/history/import', async (req, res) => {
-    try {
-        const pool = req.app.locals.pool;
-        const [rows] = await pool.query(`
-            SELECT * FROM import_history
-            ORDER BY start_time DESC
-            LIMIT 20
-        `);
-        res.json(rows || []);
-    } catch (error) {
-        console.error('Error fetching import history:', error);
-        res.status(500).json({ error: error.message });
-    }
-});
-
-// GET /history/calculate - Get recent calculation history
-router.get('/history/calculate', async (req, res) => {
-    try {
-        const pool = req.app.locals.pool;
-        const [rows] = await pool.query(`
-            SELECT * FROM calculate_history
-            ORDER BY start_time DESC
-            LIMIT 20
-        `);
-        res.json(rows || []);
-    } catch (error) {
-        console.error('Error fetching calculate history:', error);
-        res.status(500).json({ error: error.message });
-    }
-});
-
-// GET /status/modules - Get module calculation status
-router.get('/status/modules', async (req, res) => {
-    try {
-        const pool = req.app.locals.pool;
-        const [rows] = await pool.query(`
-            SELECT module_name, last_calculation_timestamp
-            FROM calculate_status
-            ORDER BY module_name
-        `);
-        res.json(rows || []);
-    } catch (error) {
-        console.error('Error fetching module status:', error);
-        res.status(500).json({ error: error.message });
-    }
-});
-
-// GET /status/tables - Get table sync status
-router.get('/status/tables', async (req, res) => {
-    try {
-        const pool = req.app.locals.pool;
-        const [rows] = await pool.query(`
-            SELECT table_name, last_sync_timestamp
-            FROM sync_status
-            ORDER BY table_name
-        `);
-        res.json(rows || []);
-    } catch (error) {
-        console.error('Error fetching table status:', error);
-        res.status(500).json({ error: error.message });
-    }
-});
-
 module.exports = router;
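The removed history routes hard-code `LIMIT 20`. If a caller-supplied limit were ever reintroduced, it should be clamped rather than interpolated raw into the SQL; a hypothetical guard, not part of either side of the diff:

```javascript
// Hypothetical guard: sanitize a user-supplied ?limit= value before it is
// used in a LIMIT clause, falling back to the routes' default of 20.
function clampLimit(raw, fallback = 20, max = 100) {
    const n = Number.parseInt(raw, 10);
    if (!Number.isInteger(n) || n < 1) return fallback;
    return Math.min(n, max);
}

console.log(clampLimit('50'));      // 50
console.log(clampLimit('banana')); // 20
console.log(clampLimit('9999'));   // 100
```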
@@ -4,23 +4,29 @@ import { ScrollArea } from "@/components/ui/scroll-area"
 import { Table, TableBody, TableCell, TableHead, TableHeader, TableRow } from "@/components/ui/table"
 import config from "@/config"

-interface Product {
-    pid: number;
-    sku: string;
-    title: string;
-    stock_quantity: number;
-    daily_sales_avg: string;
-    reorder_qty: number;
-    last_purchase_date: string | null;
+interface ReplenishmentMetricsData {
+    productsToReplenish: number;
+    unitsToReplenish: number;
+    replenishmentCost: number;
+    replenishmentRetail: number;
+    topVariants: {
+        id: number;
+        title: string;
+        currentStock: number;
+        replenishQty: number;
+        replenishCost: number;
+        replenishRetail: number;
+        status: string;
+    }[];
 }

 export function TopReplenishProducts() {
-    const { data } = useQuery<Product[]>({
-        queryKey: ["top-replenish-products"],
+    const { data } = useQuery<ReplenishmentMetricsData>({
+        queryKey: ["replenishment-metrics"],
         queryFn: async () => {
-            const response = await fetch(`${config.apiUrl}/dashboard/replenish/products?limit=50`)
+            const response = await fetch(`${config.apiUrl}/dashboard/replenishment/metrics`)
             if (!response.ok) {
-                throw new Error("Failed to fetch products to replenish")
+                throw new Error("Failed to fetch replenishment metrics")
             }
             return response.json()
         },
@@ -38,29 +44,28 @@ export function TopReplenishProducts() {
             <TableRow>
                 <TableHead>Product</TableHead>
                 <TableHead className="text-right">Stock</TableHead>
-                <TableHead className="text-right">Daily Sales</TableHead>
                 <TableHead className="text-right">Reorder Qty</TableHead>
-                <TableHead>Last Purchase</TableHead>
+                <TableHead className="text-right">Cost</TableHead>
+                <TableHead>Status</TableHead>
             </TableRow>
         </TableHeader>
         <TableBody>
-            {data?.map((product) => (
-                <TableRow key={product.pid}>
+            {data?.topVariants?.map((product) => (
+                <TableRow key={product.id}>
                     <TableCell>
                         <a
-                            href={`https://backend.acherryontop.com/product/${product.pid}`}
+                            href={`https://backend.acherryontop.com/product/${product.id}`}
                             target="_blank"
                            rel="noopener noreferrer"
                            className="hover:underline"
                         >
                             {product.title}
                         </a>
-                        <div className="text-sm text-muted-foreground">{product.sku}</div>
                     </TableCell>
-                    <TableCell className="text-right">{product.stock_quantity}</TableCell>
-                    <TableCell className="text-right">{Number(product.daily_sales_avg).toFixed(1)}</TableCell>
-                    <TableCell className="text-right">{product.reorder_qty}</TableCell>
-                    <TableCell>{product.last_purchase_date ? product.last_purchase_date : '-'}</TableCell>
+                    <TableCell className="text-right">{product.currentStock}</TableCell>
+                    <TableCell className="text-right">{product.replenishQty}</TableCell>
+                    <TableCell className="text-right">${product.replenishCost.toFixed(2)}</TableCell>
+                    <TableCell>{product.status}</TableCell>
                 </TableRow>
             ))}
         </TableBody>
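The new payload delivers pre-aggregated totals (`replenishmentCost`, `replenishmentRetail`) alongside `topVariants`. If those totals ever had to be recomputed client-side, a sketch of the aggregation (the helper is hypothetical and assumes the totals are plain sums over the variant rows):

```javascript
// Hypothetical client-side aggregation over topVariants; assumes the
// API's replenishmentCost/replenishmentRetail are simple sums.
function sumReplenishment(topVariants) {
    return topVariants.reduce(
        (acc, v) => ({
            cost: acc.cost + v.replenishCost,
            retail: acc.retail + v.replenishRetail,
        }),
        { cost: 0, retail: 0 }
    );
}

const totals = sumReplenishment([
    { replenishCost: 10.5, replenishRetail: 25 },
    { replenishCost: 4.5, replenishRetail: 12 },
]);
console.log(totals.cost);   // 15
console.log(totals.retail); // 37
```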
File diff suppressed because it is too large
@@ -133,10 +133,6 @@ export function PerformanceMetrics() {
         }
     };

-    function getCategoryName(_cat_id: number): import("react").ReactNode {
-        throw new Error('Function not implemented.');
-    }
-
     return (
         <div className="max-w-[700px] space-y-4">
             {/* Lead Time Thresholds Card */}
@@ -209,11 +205,11 @@ export function PerformanceMetrics() {
     <Table>
         <TableHeader>
             <TableRow>
-                <TableCell>Category</TableCell>
-                <TableCell>Vendor</TableCell>
-                <TableCell className="text-right">A Threshold</TableCell>
-                <TableCell className="text-right">B Threshold</TableCell>
-                <TableCell className="text-right">Period Days</TableCell>
+                <TableHead>Category</TableHead>
+                <TableHead>Vendor</TableHead>
+                <TableHead className="text-right">A Threshold</TableHead>
+                <TableHead className="text-right">B Threshold</TableHead>
+                <TableHead className="text-right">Period Days</TableHead>
             </TableRow>
         </TableHeader>
         <TableBody>
@@ -246,10 +242,10 @@ export function PerformanceMetrics() {
     <Table>
         <TableHeader>
             <TableRow>
-                <TableCell>Category</TableCell>
-                <TableCell>Vendor</TableCell>
-                <TableCell className="text-right">Period Days</TableCell>
-                <TableCell className="text-right">Target Rate</TableCell>
+                <TableHead>Category</TableHead>
+                <TableHead>Vendor</TableHead>
+                <TableHead className="text-right">Period Days</TableHead>
+                <TableHead className="text-right">Target Rate</TableHead>
             </TableRow>
         </TableHeader>
         <TableBody>
@@ -5,6 +5,7 @@ import { Input } from "@/components/ui/input";
 import { Label } from "@/components/ui/label";
 import { toast } from "sonner";
 import config from '../../config';
+import { Table, TableBody, TableCell, TableHead, TableHeader, TableRow } from "@/components/ui/table";

 interface StockThreshold {
     id: number;
@@ -243,6 +244,54 @@ export function StockManagement() {
             </div>
         </CardContent>
     </Card>
+
+    <Table>
+        <TableHeader>
+            <TableRow>
+                <TableHead>Category</TableHead>
+                <TableHead>Vendor</TableHead>
+                <TableHead className="text-right">Critical Days</TableHead>
+                <TableHead className="text-right">Reorder Days</TableHead>
+                <TableHead className="text-right">Overstock Days</TableHead>
+                <TableHead className="text-right">Low Stock</TableHead>
+                <TableHead className="text-right">Min Reorder</TableHead>
+            </TableRow>
+        </TableHeader>
+        <TableBody>
+            {stockThresholds.map((threshold) => (
+                <TableRow key={`${threshold.cat_id}-${threshold.vendor}`}>
+                    <TableCell>{threshold.cat_id ? getCategoryName(threshold.cat_id) : 'Global'}</TableCell>
+                    <TableCell>{threshold.vendor || 'All Vendors'}</TableCell>
+                    <TableCell className="text-right">{threshold.critical_days}</TableCell>
+                    <TableCell className="text-right">{threshold.reorder_days}</TableCell>
+                    <TableCell className="text-right">{threshold.overstock_days}</TableCell>
+                    <TableCell className="text-right">{threshold.low_stock_threshold}</TableCell>
+                    <TableCell className="text-right">{threshold.min_reorder_quantity}</TableCell>
+                </TableRow>
+            ))}
+        </TableBody>
+    </Table>
+
+    <Table>
+        <TableHeader>
+            <TableRow>
+                <TableHead>Category</TableHead>
+                <TableHead>Vendor</TableHead>
+                <TableHead className="text-right">Coverage Days</TableHead>
+                <TableHead className="text-right">Service Level</TableHead>
+            </TableRow>
+        </TableHeader>
+        <TableBody>
+            {safetyStockConfigs.map((config) => (
+                <TableRow key={`${config.cat_id}-${config.vendor}`}>
+                    <TableCell>{config.cat_id ? getCategoryName(config.cat_id) : 'Global'}</TableCell>
+                    <TableCell>{config.vendor || 'All Vendors'}</TableCell>
+                    <TableCell className="text-right">{config.coverage_days}</TableCell>
+                    <TableCell className="text-right">{config.service_level}%</TableCell>
+                </TableRow>
+            ))}
+        </TableBody>
+    </Table>
 </div>
 );
 }
@@ -1,7 +1,7 @@
 #!/bin/zsh

 #Clear previous mount in case it’s still there
-umount /Users/matt/Library/Mobile Documents/com~apple~CloudDocs/Dev/inventory/inventory-server
+umount ~/Dev/inventory/inventory-server

 #Mount
-sshfs matt@dashboard.kent.pw:/var/www/html/inventory -p 22122 /Users/matt/Library/Mobile Documents/com~apple~CloudDocs/Dev/inventory/inventory-server/
+sshfs matt@dashboard.kent.pw:/var/www/html/inventory -p 22122 ~/Dev/inventory/inventory-server/