CTech Digital

Converting Python to C++

  • July 2, 2025, in CTech Metrology, by Luc Wens


    Introduction

    Jesse's code consists of two main scripts:

    1. C:\XAI\python\5GAlert\speedtrack.py : calculates the speed, I assume
    2. C:\XAI\python\5GAlert\FINAL.py : main detection program

    Both scripts are Python; the goal is to integrate these routines into the 5G Alert object detector to make it more performant.

    Today we upload both scripts and ask Gemini to convert everything to C++.

    Gemini's response

    Of course. Converting these advanced Python scripts to C++ is a significant undertaking, as it involves moving from a dynamically-typed language with a rich package ecosystem to a statically-typed language requiring manual memory management and more complex build setups.

    The Python scripts FINAL.py and speedtrack.py create a sophisticated LiDAR object detection and tracking pipeline. A C++ version offers potential performance gains and easier integration into existing C++-based systems.

    Here is a comprehensive, structured C++ version of the project.

    1. Project Overview and Prerequisites

    This C++ project mirrors the functionality of your Python scripts: it loads proprietary .xpc LiDAR data, performs object detection using a PyTorch model, tracks objects using DeepSORT, calculates their real-world coordinates and velocity, and outputs the results.

    Before you can compile and run this C++ code, you must set up your environment. This is the most critical part of the conversion process.

    A. Export Your Detectron2 Model to TorchScript

    The C++ PyTorch API (LibTorch) cannot run Python code directly. You must first convert your trained model into an intermediate format called TorchScript.

    Create a new Python script (export_model.py) to do this:

    Python

    # export_model.py
    import torch
    from detectron2.config import get_cfg
    from detectron2 import model_zoo
    from detectron2.engine import DefaultPredictor
    
    def export_to_torchscript():
        """
        Exports the Detectron2 model to a TorchScript file for use in C++.
        """
        print("CUDA available:", torch.cuda.is_available())
        
        # Configure the model exactly as in your original script
        cfg = get_cfg()
        cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
        cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.1 # Use the same threshold
        cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
        cfg.MODEL.DEVICE = "cpu" # Export the model on the CPU for better compatibility
        
        predictor = DefaultPredictor(cfg)
        model = predictor.model
        
        # Set the model to evaluation mode
        model.eval()
    
        # Create a dummy input tensor of the expected size
        # The Ouster LiDAR data has 64 beams (height). The width can vary. Let's assume a typical width like 1024.
        # The model expects a 3-channel image (CHW format).
        dummy_input = torch.randn(3, 64, 1024).to(cfg.MODEL.DEVICE)
    
        # Trace the model
        # Note: Detectron2 models often need to be traced on their 'forward' method
        # and might require a specific input format (a list of dictionaries).
        # This is a simplified tracing attempt; complex models may require wrapper classes.
        try:
            # A common way to package a model for inference
            class ModelWrapper(torch.nn.Module):
                def __init__(self, model):
                    super().__init__()
                    self.model = model
                
                def forward(self, image):
                    # The model expects a list of inputs, each being a dict
                    inputs = [{"image": image}]
                    # The output is also a list of dicts, we extract the 'instances'
                    outputs = self.model(inputs)
                    return outputs[0]['instances']
    
            wrapped_model = ModelWrapper(model)
            traced_script_module = torch.jit.trace(wrapped_model, dummy_input)
            traced_script_module.save("detectron_model.pt")
            print("Model successfully exported to detectron_model.pt")
    
        except Exception as e:
            print(f"Error during model tracing: {e}")
            print("Model export failed. Complex models like Faster R-CNN might require a custom 'forward' method or a different tracing approach.")
    
    if __name__ == "__main__":
        export_to_torchscript()
    

    Run this script to generate detectron_model.pt. This file is what your C++ code will load.

    B. C++ Library Dependencies

    You need to install the following C++ libraries:

    1. LibTorch: The C++ distribution of PyTorch. Download it directly from the PyTorch website.
    2. OpenCV: The standard for computer vision tasks.
    3. Eigen: A high-performance library for linear algebra (a numpy replacement).
    4. Ouster SDK (C++): The official C++ SDK for processing Ouster sensor data.
    5. GPC360 SDK (C++): The proprietary C++ SDK for loading .xpc files. You should have access to this.
    6. PROJ: The C++ library for cartographic projections (a pyproj replacement).
    7. nlohmann/json: A popular header-only library for handling JSON.
    8. DeepSORT C++ Implementation: There is no official C++ version of deep_sort_realtime. You would need to implement the core components (Kalman Filter, Hungarian Algorithm) or find a third-party C++ port. For this conversion, we will stub out the interface.

    2. C++ Project Structure

    Here is a logical way to structure the C++ project files.

    /LidarTracker/
    |-- CMakeLists.txt
    |-- README.md
    |-- main.cpp
    |-- LidarProcessor.hpp
    |-- LidarProcessor.cpp
    |-- TrackedObject.hpp
    |-- DeepSort.hpp         // Interface for a DeepSORT implementation
    |-- data/
    |   |-- xpc/
    |   |   |-- frame00000000.xpc
    |   |   |-- frame00000000.conf
    |   |-- ins/
    |   |   |-- ins_0001.csv
    |-- models/
    |   |-- detectron_model.pt
    |-- output_json/
    

    3. C++ Source Code

    CMakeLists.txt

    This file is crucial for telling the compiler how to find your libraries and build the project.

    CMake

    cmake_minimum_required(VERSION 3.15)
    project(LidarTracker)
    
    set(CMAKE_CXX_STANDARD 17)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)
    
    # --- Find Dependencies ---
    # You must set these paths to where you installed the libraries
    set(OpenCV_DIR "/path/to/opencv/build")
    set(Torch_DIR "/path/to/libtorch")
    
    find_package(Torch REQUIRED)
    find_package(OpenCV REQUIRED)
    
    # Add other libraries like Eigen, Ouster SDK, GPC360 SDK, PROJ
    # This is highly dependent on how you installed them.
    # Example for Eigen (often header-only):
    # find_package(Eigen3 REQUIRED)
    # include_directories(${EIGEN3_INCLUDE_DIR})
    
    # Add include directories
    include_directories(${OpenCV_INCLUDE_DIRS})
    # Add GPC360, Ouster, PROJ include paths here...
    
    # --- Create Executable ---
    add_executable(lidar_tracker main.cpp LidarProcessor.cpp)
    
    # --- Link Libraries ---
    target_link_libraries(lidar_tracker
        ${TORCH_LIBRARIES}
        ${OpenCV_LIBS}
        # Add GPC360, Ouster, PROJ library links here...
    )
    
    set_property(TARGET lidar_tracker PROPERTY CXX_STANDARD 17)
    

    TrackedObject.hpp

    A header to define the data structures, similar to the dictionaries in Python.

    C++

    #ifndef TRACKED_OBJECT_HPP
    #define TRACKED_OBJECT_HPP
    
    #include <string>
    #include <vector>
    #include <cstdint>
    #include "json.hpp" // nlohmann/json
    
    // Using nlohmann::json for convenience
    using json = nlohmann::json;
    
    struct Vector3D {
        double x = 0.0, y = 0.0, z = 0.0;
        json to_json() const { return {{"x", x}, {"y", y}, {"z", z}}; }
    };
    
    struct Quaternion {
        double qw = 0.0, qx = 0.0, qy = 0.0, qz = 0.0;
        json to_json() const { return {{"qw", qw}, {"qx", qx}, {"qy", qy}, {"qz", qz}}; }
    };
    
    struct TrackedObject {
        int64_t id;
        std::string uuid;
        std::string classification;
        float classification_confidence;
        int64_t creation_ts;
        int64_t update_ts;
        int frame_count;
        double heading_rad;
        Vector3D heading_vector;
        Vector3D position_llh_deg;
        Vector3D velocity_mps;
    
        json to_json() const {
            json j;
            j["id"] = id;
            j["uuid"] = uuid;
            j["classification"] = classification;
            j["classification_confidence"] = classification_confidence;
            j["creation_ts"] = creation_ts;
            j["update_ts"] = update_ts;
            j["frame_count"] = frame_count;
            j["heading_rad"] = heading_rad;
            j["heading_vector"] = heading_vector.to_json();
            j["position_llh_deg"] = position_llh_deg.to_json();
            j["velocity_mps"] = velocity_mps.to_json();
            // Add other fields as needed
            return j;
        }
    };
    
    #endif // TRACKED_OBJECT_HPP
    

    LidarProcessor.hpp

    Header file for our main processing class.

    C++

    #ifndef LIDAR_PROCESSOR_HPP
    #define LIDAR_PROCESSOR_HPP
    
    #include <string>
    #include <torch/script.h>
    #include <opencv2/opencv.hpp>
    // Include other necessary headers: GPC360, Ouster, PROJ, etc.
    
    class LidarProcessor {
    public:
        LidarProcessor(const std::string& model_path, const std::string& conf_path, const std::string& ins_path);
        void process_file(const std::string& xpc_path, const std::string& output_dir, int start_angle, int end_angle);
    
    private:
        // --- Model and Device ---
        torch::Device device_;
        torch::jit::script::Module module_;
    
        // --- Sensor and Calibration Data ---
        // Ouster and GPC360 SDK objects would go here
        // e.g., ouster::sensor::sensor_info metadata_;
        // e.g., ouster::sensor::XYZLut xyz_lut_;
        
        // --- Coordinate Systems ---
        // PROJ objects for transformation
        // Eigen::Matrix3d R_lidar_to_world_;
        // Eigen::Vector3d lidar_world_pos_;
    
        // --- Background Subtractor ---
        cv::Ptr<cv::BackgroundSubtractorMOG2> bg_subtractor_;
        
        // --- Helper Methods ---
        void load_ins_data(const std::string& ins_path);
        std::vector<std::tuple<cv::Rect, float, int>> process_detections_d2(const torch::IValue& outputs);
    };
    
    #endif // LIDAR_PROCESSOR_HPP
    

    LidarProcessor.cpp

    The core implementation. Note: This contains placeholder code where proprietary SDKs (GPC360, Ouster) and the DeepSORT C++ implementation would be used.

    C++

    #include "LidarProcessor.hpp"
    #include <iostream>
    #include <fstream>
    #include <vector>
    #include <iomanip>           // std::setw, used below to pretty-print JSON
    #include "TrackedObject.hpp" // brings in nlohmann::json and the `json` alias
    // Include other necessary C++ headers
    
    // Placeholder for proprietary SDKs
    // #include <gpc360_sdk.h>  // hypothetical header name for the proprietary C++ SDK
    // #include <ouster/sdk.h>
    // #include <proj.h>
    
    
    LidarProcessor::LidarProcessor(const std::string& model_path, const std::string& conf_path, const std::string& ins_path)
        : device_(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU) {
        
        // 1. Load TorchScript Model
        try {
            module_ = torch::jit::load(model_path);
            module_.to(device_);
            module_.eval();
            std::cout << "Successfully loaded model to " << (device_.is_cuda() ? "CUDA" : "CPU") << std::endl;
        } catch (const c10::Error& e) {
            std::cerr << "Error loading the model:\n" << e.msg() << std::endl;
            exit(1);
        }
    
        // 2. Initialize Background Subtractor
        bg_subtractor_ = cv::createBackgroundSubtractorMOG2(50, 4000, false); // history, varThreshold, detectShadows
        
        // 3. Load LiDAR Metadata (using placeholder logic)
        // metadata_ = ouster::sensor::sensor_info(gpc360_sdk::load_from_file(conf_path).metadata());
        // xyz_lut_ = ouster::sensor::XYZLut(metadata_);
        std::cout << "LiDAR metadata initialized (placeholder)." << std::endl;
    
        // 4. Load INS Data and Initialize Transformers
        load_ins_data(ins_path);
        std::cout << "INS data loaded and transformations initialized (placeholder)." << std::endl;
    }
    
    void LidarProcessor::load_ins_data(const std::string& ins_path) {
        // In C++, you'd use fstream to read the CSV file
        std::ifstream ins_file(ins_path);
        if (!ins_file.is_open()) {
            std::cerr << "Error: INS file not found at " << ins_path << std::endl;
            return;
        }
        // Parse the CSV, get yaw/pitch/roll and lat/lon/alt
        // Initialize PROJ transformers (e.g., proj_create_crs_to_crs)
        // Calculate R_lidar_to_world (rotation matrix) and lidar_world_pos (translation vector)
        // This would involve using Eigen for matrix operations.
    }
    
    void LidarProcessor::process_file(const std::string& xpc_path, const std::string& output_dir, int start_angle, int end_angle) {
        std::cout << "Processing " << xpc_path << "..." << std::endl;
        
        // --- 1. Load Data (Placeholder) ---
        // auto xpc_data = gpc360_sdk::load_from_file(xpc_path);
        // auto intensity = xpc_data.intensity(); // And nearIr, range
        // Convert to cv::Mat or Eigen::Matrix
    
        // --- 2. Preprocess LiDAR Data (Placeholder) ---
        // Destagger, apply BeamUniformityCorrector using Ouster SDK C++ functions.
        // Apply CLAHE using OpenCV C++ functions, e.g.,
        // cv::Mat ref, nir; // Assume these are loaded
        // auto clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
        // clahe->apply(ref_raw, ref);
        // clahe->apply(nir_raw, nir);
    
        // --- 3. Prepare Tensor for Model ---
        cv::Mat nir; // Assume this is the preprocessed Near-IR image (64xW)
        cv::Mat input_blob;
        cv::cvtColor(nir, input_blob, cv::COLOR_GRAY2RGB); // Convert to 3-channel
        input_blob.convertTo(input_blob, CV_32F, 1.0 / 255.0); // Normalize
    
        torch::Tensor input_tensor = torch::from_blob(input_blob.data, {1, input_blob.rows, input_blob.cols, 3}, torch::kFloat32);
        input_tensor = input_tensor.permute({0, 3, 1, 2}); // HWC to CHW
        input_tensor = input_tensor.to(device_);
        
        // --- 4. Run Inference ---
        // The input format depends heavily on how the model was traced.
        // This assumes the `ModelWrapper` approach from the export script.
        // torch::IValue output = module_.forward({input_tensor.squeeze(0)}); 
        // auto detections_d2 = process_detections_d2(output);
        
        // --- 5. Motion Detection (Background Subtraction) ---
        // Use bg_subtractor_->apply(range_image, fg_mask);
        // Use cv::findContours or cv::connectedComponentsWithStats
        // to get motion detections.
        
        // --- 6. Fuse Detections ---
        // Implement the 'advanced_fuse_detections' logic in C++.
        
        // --- 7. Tracking (Placeholder) ---
        // DeepSort tracker;
        // auto tracks = tracker.update_tracks(fused_detections);
        
        // --- 8. Calculate Physics and Generate Output ---
        json output_json;
        // For each track, calculate centroid, transform to world coordinates (ECEF),
        // convert to LLH, calculate velocity, heading, etc.
        // Use Eigen for vector/matrix math and PROJ for coordinate transforms.
        // Populate a list of TrackedObject structs.
        
        // --- 9. Write JSON Output ---
        std::string out_path = output_dir + "/" + "output.json";
        std::ofstream o(out_path);
        o << std::setw(2) << output_json << std::endl;
        
        std::cout << "Finished processing " << xpc_path << std::endl;
    }
    
    std::vector<std::tuple<cv::Rect, float, int>> LidarProcessor::process_detections_d2(const torch::IValue& outputs) {
        // This function must parse the output of your TorchScript model.
        // The structure of 'outputs' depends entirely on your model's forward pass.
        // If it's a tensor of [x1, y1, x2, y2, score, class], you'd loop through it.
        std::vector<std::tuple<cv::Rect, float, int>> results;
    
        // Example assuming outputs.toTensor() works and has the shape [N, 6]
        // auto tensor = outputs.toTensor();
        // for (int i = 0; i < tensor.size(0); ++i) {
        //     int x1 = tensor[i][0].item<int>();
        //     // ... and so on
        //     results.emplace_back(cv::Rect(x1, y1, w, h), score, cls);
        // }
        return results;
    }
    
    

    main.cpp

    The entry point of the application, handling file watching and orchestrating the processing.

    C++

    #include "LidarProcessor.hpp"
    #include <iostream>
    #include <string>
    #include <vector>
    #include <filesystem>
    #include <chrono>
    #include <thread>
    #include <algorithm> // std::find
    
    namespace fs = std::filesystem;
    
    // A simple file watcher. For a robust solution, use platform-specific APIs
    // (inotify on Linux) or a dedicated library.
    void watch_directory(const std::string& path, LidarProcessor& processor, int start_angle, int end_angle) {
        std::cout << "Watching directory: " << path << std::endl;
        std::vector<std::string> processed_files;
    
        while (true) {
            for (const auto& entry : fs::directory_iterator(path)) {
                if (entry.path().extension() == ".xpc") {
                    std::string filename = entry.path().filename().string();
                    if (std::find(processed_files.begin(), processed_files.end(), filename) == processed_files.end()) {
                        std::cout << "New file detected: " << filename << std::endl;
                        try {
                            processor.process_file(entry.path().string(), "output_json", start_angle, end_angle);
                            processed_files.push_back(filename);
                        } catch (const std::exception& e) {
                            std::cerr << "Error processing " << filename << ": " << e.what() << std::endl;
                        }
                    }
                }
            }
            std::this_thread::sleep_for(std::chrono::seconds(2));
        }
    }
    
    int main() {
        const std::string MODEL_PATH = "models/detectron_model.pt";
        const std::string CONF_PATH = "data/xpc/frame00000000.conf";
        const std::string INS_PATH = "data/ins/ins_0001.csv";
        const std::string INPUT_DIR = "data/xpc";
        const std::string OUTPUT_DIR = "output_json";
    
        fs::create_directory(OUTPUT_DIR);
    
        // Get processing angle from user
        int start_angle = 0;
        int end_angle = 360;
        std::cout << "Enter LiDAR processing angle (e.g., '30-90' or '360'): ";
        std::string line;
        std::getline(std::cin, line);
        // Add parsing logic for the angle range here...
        
        try {
            LidarProcessor processor(MODEL_PATH, CONF_PATH, INS_PATH);
            
            // Process existing files first
            for (const auto& entry : fs::directory_iterator(INPUT_DIR)) {
                if (entry.path().extension() == ".xpc") {
                     processor.process_file(entry.path().string(), OUTPUT_DIR, start_angle, end_angle);
                }
            }
            
            // Start watching for new files
            watch_directory(INPUT_DIR, processor, start_angle, end_angle);
    
        } catch (const std::exception& e) {
            std::cerr << "A critical error occurred: " << e.what() << std::endl;
            return 1;
        }
    
        return 0;
    }
    

    in Daily blog
    Copyright © CTech