
Model training time increases more than 10x in some cases #2412

@s1030512149

Description

Environment

  • Pythonnet version: 3.0.3

  • Python version: 3.10.14

  • Environment Versions
    PyTorch version: 2.3.0+cpu
    Is debug build: False
    Python platform: Windows-10-10.0.22631-SP0
    Is CUDA available: False
    CUDA runtime version: 12.4.99
    CUDA_MODULE_LOADING set to: N/A
    GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
    Nvidia driver version: 556.12
    cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin\cudnn_ops_train64_8.dll
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    Is XNNPACK available: True

    Versions of relevant libraries:
    [pip3] numpy==1.26.4
    [pip3] torch==2.3.0
    [pip3] torchaudio==2.3.0
    [pip3] torchvision==0.18.0
    [conda] blas 1.0 mkl
    [conda] mkl 2021.4.0 pypi_0 pypi
    [conda] mkl-service 2.4.0 py310h2bbff1b_1
    [conda] mkl_fft 1.3.8 py310h2bbff1b_0
    [conda] mkl_random 1.2.4 py310h59b6b97_0
    [conda] numpy 1.26.4 py310h055cbcc_0
    [conda] numpy-base 1.26.4 py310h65a83cf_0
    [conda] torch 2.3.0 pypi_0 pypi
    [conda] torchaudio 2.3.0 pypi_0 pypi
    [conda] torchvision 0.18.0 pypi_0 pypi

  • Operating System:
    OS: Microsoft Windows 11
    GCC version: Could not collect
    Clang version: Could not collect
    CMake version: version 3.29.0-rc3
    Libc version: N/A

  • .NET Runtime:
    .NET framework 4.8

import torch

def gaussian(x, center, amp, variance, offset=None):
    # Single Gaussian component; `offset` is accepted but currently unused.
    return amp * torch.exp(-(x - center) ** 2 / variance)

def multi_gaussian(x, params):
    # Sum of Gaussians; params is a flat tensor of (center, amp, variance) triples.
    y = torch.zeros_like(x)
    for i in range(0, params.shape[0], 3):
        g = gaussian(x, params[i], params[i + 1], params[i + 2])
        if torch.any(torch.isnan(g)) or torch.any(torch.isinf(g)):
            continue
        y = y + g
    return y

# initial_para, x, y, lr, and max_iter are defined elsewhere in main.py.
para = torch.Tensor(initial_para)
x, y = torch.tensor(x), torch.relu(torch.tensor(y))
para.requires_grad = True
loss = torch.nn.MSELoss()
optimizer = torch.optim.Adam([para], lr=lr)
for i in range(max_iter):
    y_fit = multi_gaussian(x, para)
    l = loss(y_fit, y)
    try:
        if torch.isnan(l) or torch.isinf(l):
            print("l is nan or inf")
            break
        l.backward()
        optimizer.step()
        optimizer.zero_grad()
    except Exception:
        break

Details

Hi, I'm using PyTorch to fit a Gaussian mixture model. It works perfectly when I run main.py directly: my data has a length of 1024, and the fit takes less than 0.1 s. However, a strange issue occurs when I call the same script through the Python.NET package from C#. The Python environment initializes and main.py runs successfully, but the model fit now takes nearly 1.8 s, about 18 times slower than before.
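
For reference, a small harness like the sketch below is roughly how such timings can be measured; it is only a sketch: fit_gaussians is a hypothetical wrapper around the Adam loop shown earlier, the data is synthetic (the same 1024-point length as the real signal), and multi_gaussian is the function defined above. Printing torch.get_num_threads() in both hosts may also be informative, since thread configuration is one thing that can differ between a standalone interpreter and an embedded one.

import time
import torch

def fit_gaussians(x, y, initial_para, lr=0.05, max_iter=200):
    # Hypothetical wrapper around the Adam fitting loop from main.py;
    # lr and max_iter defaults are placeholders, not the values actually used.
    para = torch.tensor(initial_para, requires_grad=True)
    optimizer = torch.optim.Adam([para], lr=lr)
    loss = torch.nn.MSELoss()
    for _ in range(max_iter):
        l = loss(multi_gaussian(x, para), y)
        if torch.isnan(l) or torch.isinf(l):
            break
        optimizer.zero_grad()
        l.backward()
        optimizer.step()
    return para.detach()

print("intra-op threads:", torch.get_num_threads())  # compare standalone vs embedded

x = torch.linspace(0.0, 10.0, 1024)  # synthetic stand-in, same length as the real data
y = torch.relu(multi_gaussian(x, torch.tensor([5.0, 1.0, 0.5])))

start = time.perf_counter()
fit_gaussians(x, y, [4.5, 0.8, 0.7])
print(f"fit took {time.perf_counter() - start:.3f} s")  # ~0.1 s direct vs ~1.8 s via Python.NET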
When I used another fitting method, scipy.optimize.curve_fit, it took almost the same time (0.03 s) whether main.py was run directly or called from C#. I'm confused by the difference, and I want to know if there is a way to solve the problem.
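
For comparison, the scipy path presumably looks something like the sketch below; the NumPy model function here is an assumption mirroring the torch multi_gaussian above, not the exact code used.

import numpy as np
from scipy.optimize import curve_fit

def multi_gaussian_np(x, *params):
    # Assumed NumPy mirror of the torch multi_gaussian above; params is a
    # flat sequence of (center, amp, variance) triples.
    y = np.zeros_like(x)
    for i in range(0, len(params), 3):
        center, amp, variance = params[i], params[i + 1], params[i + 2]
        y = y + amp * np.exp(-(x - center) ** 2 / variance)
    return y

x_data = np.linspace(0.0, 10.0, 1024)  # synthetic stand-in for the real signal
y_data = multi_gaussian_np(x_data, 5.0, 1.0, 0.5) \
    + 0.01 * np.random.default_rng(0).standard_normal(x_data.size)

# p0 fixes the number of free parameters for the variadic model function.
popt, pcov = curve_fit(multi_gaussian_np, x_data, y_data, p0=[4.5, 0.8, 0.7])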

(By the way, my program targets the .NET Framework 4.8, so I can't use TorchSharp in C#, which requires .NET 6.0.)

Thanks a lot!
