got prompt
2024-03-30 00:06:26.8041952 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\ProgramData\anaconda3\envs\comfy\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
# Workaround: don't request TensorrtExecutionProvider at all.
# Build the session with an explicit provider list so onnxruntime
# never tries to load the TensorRT DLL that failed with error 126.
import onnxruntime as ort
from onnxruntime import InferenceSession

# model = InferenceSession(name, providers=ort.get_available_providers())
providers = [
    ('CUDAExecutionProvider', {
        'device_id': 0,
    }),
    'CPUExecutionProvider',
]
model = InferenceSession(name, providers=providers)
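The same provider choice can be made dynamically instead of hard-coding it, so the script also runs on machines without CUDA. This is a minimal sketch; `pick_providers` is a hypothetical helper (not part of onnxruntime), and in real use you would pass it the result of `ort.get_available_providers()`.

```python
def pick_providers(available):
    """Prefer CUDA on device 0 when the onnxruntime build supports it,
    otherwise fall back to CPU. TensorRT is deliberately skipped here,
    since its provider DLL failed to load (LoadLibrary error 126)."""
    if 'CUDAExecutionProvider' in available:
        return [('CUDAExecutionProvider', {'device_id': 0}),
                'CPUExecutionProvider']
    return ['CPUExecutionProvider']

# Typical call site (assumes onnxruntime is installed):
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   model = ort.InferenceSession(name, providers=providers)
```

After constructing the session, `model.get_providers()` reports which providers were actually registered, which is a quick way to confirm the CUDA fallback worked.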