Starting the web UI...
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\bin\cudart64_110.dll
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll...
Loading gpt4-x-alpaca-13b-native-4bit-128g...
Loading model ...
Done.
Traceback (most recent call last):
File "C:\Users\hexad\Documents\oobabooga-windows\text-generation-webui\server.py", line 302, in <module>
shared.model, shared.tokenizer = load_model(shared.model_name)
File "C:\Users\hexad\Documents\oobabooga-windows\text-generation-webui\modules\models.py", line 102, in load_model
model = load_quantized(model_name)
File "C:\Users\hexad\Documents\oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 153, in load_quantized
model = model.to(torch.device('cuda:0'))
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 1888, in to
return super().to(*args, **kwargs)
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 844, in _apply
self._buffers[key] = fn(buf)
File "C:\Users\hexad\Documents\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 34.00 MiB (GPU 0; 6.00 GiB total capacity; 5.07 GiB already allocated; 0 bytes free; 5.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Press any key to continue . . .
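
Note: the OutOfMemoryError above reports 6.00 GiB total GPU capacity with 5.07 GiB already allocated while moving gpt4-x-alpaca-13b-native-4bit-128g to cuda:0. The error text itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF when reserved memory is much larger than allocated memory. A minimal sketch of that suggestion in Python, assuming the variable is set before PyTorch initializes its CUDA allocator and using 128 MiB purely as an illustrative value:

    import os

    # Assumption: this must be in the environment before torch initializes
    # CUDA, so set it before importing torch (or before launching server.py).
    # "128" is an illustrative split size, not a recommended value.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch  # CUDA allocator now picks up the setting on first use

Equivalently, the variable can be set in the Windows command prompt before starting the web UI (set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128). In this particular trace, reserved (5.25 GiB) and allocated (5.07 GiB) memory are close, so fragmentation may not be the limiting factor; on a 6 GiB card the model may simply need more VRAM than is available, in which case this setting alone is unlikely to resolve the error.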