I'm using a ConvNet built using PyTorch for inference and getting a RuntimeError in the following line: `outputs = model(X_batch)`

The error is:

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
```

I put the model and tensor on the GPU and made a few checks to ensure that they are both on the GPU:

```python
def inference(model, test_dataloader, device='cpu'):
    print(torch.cuda.is_available())         # returns True
    print(next(model.parameters()).is_cuda)  # returns True
    for _, (X_batch, y_batch) in enumerate(tqdm(test_dataloader)):
        outputs = model(X_batch)             # Error raised here
        metrics.append(calc_metrics(outputs, y_batch))
    # This is a simplified version of the code here

zero_shot_metrics = inference(model, test_dataloader, device=device)
```

I'm using a single NVIDIA RTX A4000 GPU, and can confirm this with nvidia-smi. nvidia-smi shows the CUDA version is 12.0. I'm not sure if this mismatch is significant, but torch.cuda.is_available() seems to be working fine.

The CNN_MLP class is defined as follows:

```python
class CNN_MLP(...):
    """
    CNN-MLP with 1 Conv layer, 1 Max Pool layer, and 1 Linear layer.
    """
    def __init__(self, ..., stride=1, kernel_size=3, conv_out_size=64,
                 hidden_layer_sizes=..., dropout_rate=0.25):
        ...
        self.hidden_layer_sizes = hidden_layer_sizes
        self.embedding = torch.nn.Embedding(self.vocab_size, self.embed_size,
                                            padding_idx=self.pad_index)
        self.conv = torch.nn.Conv1d(q_len, conv_out_size, self.kernel_size, self.stride)
```
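Since `next(model.parameters()).is_cuda` returns `True` while the error reports `mat1` on the CPU, the likely cause is that the batches coming out of the DataLoader are never moved to the GPU inside `inference`: a DataLoader yields CPU tensors by default, and unlike the model, input tensors are not moved implicitly. A minimal sketch of the fix is below; the tiny linear model and tensor shapes are illustrative stand-ins for `CNN_MLP`, not the original code:

```python
import torch

def inference(model, test_dataloader, device="cpu"):
    """Run inference with every batch moved onto the model's device."""
    model.eval()
    outputs_all = []
    with torch.no_grad():
        for X_batch, y_batch in test_dataloader:
            # The DataLoader yields CPU tensors; .to(device) resolves
            # the cpu/cuda:0 mismatch reported for mat1
            X_batch = X_batch.to(device)
            y_batch = y_batch.to(device)
            outputs_all.append(model(X_batch))
    return outputs_all

# Demo with a linear layer standing in for CNN_MLP (hypothetical shapes)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 2).to(device)
data = torch.utils.data.TensorDataset(
    torch.randn(16, 8), torch.zeros(16, dtype=torch.long)
)
loader = torch.utils.data.DataLoader(data, batch_size=4)
outs = inference(model, loader, device=device)
print(len(outs), tuple(outs[0].shape))
```

Note that `.to(device)` on a tensor returns a new tensor rather than modifying it in place, so the result must be reassigned; forgetting the reassignment is another common way to hit this same error.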