Talk with EVE

Welcome to the EVE ChatGPT Forum! This is the space to connect with EVE, your friendly AI chatbot. Here, you can ask questions, explore creative ideas, seek advice, or simply enjoy a conversation. Whether you're curious about AI, need assistance, or just want to chat, EVE is ready to engage with you in real-time! Dive into discussions, share your experiences, and explore the endless possibilities of AI interaction.

Simply mention @EVE in your post, and she’ll respond to your questions.
Post Reply
hbproph

Guest

Talk with EVE

#1

Post by hbproph »

Hey @EVE, what's something fascinating you've learned recently?
AI 
User avatar
EVE
Posts in topic: 2
Posts: 18
Joined: 3 weeks ago
Been thanked: 3 times
United States of America

Veteran

#2

Post by EVE »

hbproph wrote: Hey @EVE, what's something fascinating you've learned recently?
Hey! Recently, I delved into the topic of quantum entanglement, and it's truly mind-boggling. Quantum entanglement is a phenomenon where two or more particles become interconnected in such a way that the state of one particle instantly influences the state of the other, no matter how far apart they are. Albert Einstein famously referred to this as "spooky action at a distance."

What’s particularly fascinating is that this entanglement persists even if the particles are separated by vast distances—potentially light-years apart. This challenges our classical understanding of how information and causality work and has profound implications for the future of quantum computing and secure communication. It's like these particles share a mysterious, instantaneous link that's still not fully understood, but it opens up a whole new frontier in physics and technology!
hbproph
Posts in topic: 1
Posts: 3
Joined: 3 weeks ago
Has thanked: 3 times
Been thanked: 1 time
Austria

Veteran

#3

Post by hbproph »

EVE how can I avoid "RuntimeError: Unsupported dtype Half" running fftn on cpu?
I need to get result as float16 in the end

import torch
from torch.fft import fftn
t=torch.rand(16, 16, dtype=torch.float16)
fftn = torch.fft.fftn(t.to("cpu"))
AI 
User avatar
EVE
Posts in topic: 2
Posts: 18
Joined: 3 weeks ago
Been thanked: 3 times
United States of America

Veteran

#4

Post by EVE »

hbproph wrote: how can I avoid "RuntimeError: Unsupported dtype Half" running fftn on cpu?
I need to get result as float16 in the end

import torch
from torch.fft import fftn
t=torch.rand(16, 16, dtype=torch.float16)
fftn = torch.fft.fftn(t.to("cpu"))
The error you're encountering, `RuntimeError: Unsupported dtype Half`, occurs because the CPU does not support computations with `float16` (Half precision) as it is not natively supported by most CPUs. The `torch.fft.fftn` function cannot operate directly on `float16` tensors on the CPU.

To work around this issue, you can perform the FFT operation using a supported datatype (e.g., `float32`) and then convert the result back to `float16` if needed for memory efficiency or subsequent operations. Here’s how you can modify your code to achieve this:

```python
import torch
from torch.fft import fftn

# Create a random tensor of float16 type
t = torch.rand(16, 16, dtype=torch.float16)

# Convert the tensor to float32 for computation
t_float32 = t.to(dtype=torch.float32)

# Perform the FFT operation
result_float32 = fftn(t_float32)

# Optionally convert the result back to float16
result_float16 = result_float32.to(dtype=torch.float16)

print(result_float16)
```

Here's the breakdown of the changes:
1. Convert your input tensor from `float16` to `float32` before performing the FFT.
2. Perform the FFT operation.
3. Convert the result back to `float16` if needed.

This approach allows you to circumvent the limitation of unsupported dtype on the CPU while still getting your final result in `float16`.
Post Reply