Is it possible to redesign GPGPUs to handle I/O calls?
by Manili on Oct 20, 2017
Manili
Posts: 12
Joined: May 2, 2014
Last seen: Aug 26, 2019
Hi all,

Considering the advances in manufacturing GPGPUs, is it possible to redesign them to handle basic I/O calls?
I have seen some articles about GPU-based filesystems and about socket programming using the power of GPUs. AFAIK, none of these articles talked about any changes to the GPU architecture, but I think doing something like this needs both HW and SW support.

BTW, do you guys agree that it is possible to use GPUs (no matter how the architecture would have to change) for handling I/O calls?

Thanks
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by aikijw on Oct 21, 2017
aikijw
Posts: 76
Joined: Oct 21, 2011
Last seen: Jul 8, 2023
(1) The primary focus of both this site and the forum is open source FPGA cores... Not GPUs... If you'd like to talk about implementing a GPU, using an FPGA... That's probably appropriate...

(2) I'm not sure you understand what a GPU is/does, nor is it clear you understand what an "I/O call" is... Is this some kind of take home test question?
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by Manili on Oct 21, 2017
Manili
Posts: 12
Joined: May 2, 2014
Last seen: Aug 26, 2019
aikijw wrote:
(2) I'm not sure you understand what a GPU is/does, nor is it clear you understand what an "I/O call" is... Is this some kind of take home test question?

No bro, this is not a "HOME TEST QUESTION". I asked because an idea came to me the other day for implementing a new type of GPGPU on an FPGA, using open-source projects like MIAOW.
The idea became more robust when I read the following articles:
1. https://doi.org/10.1145/2451116.2451169
2. https://doi.org/10.1145/2884045.2884053
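
As far as I can tell, without hardware changes a GPU kernel cannot issue a real system call, so those papers have to bounce every request through the CPU. Here is my own rough sketch of that pattern (assumed, not code from the papers), just to show what "an I/O call from the GPU" means today:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch only: the kernel posts a request into pinned host memory and
// spins; a CPU thread polls the same flag, services the request, and
// writes the answer back. The Request struct is made up for this example.
struct Request {
    volatile int state;   // 0 = idle, 1 = posted by GPU, 2 = answered by CPU
    int value;            // stand-in for the data a real I/O call would return
};

__global__ void kernelMakesIOCall(Request* req, int* out) {
    req->state = 1;                  // the "I/O call": post the request
    __threadfence_system();          // make the write visible to the CPU
    while (req->state != 2) { }      // block until the CPU has serviced it
    *out = req->value;
}

int main() {
    Request* req;
    cudaHostAlloc((void**)&req, sizeof(Request), cudaHostAllocMapped);
    req->state = 0;

    int* d_out;
    cudaMalloc((void**)&d_out, sizeof(int));

    kernelMakesIOCall<<<1, 1>>>(req, d_out);   // kernel runs asynchronously

    while (req->state != 1) { }     // CPU-side "I/O service" loop
    req->value = 42;                // pretend a real read() happened here
    req->state = 2;

    cudaDeviceSynchronize();
    int out = 0;
    cudaMemcpy(&out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("GPU received %d from its 'I/O call'\n", out);

    cudaFree(d_out);
    cudaFreeHost(req);
    return 0;
}
```

A redesigned GPU would presumably replace that spin-and-poll round trip with something the hardware supports directly, which is exactly what I am asking about.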

Best regards,

P.S. I can't understand why did you blow me away just like that! I was just asking a newbie's question, which I did not find the answer anywhere else (even in https://quora.com). It would be my pleasure if you help me find anywhere else to ask this silly question.
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by rherveille on Oct 23, 2017
rherveille
Posts: 33
Joined: Sep 25, 2001
Last seen: May 31, 2018
aikijw wrote:
(1) The primary focus of both this site and the forum is open source FPGA cores... Not GPUs... If you'd like to talk about implementing a GPU, using an FPGA... That's probably appropriate...

That's the most stupid reply I've ever read.
This site is about open-source IP and whatever relates to that. Just because you can't think any further than FPGAs doesn't mean that's the intent of the site.
I find the original poster's question quite intriguing. Changing the GPU architecture to provide I/O features is related to ongoing research, in the sense that modern GPUs are really multi-core processing engines, typically arranged in columns or arrays. Nothing prevents you from replacing one of those processing engines with an I/O engine.

Richard
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by aikijw on Oct 23, 2017
aikijw
Posts: 76
Joined: Oct 21, 2011
Last seen: Jul 8, 2023
Richard,

ROTFL... Your description of GPU architectures is preciously sophomoric...

Feel better now?

The focus of this site is on "IP Cores"... Not a thing to do with GPUs, unless, as I clearly stated, we are talking about a GPU core... The "intent of the site" is clearly defined in the FAQ... I'll grant you that I should have included ASICs, but perhaps you'd like to enlighten us with your enhanced application regime for "IP cores"... Other than FPGAs and ASICs, with the exception of "Richard's imagination", IP cores, in my opinion, have little utility...

I wonder (I'm "waxing rhetorical", BTW, lest you take me literally... I'm "wondering" about a myriad of things at the moment... None of them have anything to do with you... "Richard"...) which of the two of us actually has experience with both FPGA and GPU development? Modern GPUs are not "multi-core processing engines"... [SMH] When you bother to develop something other than a Wikipedic understanding of any of these topics, perhaps I'll care about what you think... Run along now... Go sort out why preempting a linear processor to service an "I/O call" is not such a great idea... (I tried to work the phrase "more most stupid" into my response, but... coffee awaits!)

Good day, Richard...



RE: Is it possible to redesign GPGPUs to handle I/O calls?
by inflector on Oct 23, 2017
inflector
Posts: 6
Joined: Aug 28, 2017
Last seen: May 31, 2018
The only advantage of a GPU versus an FPGA or ASIC is that GPUs are cheap and readily available, because their market, and therefore their production volume, is huge. That they have relatively few general-purpose cores is a side effect of the need to control a large number of parallel arithmetic units (shaders). GPUs and highly parallel CPUs are most often bottlenecked by memory bandwidth, which is one to two orders of magnitude faster than I/O coming from a fast SSD.

The fact that some people are using GPUs to do research on parallel I/O is due to their being cheap and readily available, not because they are ideal for the purpose.

If a problem is I/O bound, then having a lot of processors working on it won't speed it up unless you can fill memory faster than the GPU (or parallel-processing ASIC or FPGA) can consume it. Building parallel hardware for I/O handling would be useful for improving I/O speed, but it would not improve processing for anything that is not I/O bound. GPUs are improving in speed because they marry improvements in memory bandwidth with concomitant improvements in parallel processing, sufficient to take advantage of the increased memory bandwidth. There's no point in adding new processors if you can't feed them with work.
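
To put rough numbers on that (round figures assumed for illustration, not measurements from any particular card):

```cuda
#include <cstdio>

int main() {
    // Assumed round numbers for a 2017-era system, illustration only:
    const double gpu_bw = 500e9;  // bytes/s a GPU streams from its own DRAM
    const double ssd_bw = 3e9;    // bytes/s a fast NVMe SSD can deliver
    const double bytes  = 16e9;   // one 16 GB working set

    printf("fill it from the SSD: %.1f s\n", bytes / ssd_bw);  // ~5.3 s
    printf("GPU consumes it:      %.3f s\n", bytes / gpu_bw);  // ~0.032 s
    // Adding more GPU cores only shrinks the second number; the job
    // stays I/O bound until the first one comes down.
    return 0;
}
```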

So I found aikijw's original comment to be on point and not excessively harsh. I, too, did a WTF? when I initially read Manili's question, and I found aikijw's response very reasonable considering the lack of understanding implied by the original question.

Richard, you escalated the discussion to an overt ad-hominem attack. There is no need for this.

RE: Is it possible to redesign GPGPUs to handle I/O calls?
by tbernath on Oct 23, 2017
tbernath
Posts: 4
Joined: Jun 9, 2008
Last seen: Feb 26, 2023
inflector wrote:
The only advantage of a GPU versus an FPGA or ASIC is that they are cheap and readily available...

Well, having implemented a joint FPGA/GPU network application project a few years ago, I respectfully, and partially, disagree. GPUs' main advantage is their crazily scalable number of cores and their massive memory throughput. GPUs *are* limited for communication outside their own space, but there are ways to significantly mitigate that. The real challenge is inverting your problem from a thousand (CPU-like) threads invoking random I/O functions to thousands of contexts (data values) invoking a single operation. As long as you are trying to make a GPU act like a CPU, it won't compete. We leveraged the FPGA to 'groom' the I/O to be 'GPU friendly', and suddenly it's possible to throw 20,000+ cores at an application. This opens the door to 40/100Gb applications, where CPUs can't compete without SR-IOV or similar.
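
To make the inversion concrete, here is a toy CUDA sketch (the Packet record and the checksum kernel are placeholders I made up, not our production code): a groomed batch lands in GPU memory and one kernel applies a single operation across every packet, instead of each thread making its own I/O call:

```cuda
#include <cuda_runtime.h>

// Hypothetical fixed-size record; in our setup the FPGA DMAs a groomed
// batch of these straight into GPU memory.
struct Packet {
    unsigned      len;
    unsigned char payload[60];
};

// One thread per packet: a single operation over thousands of contexts,
// instead of thousands of CPU-like threads invoking random I/O functions.
__global__ void checksumPackets(const Packet* in, unsigned* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned sum = 0;
    for (unsigned b = 0; b < in[i].len && b < sizeof(in[i].payload); ++b)
        sum += in[i].payload[b];
    out[i] = sum;
}

int main() {
    const int N = 1 << 16;            // 65,536 packets per batch
    Packet*   d_in;
    unsigned* d_out;
    cudaMalloc((void**)&d_in,  N * sizeof(Packet));
    cudaMalloc((void**)&d_out, N * sizeof(unsigned));
    // ... FPGA (or host DMA) fills d_in with a groomed batch here ...
    checksumPackets<<<(N + 255) / 256, 256>>>(d_in, d_out, N);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

With a batch size in the tens of thousands, every one of those 20,000+ cores has a context to work on.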

You can PM me at: 'D-1luz9dvo@maildrop.cc'

T
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by olof on Oct 23, 2017
olof
Posts: 218
Joined: Feb 10, 2010
Last seen: Dec 17, 2018
Hi,

As Richard and others have said, it's an interesting topic. We already have DSP processors that handle I/O in embedded systems, and a GPU isn't that different. You might want to check out Nyuzi (https://www.librecores.org/jbush/nyuziprocessor). That one should handle some I/O as well.

//Olof
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by rherveille on Oct 24, 2017
rherveille
Posts: 33
Joined: Sep 25, 2001
Last seen: May 31, 2018

inflector wrote:
Richard, you escalated the discussion to an overt ad-hominem attack. There is no need for this.


I did not attack the person; at least, I didn't intend to. Unlike the response that followed. I only said the reply was stupid, and I stand by that. I don't see why working with/on/extending GPUs can't be discussed on OpenCores. Minion cores are used nowadays to implement I/O functions. That overlaps software and hardware, so why can't it be discussed here? Shutting down the original poster isn't the right approach. The question was valid. This site is very useful for learning new topics, so asking questions that others may view as silly is part of that. Getting shut down immediately doesn't invite further questions. But hey, if that's where this site is going, fine.

Richard
RE: Is it possible to redesign GPGPUs to handle I/O calls?
by aikijw on Oct 24, 2017
aikijw
Posts: 76
Joined: Oct 21, 2011
Last seen: Jul 8, 2023
Richard,

I think you need to review your post. You did issue a personal attack...

My original response, while terse, in no way justified your hyperbolic reply, including the rather condescending implication that I lack "imagination". This last response is also pretty over the top...

It's not clear to me why you are even involved, but I apologize for offending you... This will be my last post on this topic...

Best Regards,

/jw