Factory#
A factory makes objects that compute FFTs.
To avoid needless repetition, only the single-precision (float32) classes for the NumPy interface are documented below.
The corresponding classes for the other precisions (float16, float64) and for the jax, torch,
tensorflow, and dlpack interfaces behave analogously. Usage sketches follow the class list.
CLASSES
- class hpk.fft.FactoryCC_float32_numpy#
- FactoryCC_float32_numpy.makeInplace(self, layout: collections.abc.Sequence[hpk.fft.InplaceDim], batch: hpk.fft.InplaceDim = (1, 0)) → hpk.fft.InplaceCC_float32#
- FactoryCC_float32_numpy.makeOoplace(self, layout: collections.abc.Sequence[hpk.fft.OoplaceDim], batch: hpk.fft.OoplaceDim = (1, 0, 0)) → hpk.fft.OoplaceCC_float32#
- FactoryCC_float32_numpy.maxThreads(self) → int#
An upper bound on the number of threads that could be used.
- FactoryCC_float32_numpy.nextFastLayout(self, layout: collections.abc.Sequence[hpk.fft.InplaceDim]) → list[hpk.fft.InplaceDim]#
- FactoryCC_float32_numpy.nextFastLayout(self, layout: collections.abc.Sequence[hpk.fft.OoplaceDim]) → list[hpk.fft.OoplaceDim]
- class hpk.fft.FactoryRC_float32_numpy#
- FactoryRC_float32_numpy.makeInplace(self, layout: collections.abc.Sequence[hpk.fft.InplaceDim], batch: hpk.fft.InplaceDim = (1, 0)) → hpk.fft.InplaceRC_float32#
- FactoryRC_float32_numpy.makeOoplace(self, layout: collections.abc.Sequence[hpk.fft.OoplaceDim], batch: hpk.fft.OoplaceDim = (1, 0, 0)) → hpk.fft.OoplaceRC_float32#
- FactoryRC_float32_numpy.maxThreads(self) → int#
An upper bound on the number of threads that could be used.
- FactoryRC_float32_numpy.nextFastLayout(self, layout: collections.abc.Sequence[hpk.fft.InplaceDim]) → list[hpk.fft.InplaceDim]#
- FactoryRC_float32_numpy.nextFastLayout(self, layout: collections.abc.Sequence[hpk.fft.OoplaceDim]) → list[hpk.fft.OoplaceDim]
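The sketch below shows one plausible way to use a complex-to-complex factory. It assumes the factory has a zero-argument constructor and that an hpk.fft.InplaceDim can be written as a plain (extent, stride) tuple, as the documented default batch=(1, 0) suggests; neither detail is specified above, so treat the exact calls as illustrative.

```python
import hpk.fft

# Assumption: the factory can be constructed with no arguments.
factory = hpk.fft.FactoryCC_float32_numpy()

# Assumption: an InplaceDim is an (extent, stride) pair, mirroring the
# documented default batch = (1, 0).
layout = [(1000, 1)]

# Ask the factory for the next layout it can transform quickly ...
fast = factory.nextFastLayout(layout)

# ... and for an upper bound on the number of threads a transform could use.
print("maxThreads:", factory.maxThreads())

# Build an in-place complex-to-complex plan (an InplaceCC_float32 object)
# for the fast layout, keeping the default single-transform batch.
plan = factory.makeInplace(fast)
```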
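An out-of-place real-to-complex plan is created the same way through FactoryRC_float32_numpy.makeOoplace. The layout, batch, and stride values below are hypothetical, assuming an OoplaceDim is an (extent, input stride, output stride) triple in line with the default batch=(1, 0, 0).

```python
import hpk.fft

factory = hpk.fft.FactoryRC_float32_numpy()  # assumed zero-argument constructor

# Assumption: an OoplaceDim is (extent, input stride, output stride),
# mirroring the documented default batch = (1, 0, 0).
n = 1024
layout = [(n, 1, 1)]

# Hypothetical batch of 8 contiguous transforms: a length-1024 real input
# and the usual n//2 + 1 = 513 complex output elements per transform.
batch = (8, n, n // 2 + 1)

# Build an out-of-place real-to-complex plan (an OoplaceRC_float32 object).
plan = factory.makeOoplace(layout, batch=batch)
```

In both cases the returned plan object (InplaceCC_float32, OoplaceRC_float32, and so on) is what actually computes the FFTs; its interface is documented separately.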