
French regulator investigates Nvidia over unfair business practices – Computer – News

CUDA is NVIDIA's programming platform and language extension for its GPUs. It is not specifically designed for AI work; it is widely used for high-performance computing in all its forms.
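To give an idea of what that looks like, here is a minimal sketch of a general-purpose CUDA program (a plain vector addition, nothing AI-specific). The kernel and host calls below are standard CUDA runtime API; the grid/block sizes are just illustrative choices:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per array element: c[i] = a[i] + b[i]
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified memory keeps the example short (no explicit host/device copies)
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover n elements
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Nothing here knows about neural networks; it is just massively parallel number crunching, which is why the same platform serves simulation, signal processing and AI alike.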

What you typically see is people designing and training their models in packages like TensorFlow, which is essentially hardware-agnostic (every CPU and GPU manufacturer has its own TensorFlow backend). Before doing inference (i.e. running the trained model on new data), however, most users do some optimization for their target hardware. For NVIDIA, that's TensorRT, an abstraction layer on top of its CUDA cores and, more importantly, the Tensor cores that have been part of its GPUs since the Volta architecture.

For this purpose, NVIDIA has developed an inference framework called Triton Inference Server, which takes a large part of the heavy lifting off your hands.

This makes it relatively easy to convert and run models for maximum performance on NVIDIA hardware.
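As a rough illustration of how little you have to write yourself: Triton serves models from a repository directory, where each model gets a small protobuf text config. The model name, tensor names and shapes below are made up for the example; `tensorrt_plan` is the platform string Triton uses for a TensorRT engine:

```
# models/my_model/config.pbtxt  (hypothetical example)
name: "my_model"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "input_0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output_0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Drop the engine file next to this config and Triton handles batching, scheduling and the HTTP/gRPC serving endpoints for you.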

BTW: AMD has created its own alternative to the HPC language. It's called HIP and is part of ROCm. HIP is deliberately very similar to CUDA, and there's even a tool (hipify) to convert CUDA code to HIP code; HIP code can in turn be compiled to run on NVIDIA hardware as well.
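How similar is "very similar"? A CUDA vector-add kernel ports to HIP almost mechanically, which is exactly the kind of rename the hipify tools automate. A sketch (kernel body unchanged, only the runtime API prefix differs):

```cuda
#include <hip/hip_runtime.h>

// Identical to the CUDA version: same __global__, same built-in index variables
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// On the host side the translation is mostly cuda* -> hip*:
//   cudaMallocManaged(&a, bytes);   ->  hipMallocManaged(&a, bytes);
//   cudaDeviceSynchronize();        ->  hipDeviceSynchronize();
//   cudaFree(a);                    ->  hipFree(a);
// The kernel<<<grid, block>>>(...) launch syntax works unchanged under hipcc.
```

Because hipcc can target NVIDIA GPUs (where the hip* calls map back onto the CUDA runtime), the same HIP source builds for both vendors.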

Intel, in turn, has produced its own incompatible HPC language called DPC++, which builds on SYCL (and, through SYCL, on OpenCL). This language is not similar to CUDA, so CUDA-based code requires a substantial rewrite, although Intel's SYCLomatic tool can automate part of the migration.

BTW: ZLUDA is pretty much dead. The developer originally worked on the tool internally at AMD and was only allowed to publish it on GitHub because AMD was no longer interested in the project. Without interest from the GPU manufacturers, there is little development going on for ZLUDA.


No new code has been pushed to the repo in at least two months, and even that was just a minor update for Meshroom compatibility.

[Comment edited by CrazyJoe on 15 July 2024 17:31]