Resource allocation: how is fractional GPU allocation implemented?

A question for the community:
Ray supports fractional resource allocation, for example:
import os

import ray

@ray.remote(num_gpus=0.5)
class IOActor:
    def ping(self):
        print(f"CUDA_VISIBLE_DEVICES: {os.environ['CUDA_VISIBLE_DEVICES']}")

Two actors can then share the same GPU:

io_actor1 = IOActor.remote()
io_actor2 = IOActor.remote()
ray.get(io_actor1.ping.remote())
ray.get(io_actor2.ping.remote())

Output:

(IOActor pid=96328) CUDA_VISIBLE_DEVICES: 1
(IOActor pid=96329) CUDA_VISIBLE_DEVICES: 1
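For context on how such packing can work at all: fractional GPU support of this kind is, as far as I understand, scheduling-time bookkeeping. The scheduler tracks each physical GPU's remaining capacity as a float, places requests whose `num_gpus` fits on one GPU, and exposes that GPU via `CUDA_VISIBLE_DEVICES`; it does not by itself enforce memory or compute isolation. A minimal conceptual sketch of that bookkeeping (illustrative only, not Ray's actual implementation; all names are hypothetical):

```python
# Conceptual sketch of fractional-GPU bookkeeping (not Ray's real code).

class FractionalGpuScheduler:
    def __init__(self, num_gpus):
        # Remaining capacity per physical GPU, tracked as a float in [0, 1].
        self.remaining = {gpu_id: 1.0 for gpu_id in range(num_gpus)}

    def allocate(self, num_gpus_requested):
        """Return the GPU id to expose via CUDA_VISIBLE_DEVICES, or None."""
        for gpu_id, free in self.remaining.items():
            # Small epsilon guards against floating-point rounding.
            if free + 1e-9 >= num_gpus_requested:
                self.remaining[gpu_id] -= num_gpus_requested
                return gpu_id
        return None  # no single GPU has enough remaining capacity

    def release(self, gpu_id, num_gpus_requested):
        self.remaining[gpu_id] += num_gpus_requested


sched = FractionalGpuScheduler(num_gpus=2)
a = sched.allocate(0.5)  # first actor requests 0.5 GPU
b = sched.allocate(0.5)  # second actor packs onto the same GPU
print(a, b)              # both requests land on GPU 0
```

In this model, isolation is cooperative: both actors see the same device and must limit their own memory/compute use, which is exactly why the isolation question below matters.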

My questions are:

  1. How is fractional GPU allocation implemented, and how is resource isolation between the sharing actors achieved?
  2. What are the differences or connections with NVIDIA's GPU sharing (time-slicing) and MIG partitioning?