Ray remote call to an actor: process hangs for 120 seconds

Three actors A, B, and C are deployed on different nodes, and both A and B call functions on C. Under high concurrency, after injecting a 100% packet-loss network fault on node A, the C actor process stopped responding entirely for about 120 s: C's own 10-second periodic log output stopped, and call requests from node B also failed. What happened to C? Is it that A's (the fault-injected node's) requests can no longer return normally, which causes the C actor to get stuck (something in the gRPC communication)? Any help from the experts here is appreciated, thanks.
[2025-05-25 14:22:10,067 W 615 651] core_worker.cc:3304: Failed to report streaming generator return fa2faac0f65ed635cdb2f738d8b853cf876138490100000002000000 to the caller. The yield'ed ObjectRef may not be usable.

In the logs below, roughly 53 seconds (14:21:17 → 14:22:10) elapsed between the task starting to execute and its return object being created:
[2025-05-25 14:21:17,351 D 615 651] core_worker.cc:3482: Received Handle Push Task 16c0f2615184bf48cdb2f738d8b853cf8761384901000000
[2025-05-25 14:21:17,352 D 615 33358] core_worker.cc:2915: Executing task, task info = Type=ACTOR_TASK, Language=PYTHON, Resources: {}, functi
[2025-05-25 14:22:10,085 D 615 33358] core_worker.cc:2870: Creating return object 16c0f2615184bf48cdb2f738d8b853cf876138490100000001000000
[2025-05-25 14:22:10,085 D 615 33358] core_worker.cc:3128: Sealing return object 16c0f2615184bf48cdb2f738d8b853cf876138490100000001000000
[2025-05-25 14:22:10,085 D 615 33358] core_worker.cc:3098: Finished executing task 16c0f2615184bf48cdb2f738d8b853cf8761384901000000, status=OK
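The suspected failure mode, one dead caller stalling every other task on the actor, can be modeled with a plain-Python analogy. This is a hypothetical sketch, not Ray's actual `core_worker` implementation: it assumes the actor executes tasks sequentially and replies to the caller synchronously, so a reply that blocks until a keepalive-style timeout delays everything queued behind it.

```python
import time

# Stand-in for grpc_client_keepalive_timeout_ms (scaled down from 120 s).
KEEPALIVE_TIMEOUT_S = 0.2

def send_reply(peer_alive: bool) -> None:
    """Simulate pushing a task result back to the caller.

    If the peer is dead (e.g. 100% packet loss), the send blocks
    until the keepalive-style timeout expires.
    """
    if not peer_alive:
        time.sleep(KEEPALIVE_TIMEOUT_S)

def actor_loop(tasks):
    """Run tasks sequentially; record when each one starts."""
    start_times = {}
    for name, peer_alive in tasks:
        start_times[name] = time.monotonic()
        send_reply(peer_alive)
    return start_times

t0 = time.monotonic()
# Task 1 replies to the fault-injected node A (dead link);
# task 2 is a healthy call from node B.
starts = actor_loop([("reply_to_A", False), ("call_from_B", True)])
delay = starts["call_from_B"] - t0
# B's task cannot even start until the full timeout is burned on A's link.
```

Under this model, B's request is delayed by exactly the time spent waiting on A's dead connection, which matches the pattern in the logs above.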

/// grpc keepalive timeout for client.
RAY_CONFIG(int64_t, grpc_client_keepalive_timeout_ms, 120000)
Problem located; in the end I had to figure it out myself. This config option is the cause: C, acting as the gRPC client, pushes the task result back to A, blocks there for 120 s, and all the other tasks in C wait behind it. Hope this helps others.
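If a 120 s stall is unacceptable, the window can likely be shortened: Ray's internal config values can generally be overridden via `RAY_`-prefixed environment variables set before the node processes start. Treat the exact variable name and its effect as an assumption to verify against your Ray version:

```shell
# Assumption: Ray reads RAY_<config_name> env vars at process start.
# Lower the gRPC client keepalive timeout from 120 s to 10 s; set this
# on every node before running `ray start` / `ray.init()`.
export RAY_grpc_client_keepalive_timeout_ms=10000
```

Note that a shorter timeout makes Ray declare connections dead sooner, which trades the long stall for a higher chance of tearing down connections that are merely slow.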