Guys, settle down 🙂
I know we would all like this, but stop and think for a moment about the complexity of using 2 Kinect devices. How would you even know which one is accurate at any given time? The skeleton the device feeds to the software only supports a user facing the sensor, and when it doesn’t know where a joint is, it pretty much makes it up.
Sit back, breathe, and realise that this may not even be possible, and if it is possible, it won’t be easy. I think some people are misreading what Greg said earlier about his experiments. “Promising” probably just means he can make 2 Kinects work with 2 different feeds. Combining them in a meaningful way, without needing a supercomputer to temporally process previous skeleton positions and guess the current bones from past movement using AI, well, that may be another thing entirely.
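Just to illustrate the “which one do you trust” problem: the simplest fusion approach would be a per-joint confidence-weighted average of the two skeletons. This is a hypothetical sketch (none of these names come from any real Kinect SDK), and it deliberately ignores the hard parts like calibrating the two sensors into one coordinate space and the temporal smoothing mentioned above:

```python
# Hypothetical sketch: fusing the same joint from two Kinect-style skeleton
# feeds by per-joint tracking confidence. All names are illustrative, not
# any real SDK's API. Assumes both sensors already share a coordinate space.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float
    z: float
    confidence: float  # 0.0 = inferred/made up .. 1.0 = solidly tracked

def fuse_joints(a: Joint, b: Joint) -> Joint:
    """Confidence-weighted average of one joint as seen by two sensors.

    When one sensor is guessing (low confidence) and the other is tracking,
    the tracked estimate dominates. When both are guessing, the result is
    still a guess, so the fused confidence stays low.
    """
    total = a.confidence + b.confidence
    if total == 0.0:
        # Neither sensor has a usable estimate; fall back to the midpoint.
        return Joint((a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2, 0.0)
    wa, wb = a.confidence / total, b.confidence / total
    return Joint(
        wa * a.x + wb * b.x,
        wa * a.y + wb * b.y,
        wa * a.z + wb * b.z,
        max(a.confidence, b.confidence),
    )

# One sensor tracking an elbow well, the other mostly guessing:
tracked = Joint(0.10, 1.20, 2.00, confidence=0.9)
guessed = Joint(0.50, 1.00, 2.40, confidence=0.1)
fused = fuse_joints(tracked, guessed)
print(round(fused.x, 2), round(fused.y, 2), round(fused.z, 2))  # → 0.14 1.18 2.04
```

Even this toy version shows the catch: the fused result is only as good as the confidence values the devices report, and since the skeleton pipeline invents joints it can’t see, those values can’t always be trusted either.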
From a coding point of view, I can completely understand why he would choose to do PSVR first. It’s useless to me, but I get it.