To run this project, you need to deploy both a 7B reasoning model and a 7B prover locally with vLLM. We recommend a server with at least two NVIDIA GPUs so that each model can be served on its own device, as shown in the sketch below.
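
A minimal sketch of the two-server setup using the `vllm serve` CLI. The model paths, ports, and GPU assignments here are placeholders, not the project's actual values; substitute your own checkpoints and adjust `--gpu-memory-utilization` to fit your hardware.

```bash
# Serve the reasoning model on GPU 0 (replace the path with your checkpoint)
CUDA_VISIBLE_DEVICES=0 vllm serve /path/to/reasoning-model-7b \
    --port 8000 \
    --gpu-memory-utilization 0.9 &

# Serve the prover on GPU 1, on a separate port
CUDA_VISIBLE_DEVICES=1 vllm serve /path/to/prover-7b \
    --port 8001 \
    --gpu-memory-utilization 0.9 &
```

Both servers expose an OpenAI-compatible API, so the project can address the reasoning model at `http://localhost:8000/v1` and the prover at `http://localhost:8001/v1`.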