Typically, Monte-Carlo Tree Search (MCTS) is the go-to "solution" for problems with large branching factors. I understand that "vanilla" MCTS may still perform unsatisfactorily, but there is a plethora of extensions and enhancements available.
I don't have experience with the specific game you mentioned (Connect6), but from a quick look at how the game works, I expect the search tree will contain a huge number of transpositions (positions that are identical but can be reached through multiple different paths in the tree). These will be especially common if you treat placing one stone as a single ply: every "combined move" (placing two stones in sequence) can be reached in two different ways, simply by swapping the order in which the player places them. There has been research on using Transposition Tables with MCTS, so that may be a promising direction to look into.
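To make the transposition idea concrete, here is a minimal sketch of Zobrist hashing, the standard way of keying a transposition table. The board size and the `(player, cell)` encoding are assumptions for illustration, not something from Connect6 tooling; the point is that XOR-combining per-stone keys is order-independent, so both orderings of a combined move map to the same table entry.

```python
# Hedged sketch: a Zobrist-style position hash for a transposition table.
# BOARD_SIZE and the (player, cell) stone encoding are illustrative assumptions.
import random

BOARD_SIZE = 19
NUM_PLAYERS = 2

random.seed(0)
# One random 64-bit key per (player, cell) pair: standard Zobrist hashing.
ZOBRIST = [[random.getrandbits(64) for _ in range(BOARD_SIZE * BOARD_SIZE)]
           for _ in range(NUM_PLAYERS)]

def position_hash(stones):
    """XOR-combine the keys of all (player, cell) stones on the board.

    XOR is commutative, so the two placement orders of a "combined move"
    produce the same hash -- exactly the transpositions described above.
    """
    h = 0
    for player, cell in stones:
        h ^= ZOBRIST[player][cell]
    return h

# The same position reached via either placement order hashes identically:
assert position_hash([(0, 42), (0, 97)]) == position_hash([(0, 97), (0, 42)])
```

With such a hash, the two search paths that differ only in placement order can share one node's statistics instead of being explored twice.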
I also suspect there will be great value in Deep (Reinforcement) Learning approaches. On a large board, many moves will be "absurd" and could easily be dismissed altogether by a learned policy (e.g., placing stones in a far corner of the board, away from all the "action"). Vanilla MCTS, without such extensions, cannot recognize and dismiss these absurd moves, and will play them far too often (in the Play-Out phase, but also in the Selection phase due to the high branching factor). The most obvious source of inspiration here would be AlphaGo Zero.
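As a rough illustration of how a learned policy prunes the search, here is a sketch of AlphaGo Zero-style PUCT selection. The `priors` dict is a placeholder standing in for a neural network's policy output, and the data layout of `child_stats` is my own assumption; the mechanism to note is that moves with near-zero prior get almost no exploration bonus, so they are rarely visited even with a huge branching factor.

```python
# Hedged sketch of PUCT selection with a policy prior (AlphaGo Zero style).
# `priors` stands in for a policy network's output; the data layout is assumed.
import math

def puct_select(node_visits, child_stats, priors, c_puct=1.5):
    """Pick the child action maximizing Q(a) + U(a).

    node_visits: visit count N of the parent node
    child_stats: action -> (visit_count, total_value)
    priors:      action -> prior probability p(a) from the policy network

    U(a) = c_puct * p(a) * sqrt(N) / (1 + n(a)), so a near-zero prior
    means a near-zero exploration bonus: "absurd" moves are starved of visits.
    """
    best_action, best_score = None, -float("inf")
    sqrt_n = math.sqrt(node_visits)
    for action, (n, w) in child_stats.items():
        q = w / n if n > 0 else 0.0               # mean value estimate Q(a)
        u = c_puct * priors.get(action, 0.0) * sqrt_n / (1 + n)
        if q + u > best_score:
            best_action, best_score = action, q + u
    return best_action

# A move near the action dominates a remote corner move with a tiny prior:
stats = {"near_action": (0, 0.0), "far_corner": (0, 0.0)}
print(puct_select(10, stats, {"near_action": 0.9, "far_corner": 0.01}))
# prints "near_action"
```

In a full AlphaGo Zero-style loop the same network also supplies a value estimate that replaces random play-outs, but the prior-weighted selection above is the part that directly addresses the branching-factor problem.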
Finally, there's definitely some published research on Connect6 AI (and even MCTS in Connect6). For example: *Two-Stage Monte Carlo Tree Search for Connect6*. You can likely find more relevant research by checking that paper's list of references, and by looking on Google Scholar for later papers that cite it.