When it comes to any new technology that Google releases, whether that's the updated broad match (which now uses more signals in conjunction with smart bidding), pmax, or anything in between, there is no fixed point in time after release when it starts performing well. At some point there is enough critical mass: enough advertisers have adopted the product, and with that data Google is able to make it exponentially better. This is another reason (besides, obviously, increasing Google's revenue) why their sales reps get targets and are incentivized to push advertisers to adopt these new products: to reach that critical mass.
When smart bidding became available to the general audience in 2016, it initially underperformed, often losing in head-to-head A/B tests. Advertisers were understandably skeptical. Over time, Google collected more data and improved the product, which began to perform better in experiments. This feedback loop of data collection and improvement continued, making the product exponentially better. Now smart bidding, given sufficient data, outperforms manual bidding in 90% of experiments (though recently I've seen some really thoughtful and interesting examples of manual CPC winning again).
Because there is no fixed point in time when a new technology beats the status quo, and because we do not know when the product has reached critical mass, the idea is to have a continuous layer in your Google Ads setup with the sole focus of testing these new technologies, but with a low experimental budget. Running these smaller tests and experiments back to back over time lets you identify exactly when, or even if, the new technology starts to outperform. You keep the risk low until it does, and then scale it towards a higher evergreen budget.
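To make that decision rule concrete, here is a minimal, hypothetical Python sketch of the "scale only once it consistently wins" logic. The `ExperimentPeriod` structure, function names, and thresholds are illustrative assumptions of mine, not part of the Google Ads API or any official tooling.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "continuous testing layer" idea:
# keep the new technology on a small experimental budget and only
# scale it once it has beaten the evergreen control for several
# periods in a row. All names and thresholds are assumptions.

@dataclass
class ExperimentPeriod:
    control_cpa: float      # cost per acquisition of the evergreen (control) setup
    experiment_cpa: float   # cost per acquisition of the new technology being tested


def should_scale(
    periods: list[ExperimentPeriod],
    required_wins: int = 3,        # consecutive periods the experiment must win
    min_improvement: float = 0.05  # experiment CPA must be at least 5% lower
) -> bool:
    """Return True once the new technology has consistently outperformed."""
    if len(periods) < required_wins:
        return False
    recent = periods[-required_wins:]
    return all(
        p.experiment_cpa < p.control_cpa * (1 - min_improvement)
        for p in recent
    )


# Example: weekly results while the experiment runs on a small budget share.
history = [
    ExperimentPeriod(control_cpa=40.0, experiment_cpa=48.0),  # new tech still losing
    ExperimentPeriod(control_cpa=41.0, experiment_cpa=38.5),
    ExperimentPeriod(control_cpa=40.0, experiment_cpa=36.5),
    ExperimentPeriod(control_cpa=42.0, experiment_cpa=37.0),
]

if should_scale(history):
    print("Experiment is consistently winning: scale towards the evergreen budget.")
else:
    print("Keep the experiment on its low test budget.")
```

The point of requiring several consecutive winning periods, rather than reacting to one good week, is exactly the continuity described above: you stay low risk until the new technology proves itself, and only then move it into the evergreen budget.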