Assessing the exploitability of software vulnerabilities at the time of disclosure is difficult and error-prone, as features extracted via technical analysis by existing metrics are poor predictors for exploit development. Moreover, exploitability assessments suffer from a class bias because ‘not exploitable’ labels could be inaccurate. To overcome these challenges, we propose a new metric, called Expected Exploitability (EE), which reflects, over time, the likelihood that functional exploits will be developed. Key to our solution is a time-varying view of exploitability, a departure from existing metrics, which allows us to learn EE using data-driven techniques from artifacts published after disclosure, such as technical write-ups, proof-of-concept exploits, and social media discussions. Our analysis reveals that prior features proposed for related exploit prediction tasks are not always beneficial for predicting functional exploits, and we design novel feature sets to capitalize on previously under-utilized artifacts. This view also allows us to investigate the effect of label biases on the classifiers. We characterize the noise-generating process for exploit prediction, showing that our problem is subject to class- and feature-dependent label noise, considered the most challenging type. By leveraging domain-specific observations, we then develop techniques to incorporate noise robustness into learning EE. On a dataset of 103,137 vulnerabilities, we show that EE increases precision from 49% to 86% over existing metrics, including two state-of-the-art exploit classifiers, while the performance of our metric also improves over time. EE scores capture exploitation imminence by distinguishing exploits that will be developed in the near future. Finally, we show the practical utility of our system through a cyber-warfare game simulation, where players using EE instead of static metrics improve their strategy.