The last decade has witnessed the impressive development of machine learning (ML) techniques, successfully applied to traditional statistical learning tasks such as image recognition and leading to breakthroughs like the famous AlphaGo system. Motivated by these successes, many scientific disciplines have started to investigate the potential of exploiting large amounts of data through ML techniques in their own context. Combinatorial optimization (CO) has been no exception to this trend, and the use of ML in CO has been analyzed from many different angles with varying levels of success. In the first part of the talk, we will review the state of the art of this research direction, assessing the level of maturity reached by the integration of ML techniques in CO and discussing the open challenges. In the second part, we will discuss a tight integration between learning and optimization, developed in three steps. First, Neural Networks (NNs) are used to learn a representation of some constraints of a CO problem. Second, mathematical programming techniques are used to prune the NNs so as to obtain a more manageable constraint representation. Third, the resulting CO problem with learned constraints is solved by a solver, in this specific case Gurobi.
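
To make the third step concrete, the sketch below shows one standard way a small pre-trained ReLU network can be embedded as mixed-integer linear constraints in a Gurobi model (a big-M encoding of each ReLU unit). The network weights, sizes, variable bounds, and the big-M value are placeholder assumptions for illustration only, not the speaker's actual model or formulation.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Placeholder weights of a tiny trained ReLU network (2 inputs -> 2 hidden -> 1 output).
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, -1.2]])
b2 = np.array([0.05])

M = 100.0  # big-M bound; assumes all pre-activations lie in [-M, M]

m = gp.Model("co_with_learned_constraint")

# Decision variables of the CO problem (bounded so the big-M encoding is valid).
x = m.addVars(2, lb=-10, ub=10, name="x")

# Hidden layer: pre-activation z, post-activation h = max(z, 0), and ReLU indicator s.
z = m.addVars(2, lb=-GRB.INFINITY, name="z")
h = m.addVars(2, lb=0.0, name="h")
s = m.addVars(2, vtype=GRB.BINARY, name="s")

for j in range(2):
    # Affine pre-activation: z_j = W1[j] . x + b1[j]
    m.addConstr(z[j] == gp.quicksum(W1[j, i] * x[i] for i in range(2)) + b1[j])
    # Big-M linearization of h_j = max(z_j, 0)
    m.addConstr(h[j] >= z[j])
    m.addConstr(h[j] <= z[j] + M * (1 - s[j]))
    m.addConstr(h[j] <= M * s[j])

# Network output, imposed as a learned constraint on the CO problem: y(x) <= 0.
y = gp.quicksum(W2[0, j] * h[j] for j in range(2)) + b2[0]
m.addConstr(y <= 0.0, name="learned_constraint")

# Placeholder objective of the original CO problem.
m.setObjective(x[0] + x[1], GRB.MAXIMIZE)
m.optimize()
```

Pruning the network (step two) shrinks the number of ReLU units, and hence the number of binary variables and big-M constraints in encodings like this one, which is what keeps the resulting model tractable for the solver.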