Even though synthetic datasets are promising for protecting users' data, they do not by themselves provide guarantees against attribute inference attacks. As future work, we suggest that applying a fairness regularization term during generator training could be a way to remove bias toward sensitive attributes. Concerning membership inference attacks, synthetic data reduce the overall risk while still leaving an attack surface on some outlier points. Differential privacy is a way to reduce the risk for outliers, but removing the risk entirely while keeping some level of utility is impossible. Hence, more work in this direction is required.
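As a minimal sketch of the fairness-regularization idea mentioned above, one could penalize the generator whenever its synthetic outputs differ, on average, across groups of a binary sensitive attribute (a demographic-parity gap). All function names, the `lam` weight, and the penalty choice below are illustrative assumptions, not a prescribed method:

```python
import numpy as np

def fairness_penalty(outcomes, sensitive):
    """Demographic-parity gap |E[y | s=1] - E[y | s=0]| on a generated batch.

    outcomes:  model outputs for the synthetic samples (1-D array)
    sensitive: binary sensitive attribute of those samples (1-D array of 0/1)
    (hypothetical helper; a mutual-information or correlation penalty
    could be substituted)
    """
    return abs(outcomes[sensitive == 1].mean()
               - outcomes[sensitive == 0].mean())

def regularized_generator_loss(base_loss, outcomes, sensitive, lam=1.0):
    # Total loss = usual generator loss + lam * fairness penalty;
    # lam trades synthetic-data utility against bias removal.
    return base_loss + lam * fairness_penalty(outcomes, sensitive)

# Toy batch: group s=1 averages 0.5, group s=0 averages 1.0 -> gap 0.5
outcomes = np.array([1.0, 0.0, 1.0, 1.0])
sensitive = np.array([1, 1, 0, 0])
total = regularized_generator_loss(0.2, outcomes, sensitive, lam=0.5)
```

In a real pipeline this penalty would be computed on each mini-batch of generated samples and backpropagated through the generator alongside its adversarial or likelihood loss.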