Learning word representations from large corpora relies on the distributional hypothesis that words appearing in similar contexts tend to have similar meanings. Recent work has shown that word representations learnt in this manner lack sentiment information, which can be introduced using external knowledge. Our work addresses the question: can affect lexica improve word representations learnt from a corpus? We propose techniques to incorporate affect lexica, which capture fine-grained information about a word's psycholinguistic and emotional orientation, into the training of Word2Vec SkipGram, Word2Vec CBOW, and GloVe using a joint learning approach. We use affect scores from Warriner's affect lexicon to regularize the vector representations learnt from an unlabeled corpus. Our proposed method outperforms previous methods on standard tasks for word similarity detection, outlier detection, and sentiment analysis. We also show the usefulness of our approach for the prediction of formality, frustration, and politeness in text.
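To make the joint-learning idea concrete, below is a minimal sketch of one way an affect regularizer could be combined with a SkipGram negative-sampling objective. It is an illustrative assumption, not the paper's exact objective: the projection layer `affect_proj`, the weight `lam`, the class name, and the use of an L2 penalty toward Warriner-style valence/arousal/dominance scores are all choices made for the example.

```python
# Illustrative sketch (assumed formulation): jointly train SkipGram
# negative-sampling embeddings while regularizing words that appear in an
# affect lexicon toward their affect scores (e.g., V/A/D ratings).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffectRegularizedSkipGram(nn.Module):
    def __init__(self, vocab_size, dim, affect_dim=3, lam=0.1):
        super().__init__()
        self.in_emb = nn.Embedding(vocab_size, dim)    # target-word vectors
        self.out_emb = nn.Embedding(vocab_size, dim)   # context-word vectors
        self.affect_proj = nn.Linear(dim, affect_dim)  # embedding -> affect space (assumed)
        self.lam = lam                                 # regularization weight (assumed)

    def forward(self, center, context, negatives, affect_scores, affect_mask):
        v = self.in_emb(center)                        # (B, dim)
        u_pos = self.out_emb(context)                  # (B, dim)
        u_neg = self.out_emb(negatives)                # (B, K, dim)

        # Standard SkipGram negative-sampling loss.
        pos = F.logsigmoid((v * u_pos).sum(-1))
        neg = F.logsigmoid(-(u_neg @ v.unsqueeze(-1)).squeeze(-1)).sum(-1)
        sgns_loss = -(pos + neg).mean()

        # Affect regularizer: pull lexicon words toward their affect scores;
        # the mask zeroes the term for words absent from the lexicon.
        pred = self.affect_proj(v)                     # (B, affect_dim)
        reg = ((pred - affect_scores) ** 2).sum(-1)
        reg_loss = (affect_mask * reg).mean()

        return sgns_loss + self.lam * reg_loss

# Example forward pass with toy shapes (batch of 4, 5 negatives per example).
model = AffectRegularizedSkipGram(vocab_size=10000, dim=100)
loss = model(
    center=torch.randint(0, 10000, (4,)),
    context=torch.randint(0, 10000, (4,)),
    negatives=torch.randint(0, 10000, (4, 5)),
    affect_scores=torch.rand(4, 3),               # e.g., normalized V/A/D ratings
    affect_mask=torch.tensor([1., 0., 1., 1.]),   # 0 for out-of-lexicon words
)
loss.backward()
```

An analogous penalty could be attached to the CBOW or GloVe objectives; the essential design choice sketched here is that corpus-based and lexicon-based signals are optimized together rather than applied as a post-hoc retrofitting step.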