Recent advances in large language models (LLMs) have significantly improved the quality of text representations, enabling breakthroughs in dense retrieval, semantic search, and a range of downstream natural language processing tasks. However, leveraging LLMs for effective text embeddings faces persistent challenges, including architectural constraints such as causal attention, misalignment between pre-training and embedding objectives, and limited support for multilingual scenarios. This research addresses these challenges through two complementary contributions. First, we introduce ULLME, a unified framework that enables bidirectional attention and supports diverse fine-tuning strategies, including our novel Generation-augmented Representation Learning (GRL), which aligns embedding and generation objectives to produce richer text embeddings. ULLME consistently outperforms previous methods across a wide range of benchmarks and LLM architectures. Second, we present LUSIFER, a zero-shot multilingual adaptation framework that integrates a multilingual encoder with an LLM-based embedding model via a lightweight connector. Without requiring multilingual supervision, LUSIFER achieves strong multilingual and cross-lingual performance, particularly for medium- and low-resource languages, as demonstrated on a comprehensive benchmark covering 123 datasets in 14 languages. Together, these contributions advance the state of the art in text representation learning with LLMs by providing both a flexible, high-performance embedding framework and a practical solution for multilingual and cross-lingual embedding tasks.