Kang Min Yoo
NAVER Hyperscale AI & AI Lab
Verified email at navercorp.com
Title
Cited by
Year
GPT3Mix: Leveraging large-scale language models for text augmentation
KM Yoo, D Park, J Kang, SW Lee, W Park
arXiv preprint arXiv:2104.08826, 2021
240 · 2021
TaleBrush: Sketching stories with generative pretrained language models
JJY Chung, W Kim, KM Yoo, H Lee, E Adar, M Chang
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022
231 · 2022
Learning to compose task-specific tree structures
J Choi, KM Yoo, S Lee
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
230* · 2018
Self-guided contrastive learning for BERT sentence representations
T Kim, KM Yoo, S Lee
arXiv preprint arXiv:2106.07345, 2021
215 · 2021
What changes can large-scale language models bring? Intensive study on HyperCLOVA: Billions-scale Korean generative pretrained transformers
B Kim
arXiv preprint arXiv:2109.04650, 2021
116 · 2021
Ground-truth labels matter: A deeper look into input-label demonstrations
KM Yoo, J Kim, HJ Kim, H Cho, H Jo, SW Lee, S Lee, T Kim
arXiv preprint arXiv:2205.12685, 2022
88 · 2022
Data augmentation for spoken language understanding via joint variational generation
KM Yoo, Y Shin, S Lee
Proceedings of the AAAI conference on artificial intelligence 33 (01), 7402-7409, 2019
86 · 2019
DialogBERT: Discourse-aware response generation via learning to recover and rank utterances
X Gu, KM Yoo, JW Ha
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12911 …, 2021
81 · 2021
Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization
J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee
Advances in Neural Information Processing Systems 36, 2024
77 · 2024
Aligning large language models through synthetic feedback
S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo
arXiv preprint arXiv:2305.13735, 2023
57 · 2023
Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator
HJ Kim, H Cho, J Kim, T Kim, KM Yoo, S Lee
arXiv preprint arXiv:2206.08082, 2022
56 · 2022
AlphaTuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models
SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ...
arXiv preprint arXiv:2210.03858, 2022
33 · 2022
Mutual information divergence: A unified metric for multimodal generative models
JH Kim, Y Kim, J Lee, KM Yoo, SW Lee
Advances in Neural Information Processing Systems 35, 35072-35086, 2022
27 · 2022
Critic-guided decoding for controlled text generation
M Kim, H Lee, KM Yoo, J Park, H Lee, K Jung
arXiv preprint arXiv:2212.10938, 2022
24 · 2022
Response generation with context-aware prompt learning
X Gu, KM Yoo, SW Lee
arXiv preprint arXiv:2111.02643, 2021
24 · 2021
Leveraging class hierarchy in fashion classification
H Cho, C Ahn, KM Yoo, J Seol, S Lee
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
24 · 2019
Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners
H Cho, HJ Kim, J Kim, SW Lee, S Lee, KM Yoo, T Kim
Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12709 …, 2023
21 · 2023
Variational hierarchical dialog autoencoder for dialog state tracking data augmentation
KM Yoo, H Lee, F Dernoncourt, T Bui, W Chang, S Lee
arXiv preprint arXiv:2001.08604, 2020
19 · 2020
KMMLU: Measuring massive multitask language understanding in Korean
G Son, H Lee, S Kim, S Kim, N Muennighoff, T Choi, C Park, KM Yoo, ...
arXiv preprint arXiv:2402.11548, 2024
18 · 2024
Generating information-seeking conversations from unlabeled documents
G Kim, S Kim, KM Yoo, J Kang
arXiv preprint arXiv:2205.12609, 2022
16* · 2022
Articles 1–20