NAVER conducts fundamental research that enables us to deliver new user experiences and address real-world challenges. Our research teams work across many areas: from HyperCLOVA X, NAVER’s generative AI, to AI safety, which has gained traction in recent years, to advanced AI capabilities that open the door to new possibilities.
Our machine learning research covers ML models that support different modalities, algorithms (including optimization), and AI safety.
We take a broad approach to research in deep learning and fundamental AI.
Our research goal for language models is to create safe and trustworthy models aligned with the values of humanity and society while driving AI innovation. Our research team collaborates with academia and industry to tackle both foundational and real-world challenges.
We aim to push the boundaries of visual generation and develop models that produce high-quality content. We hope to spark creativity in our users and make AI available in every part of the world.
Our HCI research focuses on understanding ways to design AI technologies that will benefit end users when embedded into computing systems.
NAVER runs the Future AI Center, which is dedicated to research on AI safety. We collaborate with researchers and institutions worldwide to pursue responsible AI and shape trustworthy policies around the technology.
We work to enhance the capabilities of AI models with quality datasets grounded in safety research. We also take various technological and policy actions to advance safe and responsible AI development.
01. Recognize the potential risks of advanced AI capabilities and implement risk assessment and management processes.
02. Foster collaboration among cross-functional teams across the lifecycle of AI systems.
03. Establish management decision-making processes and work with external stakeholders to address AI safety challenges.
We build datasets and benchmarks and make them available to everyone for safe AI research and development. Our efforts are focused on creating a healthy AI ecosystem.
Build AI datasets and benchmarks for safe and trustworthy AI R&D.
Prompt LLMs to generate harmful content to expose their vulnerabilities.
Take a comprehensive approach to attack and defense strategies and detection technologies to enhance the security of AI systems.
Align AI models with human values during training and inference.