
SignLLM

Get the code

Language

Python, Jupyter Notebook

Tool Type

Algorithm

License

Creative Commons Attribution 4.0

Version

default

About the tool

Responsible

SignLLM Research Collective

What is it?

SignLLM is a language model designed for multilingual sign language production. It offers two modes, MLSF and Prompt2LangGloss, that generate sign language gestures from texts and questions, and it uses a reinforcement learning approach to improve the quality of training and data generation. SignLLM was developed with Prompt2Sign, a multilingual dataset covering American Sign Language and seven other sign languages. The dataset standardizes information by extracting poses from videos into a unified format, and the model achieves strong performance on sign language production tasks.

What problems does it solve?

SignLLM addresses the problem of generating sign language gestures from query texts and questions, facilitating inclusive communication. It uses a multilingual large language model and a novel reinforcement learning approach to improve training quality and efficiency across languages.

How does the tool work?

SignLLM works in two modes, MLSF and Prompt2LangGloss, which generate sign language gestures from texts and questions. Training uses reinforcement learning and is supported by the Prompt2Sign dataset, which standardizes pose information extracted from videos in eight sign languages.
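The two modes can be pictured as two text-to-pose paths: Prompt2LangGloss goes through an intermediate gloss sequence, while MLSF maps text to pose frames directly. The sketch below is purely illustrative; the function names and the toy word-level mapping are not the project's actual API, and real outputs are learned 3D pose sequences, not placeholders.

```python
# Illustrative sketch of SignLLM's two generation modes (hypothetical API;
# the real model lives in the SignLLM/Prompt2Sign repository).

def prompt2langgloss(text: str) -> list[str]:
    # Prompt2LangGloss mode: text is first mapped to an intermediate
    # gloss sequence (sign-language word order), which then drives
    # pose generation. A toy uppercase mapping stands in for the model.
    return [word.upper() for word in text.split()]

def mlsf(text: str) -> list[list[float]]:
    # MLSF mode: a direct multilingual text-to-pose path. Here each
    # word yields one dummy 3-value frame instead of real 3D joints.
    return [[0.0, 0.0, 0.0] for _ in text.split()]

glosses = prompt2langgloss("hello world")   # intermediate representation
poses = mlsf("hello world")                  # direct pose frames
```

Either path ends in a pose sequence that downstream tooling renders as synthetic video or a 3D model, as described in the dataset section below.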


Sector
Science and Technology
Education
Functionality
Methodological resources
Sustainable development goals
Quality education
Industry innovation and infrastructure
Reduced inequalities
Get the code for this project

Connect with the Code for Development team and discover how our carefully curated open source tools can support your institution in Latin America and the Caribbean. Contact us to explore solutions, resolve implementation issues, share reuse successes, or present a new tool. Write to [email protected]

Contact us
GitHub Popularity Growth of SignLLM/Prompt2Sign

SignLLM has gained over 160 GitHub stars since launch, showing strong interest in multilingual sign language generation.

Data Preparation for Training with Prompt2Sign

This notebook walks through data preparation from OpenPose JSON files, cloning the Prompt2Sign repo, and setting up the environment to generate poses from sign language videos.
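The OpenPose JSON files the notebook starts from store each detected person's keypoints as a flat `[x, y, confidence, x, y, confidence, ...]` list under keys such as `pose_keypoints_2d`. A minimal sketch of regrouping that layout into per-joint triples (the helper name is illustrative, not from the notebook):

```python
import json

def parse_openpose_frame(raw: str) -> list[list[tuple]]:
    """Read one OpenPose per-frame JSON string into per-person joint lists."""
    frame = json.loads(raw)
    people = []
    for person in frame.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        # Regroup the flat list into (x, y, confidence) triples per joint.
        joints = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(joints)
    return people

# One person with two joints, in OpenPose's flat layout:
sample = '{"people": [{"pose_keypoints_2d": [10.0, 20.0, 0.9, 30.0, 40.0, 0.8]}]}'
people = parse_openpose_frame(sample)
```

OpenPose writes analogous `hand_left_keypoints_2d`, `hand_right_keypoints_2d`, and `face_keypoints_2d` arrays, which matter for sign language since handshape carries most of the signal.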

Prompt2Sign: Dataset and SignLLM Outputs for Sign Language Generation

Prompt2Sign includes text, prompts, video frames, and compressed 3D poses across 8 sign languages. SignLLM generates digital pose data from text or prompts, outputting synthetic videos or 3D models.
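One way to picture a Prompt2Sign-style sample is as a record pairing text, a prompt variant, and a compressed pose sequence for one of the eight languages. The field names below are hypothetical, for illustration only; the dataset's actual file layout is documented in the repository.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one Prompt2Sign-style sample (illustrative names,
# not the dataset's real schema).

@dataclass
class SignSample:
    language: str                # one of the 8 sign languages, e.g. "ASL"
    text: str                    # source sentence
    prompt: str                  # instruction-style prompt variant of the text
    pose_frames: list = field(default_factory=list)  # compressed 3D pose data

sample = SignSample(
    language="ASL",
    text="good morning",
    prompt="Translate to American Sign Language: good morning",
)
sample.pose_frames.append([0.0, 0.0, 0.0])  # one placeholder pose frame
```

Keeping text, prompt, and poses in one record mirrors how SignLLM is trained: from text or prompts in, to pose data out, which is then rendered as synthetic video or a 3D model.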

Pavimentados
Optimizing road maintenance and signaling with computer vision.

Transport
Geolocation
Image processing
UrbanPy
Simplifying urban data collection and analysis for effective planning.

Urban Development and Housing
Geolocation
Database management
Víasegura
Improving road safety with automatic problem detection.

Transport
Simulators
Congestiómetro
Improving urban mobility with real-time traffic analysis.

Transport
Geolocation
Distancia2
Using AI to improve the management of social distancing in pandemics.

Reform or Modernization of the State
Image processing
Deepen your knowledge on the implementation of tools in the public sector with our courses, guides and many other resources.
Be part of the community