# language-model

Here are 875 public repositories matching this topic...

transformers
haystack
ZanSara commented Mar 16, 2022

Problem
Currently FARMReader asks users to raise max_seq_length every time some samples are longer than the value it is set to. However, this is confusing when max_seq_length is already at the maximum the model allows, because raising it further causes hard-to-read CUDA errors.

See #2177.

Solution
We should find a way to query the model for the maximum value it supports.

type:feature good first issue topic:models journey:intermediate
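
The fix the issue asks for is to read the limit from the model itself rather than telling users to raise max_seq_length blindly. Below is a minimal sketch of one way to query that limit with the transformers library; this is not haystack's actual implementation, and the model name and requested length are illustrative:

```python
from transformers import AutoConfig, AutoTokenizer

model_name = "deepset/roberta-base-squad2"  # illustrative reader model
requested_max_seq_len = 512                 # illustrative user setting

config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Two places the limit is usually exposed; some architectures reserve a few
# positions (e.g. RoBERTa), so the usable limit can be slightly lower than
# max_position_embeddings.
position_limit = getattr(config, "max_position_embeddings", None)
tokenizer_limit = tokenizer.model_max_length

if position_limit is not None and requested_max_seq_len >= position_limit:
    print(
        f"max_seq_length={requested_max_seq_len} is at or above the model's "
        f"positional limit ({position_limit}); raising it further will not help."
    )
```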
yt605155624 commented Jan 6, 2022

The current polyphonic-character (多音字) handling uses pypinyin or g2pM, whose accuracy is limited. The idea is to build a BERT-based (or ERNIE-based) polyphone prediction model. Roughly: suppose the language has 100 polyphonic characters, each with at most 3 pronunciations; then we can attach 100 three-way classifiers (a simple fc layer each) on top of BERT, and at prediction time route each polyphonic character to its own classifier.
Reference paper:
tencent_polyphone.pdf

The data provided by https://github.com/kakaobrain/g2pM can be used.

Going further: a multi-task BERT
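A rough sketch of the head-per-character idea described above, written with torch/transformers for brevity rather than the project's own framework; the class name, tensor layout, and the choice of bert-base-chinese are assumptions:

```python
import torch
import torch.nn as nn
from transformers import BertModel


class PolyphoneClassifier(nn.Module):
    """Shared BERT encoder with one pronunciation classifier per polyphonic
    character (here: 100 characters, at most 3 pronunciations each)."""

    def __init__(self, num_chars=100, num_prons=3, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # One simple fc head per polyphonic character, as the issue suggests.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_prons) for _ in range(num_chars)]
        )

    def forward(self, input_ids, attention_mask, char_positions, char_ids):
        # char_positions: token index of the polyphonic character in each sample
        # char_ids: which of the num_chars heads to use for each sample
        hidden_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        batch_idx = torch.arange(input_ids.size(0))
        char_vecs = hidden_states[batch_idx, char_positions]  # (batch, hidden)
        logits = torch.stack(
            [self.heads[c](vec) for c, vec in zip(char_ids.tolist(), char_vecs)]
        )
        return logits  # (batch, num_prons); train with per-sample cross-entropy
```

The "multi-task BERT" extension mentioned above would keep the shared encoder and add heads for additional tasks.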

Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC)

  • Updated Feb 24, 2022
tonigi commented Feb 15, 2022

Describe the bug
Setting "text-gen-type": "interactive" results in an IndexError: : shape mismatch: indexing tensors could not be broadcast together with shapes [4], [3]. Other generation types work.

To Reproduce
Steps to reproduce the behavior:

  1. Install, adapt the 20B setup to the local environment, and add the "text-gen-type": "interactive" config option (see the sketch after this issue)
  2. Run inference
  3. Enter an arbitrary prompt when prompted
bug good first issue
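
For context, the failure is triggered purely by the generation config. Here is a minimal sketch of the relevant setting, assuming the repository's text-generation config format; key names other than "text-gen-type" and all values are illustrative rather than taken from the report:

```json
{
  "text-gen-type": "interactive",
  "maximum_tokens": 64,
  "temperature": 0.9
}
```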
