Live Quiz Arena
Question
Why does a language model trained for text-to-speech sometimes produce unexpected emphasis on certain words in its spoken output?
A) Incomplete data augmentation degrades the signal
B) Prosodic transfer induces pitch errors ✓
C) WaveNet vocoders amplify spectral artifacts
D) Attention mechanisms reduce semantic coherence
💡 Explanation
Prosodic transfer explains the phenomenon: pitch patterns from the training data are inadvertently carried over, so the model emphasizes words according to learned (and often inappropriate) melodic contours rather than contextual semantics or syntax. The culprit is prosodic transfer, not data augmentation.
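The mechanism can be caricatured in a few lines of Python. This is a toy sketch with hypothetical names, not a real TTS pipeline (actual systems learn prosody with neural acoustic models, not positional averages): a model "learns" an average pitch per word position from its training sentences, then applies that positional template to any new sentence, so emphasis lands by position rather than by meaning.

```python
# Toy illustration of prosodic transfer (hypothetical, not a real TTS API).

def learn_contour(training_pitches):
    """Average pitch (Hz) per word position across training sentences."""
    n = min(len(p) for p in training_pitches)
    return [sum(p[i] for p in training_pitches) / len(training_pitches)
            for i in range(n)]

def apply_contour(words, contour):
    """Assign each word the learned pitch for its position (cycled)."""
    return {w: contour[i % len(contour)] for i, w in enumerate(words)}

# Training data: sentences where the 2nd word happened to carry high pitch.
training_pitches = [
    [110, 180, 120, 115],   # e.g. "I REALLY like tea"
    [105, 175, 118, 112],   # e.g. "We TOTALLY agree now"
]
contour = learn_contour(training_pitches)

# New sentence where semantics call for emphasis on "never":
pitched = apply_contour(["I", "have", "never", "said", "that"], contour)
emphasized = max(pitched, key=pitched.get)
print(emphasized)  # the learned template emphasizes position 2 ("have"), not "never"
```

The misplaced emphasis falls on whichever word sits in the training data's high-pitch slot, which is exactly the positional (melodic) rather than semantic behavior the explanation describes.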
Related Questions
- A compiler optimizing code parses a complex expression. If the syntax tree's depth exceeds a predefined limit during semantic analysis, which consequence follows?
- Which property explains why Haitian Creole exhibits grammatical structures derived from West African languages rather than exclusively from French, its lexifier language?
- Which consequence results when a laryngeal consonant weakens through lenition?
- Why does flapping (t → ɾ) in American English occur in 'butter' but not in 'tub'?
- Why does a language acquisition model relying solely on distributional semantics for dictionary creation struggle with polysemous words, even with a large corpus?
- Why does phonetic adaptation between dialects involve modifying phoneme boundaries, rather than simply adopting new, distinct phonemes?
