ESpeak-ng: speech synthesizer with more than one hundred languages and accents

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • piper

    A fast, local neural text to speech system (by rhasspy)

  • Based on my own recent experience[0] with espeak-ng, IMO the project is currently in a really tough situation[3]:

    * the project seems to provide real value to a huge number of people who rely on it for reasons of accessibility (even more so for non-English languages); and,

    * the project is a valuable trove of knowledge about multiple languages--collected & refined over multiple decades by both linguistic specialists and everyday speakers/readers; but...

    * the project's code base is very much of "a different era" reflecting its mid-90s origins (on RISC OS, no less :) ) and a somewhat piecemeal development process over the following decades--due in part to a complex Venn diagram of skills, knowledge & familiarity required to make modifications to it.

    Perhaps the prime example of the last point is that `espeak-ng` has a hand-rolled XML parser--which attempts to handle both valid & invalid SSML markup--and markup parsing is interleaved with internal language-related parsing in the code. And this is implemented in C.

    [Aside: Due to this I would strongly caution against feeding "untrusted" input to espeak-ng in its current state but unfortunately that's what most people who rely on espeak-ng for accessibility purposes inevitably do while browsing the web.]

    [TL;DR: More detail/repros/observations on espeak-ng issues here:

    * https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...

    * https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...

    * https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

    ]

    Contributors to the project are not unaware of the issues with the code base (which are exacerbated by the difficulty of even tracing the execution flow in order to understand how the library operates) nor that it would benefit from a significant refactoring effort.

    However, as is typical with projects that greatly benefit individual humans but offer little opportunity for significant corporate financial return, a lack of developers with sufficient skill/knowledge/time to devote to a major refactoring means a "quick workaround" for a specific individual issue is often all that can be managed.

    This is often exacerbated by outdated/unclear/missing documentation.

    IMO there are two contribution approaches that could help the project move forward while requiring the least amount of specialist knowledge/experience:

    * Improve visibility into the code by adding logging/tracing to make it easier to see why a particular code path gets taken.

    * Integrate an existing XML parser as a "pre-processor" to ensure that only valid/"sanitized"/cleaned-up XML is passed through to the SSML parsing code--this would increase robustness/safety and facilitate future removal of XML parsing-specific workarounds from the code base (leading to less tangled control flow) and potentially future removal/replacement of the entire bespoke XML parser.
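    A minimal sketch of what such a pre-processor could look like, using Python's stdlib XML parser purely for illustration (the function name and the fallback behaviour here are my own assumptions, not anything in espeak-ng's API):

```python
# Sketch: validate SSML with a battle-tested XML parser before it ever
# reaches the TTS engine. Malformed input is escaped and wrapped so the
# engine only ever sees well-formed markup.
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

def sanitize_ssml(markup: str) -> str:
    """Return well-formed SSML, or the input escaped as literal text."""
    try:
        ET.fromstring(markup)   # raises ParseError on invalid XML
        return markup           # already well-formed; pass through as-is
    except ET.ParseError:
        # Treat the whole string as plain text to be spoken.
        return "<speak>" + escape(markup) + "</speak>"

print(sanitize_ssml("<speak>Hello</speak>"))  # valid: passed through
print(sanitize_ssml("<speak>Hello"))          # invalid: escaped
```

    A real integration would need to decide policy questions this sketch dodges (e.g. whether to attempt repair rather than escaping, and which SSML dialect to validate against), but even this much would keep invalid markup out of the hand-rolled C parser.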

    Of course, the project is not short on ideas/suggestions for how to improve the situation but, rather, on direct developer contributions, so... shrug

    In light of this, last year, when I was developing a personal project[0] that used a dependency which in turn used espeak-ng, I wanted to contribute something more tangible than just "ideas", so I began to write up & create reproductions for some of the issues I hit while using espeak-ng, to at least document the current behaviour.

    Unfortunately, while doing so I kept encountering new issues, each leading to yet another round of debugging to try to understand what was happening in the new case.

    Perhaps inevitably this effort eventually stalled--due to a combination of available time, a need to attempt to prioritize income generation opportunities and the downsides of living with ADHD--before I was able to share the fruits of my research. (Unfortunately I seem to be way better at discovering & root-causing bugs than I am at writing up the results...)

    However, I just now used the espeak-ng project being mentioned on HN as a catalyst to at least upload some of my notes/repros to a public repo (see links in TL;DR section above) in the hope that they will be useful to someone who might have the time/inclination to make a more direct code contribution to the project. (Or, you know, prompt someone to offer to fund my further efforts in this area... :) )

    [0] A personal project to "port" my "Dialogue Tool for Larynx Text To Speech" project[1] to use the more recent Piper TTS[2] system which makes use of espeak-ng for transforming text to phonemes.

    [1] https://rancidbacon.itch.io/dialogue-tool-for-larynx-text-to... & https://gitlab.com/RancidBacon/larynx-dialogue/-/tree/featur...

    [2] https://github.com/rhasspy/piper

    [3] Very much no shade toward the project intended.

  • espeak-ng

    eSpeak NG is an open source speech synthesizer that supports more than one hundred languages and accents.

  • After some brief research it seems the issue you're seeing may be a known bug in at least some versions/releases of espeak-ng.

    Here are some potentially related links if you'd like to dig deeper:

    * "questions about mandarin data packet #1044": https://github.com/espeak-ng/espeak-ng/issues/1044

    * "ESpeak NJ-1.51’s Mandarin pronunciation is corrupted #12952": https://github.com/nvaccess/nvda/issues/12952

    * "The pronunciation of Mandarin Chinese using ESpeak NJ in NVDA is not normal #1028": https://github.com/espeak-ng/espeak-ng/issues/1028

    * "When espeak-ng translates Chinese (cmn), IPA tone symbols are not output correctly #305": https://github.com/rhasspy/piper/issues/305

    * "Please default ESpeak NG's voice role to 'Chinese (Mandarin, latin as Pinyin)' for Chinese to fix #12952 #13572": https://github.com/nvaccess/nvda/issues/13572

    * "Cmn voice not correctly translated #1370": https://github.com/espeak-ng/espeak-ng/issues/1370

  • tortoise-tts

    A multi-voice TTS system trained with an emphasis on quality

  • The quality also depends on the type of model. I'm not really sure what eSpeak-ng actually uses. The classical TTS approaches often use some statistical model (e.g. HMM) + some vocoder. You can get to intelligible speech pretty easily, but the quality is bad (w.r.t. how natural it sounds).

    There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...

  • tacotron2

    Tacotron 2 - PyTorch implementation with faster-than-realtime inference

  • Pink-Trombone

    A programmable version of Neil Thapen's Pink Trombone

  • Too late to edit, but for anyone who needs "convincing" of the flexibility of a formant synthesizer, you should 1) play with Pink Trombone[1], a JavaScript formant synthesizer with a UI that lets you graphically manipulate a vocal tract, and 2) have a look at this programmable version of it[2].

    [1] https://dood.al/pinktrombone/

    [2] https://github.com/zakaton/Pink-Trombone

  • DeepSpeech

    DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.

  • As I understand it DeepSpeech is no longer actively maintained by Mozilla: https://github.com/mozilla/DeepSpeech/issues/3693

    For Text To Speech, I've found Piper TTS useful (for situations where "quality" == "realistic"/"natural"): https://github.com/rhasspy/piper

    For Speech to Text (which AIUI DeepSpeech provided), I've had some success with Vosk: https://github.com/alphacep/vosk-api
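    For anyone curious what using Piper looks like in practice: its CLI reads text on stdin and writes a WAV file. A hedged sketch of driving it from Python (the model filename below is an assumption — substitute whichever voice file you've actually downloaded):

```python
# Sketch: build and (if Piper is installed) run a Piper TTS invocation.
# Piper's CLI takes --model and --output_file and reads text from stdin.
import shutil
import subprocess

def piper_cmd(model_path: str, wav_out: str) -> list[str]:
    """Build a Piper invocation; the text is supplied on stdin when run."""
    return ["piper", "--model", model_path, "--output_file", wav_out]

if shutil.which("piper"):  # only attempt synthesis if the binary exists
    subprocess.run(piper_cmd("en_US-lessac-medium.onnx", "hello.wav"),
                   input=b"Hello from Piper.", check=True)
```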

  • piper-phonemize

    C++ library for converting text to phonemes for Piper

  • Yeah, it would be nice if the financial backing behind Rhasspy/Piper led to improvements in espeak-ng too but based on my own development-related experience with the espeak-ng code base (related elsewhere in the thread) I suspect it would be significantly easier to extract the specific required text to phonemes functionality or (to a certain degree) reimplement it (or use a different project as a base[3]) than to more closely/fully integrate changes with espeak-ng itself[4]. :/

    It seems Piper currently abstracts its phonemize-related functionality with a library[0] that makes use of an espeak-ng fork[1].

    Unfortunately it also seems license-related issues may have an impact[2] on whether Piper continues to make use of espeak-ng.

    For your specific example of handling 1984 as a year, my understanding is that espeak-ng can handle situations like that via parameters/configuration, but in my experience there can be unexpected interactions between different configuration/API options[6].
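    One configuration-free way to disambiguate "1984" is at the markup layer, using standard SSML's say-as hint. Whether a given espeak-ng build honours this particular hint is something I haven't verified, so treat this purely as an illustration of the approach (the helper function is mine, not an espeak-ng API):

```python
# Sketch: wrap a number in an SSML <say-as> hint so a compliant engine
# reads it as a year rather than a cardinal number.
def as_year(text: str, number: str) -> str:
    """Wrap the first occurrence of `number` in a say-as date hint."""
    hinted = f'<say-as interpret-as="date" format="y">{number}</say-as>'
    return f"<speak>{text.replace(number, hinted, 1)}</speak>"

print(as_year("It was written in 1984.", "1984"))
```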

    [0] https://github.com/rhasspy/piper-phonemize

    [1] https://github.com/rhasspy/espeak-ng

    [2] https://github.com/rhasspy/piper-phonemize/issues/30#issueco...

    [3] Previously I've made note of some potential options here: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...

    [4] For example, as I note here[5] there's currently at least four different ways to access espeak-ng's phoneme-related functionality--and it seems that they all differ in their output, sometimes consistently and other times dependent on configuration (e.g. audio output mode, spoken punctuation) and probably also input. :/

    [5] https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...

    [6] For example, see my test cases for some other numeric-related configuration options here: https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
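    To see the output divergence footnote [4] describes for yourself, two of the CLI routes are easy to compare: `-x` (phoneme mnemonics) and `--ipa` (IPA output), both with `-q` to suppress audio. These flags are documented in current espeak-ng releases; the wrapper below is just a convenience sketch:

```python
# Sketch: compare espeak-ng's phoneme-mnemonic output (-x) against its
# IPA output (--ipa) for the same input text, with audio suppressed (-q).
import shutil
import subprocess

def phoneme_cmd(text: str, ipa: bool = False) -> list[str]:
    """Build an espeak-ng invocation that prints phonemes, no audio."""
    flag = "--ipa" if ipa else "-x"
    return ["espeak-ng", "-q", flag, text]

if shutil.which("espeak-ng"):  # only run if espeak-ng is installed
    for use_ipa in (False, True):
        subprocess.run(phoneme_cmd("hello", use_ipa), check=True)
```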

  • espeak-ng

    eSpeak NG is an open source speech synthesizer that supports more than one hundred languages and accents. (by rhasspy)

  • nvda

    NVDA, the free and open source Screen Reader for Microsoft Windows

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • Ask HN: Open-source, local Text-to-Speech (TTS) generators

    2 projects | news.ycombinator.com | 7 May 2024
  • WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper

    9 projects | news.ycombinator.com | 17 Jan 2024
  • [P] Making a TTS voice, HK-47 from Kotor using Tortoise (Ideally WaveRNN)

    2 projects | /r/MachineLearning | 6 Jul 2023
  • Is there a good text to speech program for linux?

    6 projects | /r/linux | 22 Jun 2023
  • Vietnamese Phonology

    1 project | /r/VulgarLang | 22 Jun 2023